Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---
3,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Network
In this second exercise notebook we will play with a Convolutional Neural Network (CNN).
As you should have seen, a CNN is a feed-forward neural network typically composed of Convolutional, MaxPooling and Dense layers.
If the task implemented by the CNN is a classification task, the last Dense layer should use the Softmax activation, and the loss should be the categorical crossentropy.
Reference: https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py
Step1: To reduce the risk of overfitting, we also apply some image transformations, such as rotations, shifts and flips. All of these can be easily implemented using the Keras ImageDataGenerator.
Warning: The following cells may be computationally intensive.
Step2: Now we can start training.
At each iteration, a batch of 500 images is requested from the ImageDataGenerator object and fed to the network. | Python Code:
from keras.datasets import cifar10
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
nb_classes = 10  # CIFAR10 is labeled over 10 categories
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
X_train /= 255
X_test /= 255
Explanation: Convolutional Neural Network
In this second exercise notebook we will play with a Convolutional Neural Network (CNN).
As you should have seen, a CNN is a feed-forward neural network typically composed of Convolutional, MaxPooling and Dense layers.
If the task implemented by the CNN is a classification task, the last Dense layer should use the Softmax activation, and the loss should be the categorical crossentropy.
Reference: https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py
Training the network
We will train our network on the CIFAR10 dataset, which contains 50,000 32x32 color training images, labeled over 10 categories, and 10,000 test images.
As this dataset is also included in Keras datasets, we just ask the keras.datasets module for the dataset.
Training and test images are normalized to lie in the $\left[0,1\right]$ interval.
End of explanation
from keras.preprocessing.image import ImageDataGenerator
generated_images = ImageDataGenerator(
featurewise_center=True, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=True, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.2, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
generated_images.fit(X_train)
Explanation: To reduce the risk of overfitting, we also apply some image transformations, such as rotations, shifts and flips. All of these can be easily implemented using the Keras ImageDataGenerator.
Warning: The following cells may be computationally intensive.
End of explanation
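The training loop further below calls model.train_on_batch, but model itself is never defined in this excerpt. The following is a minimal sketch of a CNN consistent with the description above (Convolutional, MaxPooling and Dense layers, softmax output, categorical crossentropy loss); the layer sizes, the optimizer and the Keras 2 layer names are illustrative assumptions, not taken from the original notebook.
```
# Hypothetical model definition (illustrative sketch, not from the original notebook).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=X_train.shape[1:]),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(nb_classes, activation='softmax'),  # softmax output over the 10 CIFAR10 classes
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
```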
X_train.shape
gen = generated_images.flow(X_train, Y_train, batch_size=500, shuffle=True)
X_batch, Y_batch = next(gen)
X_batch.shape
from keras.utils import generic_utils
n_epochs = 2
for e in range(n_epochs):
    print('Epoch', e)
    print('Training...')
    progbar = generic_utils.Progbar(X_train.shape[0])
    samples_seen = 0
    # `model` must already be defined and compiled (e.g. as sketched above).
    for X_batch, Y_batch in generated_images.flow(X_train, Y_train, batch_size=500, shuffle=True):
        loss = model.train_on_batch(X_batch, Y_batch)
        progbar.add(X_batch.shape[0], values=[('train loss', loss[0])])
        samples_seen += X_batch.shape[0]
        if samples_seen >= X_train.shape[0]:
            break  # the generator yields batches forever, so stop after one pass over the data
Explanation: Now we can start training.
At each iteration, a batch of 500 images is requested from the ImageDataGenerator object and fed to the network.
End of explanation |
3,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PREPROCESSING
Clean article collection
Step2: Save article information in a table
Step3: Remove short (usually advertisements), Guardian (British news), Stack of Stuff (list of links), and duplicate articles
Step4: Clean article content
Step5: Remove special characters, tokenize and lemmatize the articles, and remove stop and miscellaneous words | Python Code:
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2
import newspaper
from datetime import datetime
import pickle
import pandas as pd
import numpy as np
with open ("bubble_popper_postgres.txt","r") as myfile:
lines = [line.replace("\n","") for line in myfile.readlines()]
db, us, pw = 'bubble_popper', lines[0], lines[1]
engine = create_engine('postgresql://%s:%s@localhost:5432/%s'%(us,pw,db))
connstr = "dbname='%s' user='%s' host='localhost' password='%s'"%(db,us,pw)
conn = None; conn = psycopg2.connect(connstr)
Explanation: PREPROCESSING
Clean article collection
End of explanation
query = "SELECT * FROM pub_scores"
pub_scores = pd.read_sql(query,conn)
columns = ['publication','source','heard','trust','distrust','content','title','url']
articles = pd.DataFrame(columns=columns)
for handle in pub_scores['twitter']:
print(str(datetime.now()),handle)
articleList = pickle.load(open('pub_text_'+handle+'.pkl','rb'))
content = [article.text for article in articleList]
title = [article.title for article in articleList]
url = [article.url for article in articleList]
publication = np.repeat(pub_scores['Source'][pub_scores['twitter']==handle],len(content))
source = np.repeat(pub_scores['source'][pub_scores['twitter']==handle],len(content))
heard = np.repeat(pub_scores['heard'][pub_scores['twitter']==handle],len(content))
trust = np.repeat(pub_scores['trust'][pub_scores['twitter']==handle],len(content))
distrust = np.repeat(pub_scores['distrust'][pub_scores['twitter']==handle],len(content))
temp = pd.DataFrame({'publication':publication,
'source':source,
'heard':heard,
'trust':trust,
'distrust':distrust,
'content':content,
'title':title,
'url':url})
articles = articles.append(temp,ignore_index=True)
pickle.dump(articles,open('pub_articles.pkl','wb'))
articles.to_sql('pub_articles',engine,if_exists='replace')
Explanation: Save article information in a table
End of explanation
# Reload the article table saved above (assumption: the notebook restarts from the pickle created in the previous cell).
pub_articles = pickle.load(open('pub_articles.pkl','rb'))
short_text = []
for i,article in enumerate(pub_articles['content'].values.tolist()):
if len(article)<=0:
short_text.append(i)
guardian_text = []
for i,publication in enumerate(pub_articles['publication'].values.tolist()):
if publication == 'Guardian':
guardian_text.append(i)
stack_text = [i for i in range(0,len(pub_articles)) if 'Stack of Stuff' in pub_articles['title'].iloc[i]]
drop_text = short_text + guardian_text + stack_text
drop_text = list(set(drop_text))
articles = pub_articles.drop(pub_articles.index[drop_text])
articles = articles.drop_duplicates('content')
pickle.dump(articles,open('pub_articles_trimmed.pkl','wb'))
articles.to_sql('pub_articles_clean',engine,if_exists='replace')
Explanation: Remove short (usually advertisements), Guardian (British news), Stack of Stuff (list of links), and duplicate articles
End of explanation
from stop_words import get_stop_words
from nltk.stem import WordNetLemmatizer
from gensim import corpora, models
import gensim
Explanation: Clean article content
End of explanation
doc_set = articles['content'].values.tolist()
doc_set = [doc.replace("\n"," ") for doc in doc_set]
doc_set = [doc.replace("\'","") for doc in doc_set]
doc_set = [gensim.utils.simple_preprocess(doc) for doc in doc_set]
wordnet_lemmatizer = WordNetLemmatizer()
doc_set = [[wordnet_lemmatizer.lemmatize(word) for word in doc] for doc in doc_set]
doc_set = [[wordnet_lemmatizer.lemmatize(word,pos='v') for word in doc] for doc in doc_set]
en_stop = get_stop_words('en')
letters = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"]
other = ["wa","ha","one","two","id","re","http","com","mr","image","photo","caption","don","sen","pic","co",
"source","watch","play","duration","video","momentjs","getty","images","newsletter"]
doc_set = [[word for word in doc if not word in (en_stop+letters+other)] for doc in doc_set]
pickle.dump(doc_set,open('pub_articles_cleaned_super.pkl','wb'))
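As a quick illustration (not part of the original notebook), the same cleaning steps can be applied to a single made-up sentence; no particular output is implied here.
```
# Illustrative only: run the cleaning pipeline on one hypothetical sentence.
sample = "The Senators were debating the photos on http://example.com!"
tokens = gensim.utils.simple_preprocess(sample)
tokens = [word for word in [wordnet_lemmatizer.lemmatize(w) for w in tokens]]
tokens = [wordnet_lemmatizer.lemmatize(word, pos='v') for word in tokens]
tokens = [word for word in tokens if word not in (en_stop + letters + other)]
print(tokens)
```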
Explanation: Remove special characters, tokenize and lemmatize the articles, and remove stop and miscellaneous words
End of explanation |
3,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Read the Human Proteome
Step3: LysC digestion
Step4: Generate binder sets
Step5: A binder set is chosen randomly and is represented by a string.
The character - separates each binder.
The character : separates the dipeptide targets that a binder binds to.
Step6: In this example the fragment 'RSWWAFDDDAFDDDDD' is read using the binder set 'WW:KS-FI:AF-SW:AT'.
Step7: Evaluate proteome identification for binder sets with a range of properties | Python Code:
import collections
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
Explanation: Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# Download from uniprot: https://www.uniprot.org/help/human_proteome
!wget -O uniprot.fasta "https://www.uniprot.org/uniprot/?query=reviewed%3Ayes+AND+proteome%3Aup000005640&format=fasta"
def fasta_iterator(file_handle):
partial_sequence = ''
line = file_handle.readline()
while line:
if line.startswith('>'):
if partial_sequence:
yield partial_sequence
partial_sequence = ''
else:
partial_sequence += line.strip()
line = file_handle.readline()
if partial_sequence:
yield partial_sequence
def read_seqs_from_fasta(filepath):
all_seqs = []
with open(filepath, 'rt') as f:
for seq in fasta_iterator(f):
all_seqs.append(seq)
print('Read %d entries from %s' % (len(all_seqs), filepath))
return all_seqs
full_seqs = read_seqs_from_fasta('uniprot.fasta')
Explanation: Read the Human Proteome
End of explanation
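As a quick sanity check (not part of the original notebook), fasta_iterator can be exercised on a small in-memory FASTA snippet:
```
# Illustrative only: sequence lines are joined per '>' record.
import io
toy_fasta = ">protein_1\nMKT\nLLK\n>protein_2\nGGSK\n"
print(list(fasta_iterator(io.StringIO(toy_fasta))))  # ['MKTLLK', 'GGSK']
```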
LYSINE = 'K'
def lys_c_digest(protein_sequence):
    """Return the pieces of a sequence after Lys-C digestion."""
pieces = []
last_cut_pos = len(protein_sequence)
for i in reversed(range(len(protein_sequence) - 1)):
current_char = protein_sequence[i]
if current_char == LYSINE:
piece = protein_sequence[i + 1:last_cut_pos + 1]
pieces.append(piece)
last_cut_pos = i
if last_cut_pos >= 0:
piece = protein_sequence[0:last_cut_pos + 1]
pieces.append(piece)
return list(reversed(pieces))
def make_fragment_df(seqs):
tuples = []
for protein_num, seq in enumerate(seqs):
digested_fragments = lys_c_digest(seq)
for fragment_num, fragment in enumerate(digested_fragments):
tuples.append((protein_num, fragment_num, fragment))
df = pd.DataFrame.from_records(tuples, columns=('protein_num', 'fragment_num', 'raw_fragment'))
df['fragment_len'] = df['raw_fragment'].str.len()
return df
full_fragment_df = make_fragment_df(full_seqs)
full_fragment_df
_ = sns.histplot(full_fragment_df, x='fragment_len', bins=range(1, 100, 5))
_ = plt.title('Histogram of Lys-C digestion fragments')
num_proteins_total = full_fragment_df['protein_num'].nunique()
num_proteins_total
Explanation: LysC digestion
End of explanation
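To make the digestion rule concrete, here is a small check (illustrative only, not in the original notebook): the sequence is cut after every lysine (K).
```
# Illustrative only: Lys-C cleaves after each 'K'.
print(lys_c_digest('MAKCDKEF'))  # ['MAK', 'CDK', 'EF']
```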
amino_acids = list('ACDEFGHIKLMNPQRSTVWY')
dipeptides = [x + y for x in amino_acids for y in amino_acids]
len(dipeptides)
BINDER_SEP = '-'
TARGET_SEP = ':'
def generate_binder_set(num_dipeptides, num_binder):
binder_list = []
for _ in range(num_binder):
# Each binder will bind to a random selection of dipeptide targets.
targets = np.random.choice(dipeptides, size=num_dipeptides, replace=False)
binder = TARGET_SEP.join(targets)
binder_list.append(binder)
binder_set = BINDER_SEP.join(binder_list)
return binder_set
np.random.seed(12345) # Set random seed for deterministic behavior
binder_set = generate_binder_set(num_dipeptides=2, num_binder=3)
print(binder_set)
Explanation: Generate binder sets
End of explanation
# Make fragments padded to constant length using non-amino acid overflow character
READ_OVERFLOW = 'Z'
max_len = full_fragment_df['fragment_len'].max()
full_fragment_df['padded_fragment'] = full_fragment_df['raw_fragment'].str.pad(max_len, side='right', fillchar=READ_OVERFLOW)
full_fragment_df
NO_BARCODE = '_'
def get_barcode_dict(binder_set):
'''Convert a binder set into a dict that gives the binder for a target.'''
barcode_dict = collections.defaultdict(lambda: NO_BARCODE)
for i, binder in enumerate(binder_set.split(BINDER_SEP)):
for target in binder.split(TARGET_SEP):
barcode_dict[target] = str(i)
# More than one binder could bind to the same target.
# To get a lower bound on the number of proteins identified map the target
# to just one of the binders.
return barcode_dict
barcode_dict = get_barcode_dict(binder_set)
print(barcode_dict)
DIPEPTIDE_LEN = 2
def get_barcode_read(fragment, read_length, binder_set):
barcode_dict = get_barcode_dict(binder_set)
return ''.join(barcode_dict[fragment[i:i+DIPEPTIDE_LEN]] for i in range(read_length))
binder_set = 'WW:KS-FI:AF-SW:AT'
fragment = 'RSWWAFDDDAFDDDDD' # Made up for illustrative purposes.
barcode_read = get_barcode_read(fragment, 12, binder_set)
print(barcode_read)
Explanation: A binder set is chosen randomly and is represented by a string.
The character - separates each binder.
The character : separates the dipeptide targets that a binder binds to.
So, if the binder set is WW:KS-FI:AF-SW:AT then there are three binders. The first binds to WW and KS. The second binds to FI and AF. The third binds to SW and AT.
Read a fragment as barcodes using a binder set
End of explanation
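For instance (an illustrative snippet, not in the original notebook), a binder-set string can be unpacked with plain string splitting:
```
# Illustrative only: list each binder and its dipeptide targets.
example_set = 'WW:KS-FI:AF-SW:AT'
for i, binder in enumerate(example_set.split(BINDER_SEP)):
    print('binder', i, 'targets', binder.split(TARGET_SEP))
# binder 0 targets ['WW', 'KS']
# binder 1 targets ['FI', 'AF']
# binder 2 targets ['SW', 'AT']
```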
READ_LENGTH = 12
def try_binder_set(binder_set):
full_fragment_df['temp_read'] = full_fragment_df['padded_fragment'].apply(lambda x: get_barcode_read(x, READ_LENGTH, binder_set))
# Handle the case where a read appears multiple times in the same protein.
temp_fragment_df = full_fragment_df.drop_duplicates(
subset=['temp_read', 'protein_num'], keep='first')
num_identified_proteins = temp_fragment_df.drop_duplicates(subset='temp_read', keep=False)['protein_num'].nunique()
return num_identified_proteins
# Look at the performance for a binder set.
binder_set = 'WW:KS-FI:AF-SW:AT'
print('binder_set = ', binder_set)
num_identified_proteins = try_binder_set(binder_set)
print('num_identified_proteins = ', num_identified_proteins)
print("Proteome identified: {:.1%}".format(1. * num_identified_proteins / num_proteins_total))
# Look at the performance for a larger binder set with more targets each.
num_dipeptides=8
print('num_dipeptides = ', num_dipeptides)
num_binder=10
print('num_binder = ', num_binder)
np.random.seed(12345) # Set random seed for deterministic behavior
binder_set = generate_binder_set(num_dipeptides=num_dipeptides, num_binder=num_binder)
print('binder_set = ', binder_set)
num_identified_proteins = try_binder_set(binder_set)
print('num_identified_proteins = ', num_identified_proteins)
print("Proteome identified: {:.1%}".format(1. * num_identified_proteins / num_proteins_total))
Explanation: In this example the fragment 'RSWWAFDDDAFDDDDD' is read using the binder set 'WW:KS-FI:AF-SW:AT'. There are 12 binding cycles, each ending with a single amino acid removed by Edman degradation.
```
example_fragment = 'RSWWAFDDDAFDDDDD'
example_binder_set = 'WW:KS-FI:AF-SW:AT'
binder_dict = {
'AF': '1',
'AT': '2',
'FI': '1',
'KS': '0',
'SW': '2',
'WW': '0'
}
read result is '_20_1____1__'
```
On the first cycle, the end dipeptide is RS. There is no binder in the set for that dipeptide target so no barcode is left for that cycle.
On the next cycle, the end dipeptide is SW (the R has been removed). Binder #2 targets that dipeptide and so the barcode '2' is left. The barcode sequence would indicate both binder and cycle number.
On the next cycle, the end dipeptide is WW (the S has been removed). Binder #0 targets that dipeptide and so the barcode '0' is left.
The process continues until the last cycle number. The barcodes read would be: Binder #2 on cycle 2, Binder #0 on cycle 3, Binder #1 on cycle 5, Binder #1 on cycle 10. This is represented here as the string '_20_1____1__'.
A fragment is matched to a specific protein if the barcode read is unique, i.e. no other protein fragment gets the same barcode sequence read.
Fraction of proteome identified by different binder sets
End of explanation
results = []
np.random.seed(12345) # Set random seed for deterministic behavior
# Small number of properties to evaluate (for faster run time).
RANGE_NUM_BINDERS_IN_SET = [1, 5, 10, 15]
RANGE_NUM_TARGETS_PER_BINDER = [1, 4, 8, 350]
NUM_SAMPLES_PER_CONDITION = 1
# Values used to generate the results in the paper.
# RANGE_NUM_BINDERS_IN_SET = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 25, 50, 75, 100]
# RANGE_NUM_TARGETS_PER_BINDER = [1, 2, 3, 4, 5, 6, 7, 8, 9, 25, 50, 100, 150, 200, 250, 300, 400]
# NUM_SAMPLES_PER_CONDITION = 20
# This takes ~1 hour to run
for sample in range(NUM_SAMPLES_PER_CONDITION):
for num_dipeptides in RANGE_NUM_TARGETS_PER_BINDER:
for num_binder in RANGE_NUM_BINDERS_IN_SET:
binder_set = generate_binder_set(num_dipeptides=num_dipeptides, num_binder=num_binder)
num_identified_proteins = try_binder_set(binder_set)
new_tuple = (num_binder, num_dipeptides, sample, binder_set, num_identified_proteins)
results.append(new_tuple)
print("num_binder=%d, num_dipeptides=%d, sample=%d, binder_set=%s, num_identified_proteins=%d" % new_tuple)
print("{:.1%}".format(1. * num_identified_proteins / num_proteins_total))
long_df = pd.DataFrame.from_records(results, columns=['num_binder', 'num_dipeptides', 'sample', 'binder_name', 'num_identified_proteins'])
long_df['proteome_fraction_identified'] = 1. * long_df['num_identified_proteins'] / num_proteins_total
pivoted = long_df.pivot_table(values='proteome_fraction_identified', index='num_dipeptides', columns='num_binder', aggfunc='median')
pivoted.style.background_gradient(axis=None).format('{:.0%}')
Explanation: Evaluate proteome identification for binder sets with a range of properties
End of explanation |
3,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Depth First Search
The function search takes three arguments (the start state, the goal state, and the successor function next_states) to solve a search problem.
Step1: The function dfs takes five arguments to solve a search problem
- state is a state of the search problem.
It is assumed that we have already found a path from the start state of our search problem
that leads to state.
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
Step2: Display Code
Below, we ensure that we only import graphviz if this notebook is not loaded from another notebook. This works by checking that the variable __file__ is not set.
Step3: The function $\texttt{toDot}(\texttt{source}, \texttt{Edges}, \texttt{Fringe}, \texttt{Visited})$ takes a graph that is represented by
its Edges, a set of nodes Fringe, and set Visited of nodes that have already been visited.
Step4: Testing
Step5: Solving the Sliding Puzzle | Python Code:
def search(start, goal, next_states):
return dfs(start, goal, next_states, [start], { start })
Explanation: Depth First Search
The function search takes three arguments to solve a search problem:
- start is the start state of the search problem,
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.
If successful, search returns a path from start to goal that is a solution of the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle. $$
End of explanation
def dfs(state, goal, next_states, Path, PathSet):
if state == goal:
return Path
for ns in next_states(state):
if ns not in PathSet:
Path .append(ns)
PathSet.add(ns)
Result = dfs(ns, goal, next_states, Path, PathSet)
if Result is not None:
return Result
Path .pop()
PathSet.remove(ns)
return None
Explanation: The function dfs takes five arguments to solve a search problem
- state is a state of the search problem.
It is assumed that we have already found a path from the start state of our search problem
that leads to state.
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.
- Path is a path leading from the start state of the search problem to state.
Therefore, start = Path[0].
- PathSet is the set of all nodes occurring in the list Path.
The implementation of dfs works as follows:
- If state is equal to goal, our search is successful. Since by assumption
the list Path is a path connecting the start state of our search problem with state,
Path is the solution to the search problem.
- Otherwise, next_states(state) is the set of states that are reachable from state
in one step. Any of the states ns in this set could be the next state on a path
that leads to goal. Therefore, we try recursively to reach goal from
every state ns. Note that we append ns to Path (and add it to PathSet) before the recursive call and remove it again afterwards if that call does not find a solution. This way, we retain the invariant of
dfs that the list Path is a path connecting the start state of our search problem with state.
- In order to avoid running in circles we check that the state ns is not already a member of the
set PathSet. It would be very inefficient to search in the list Path. Therefore, we search
in PathSet instead because this set contains the same elements as the list Path.
- If one of the recursive calls of dfs returns a list, this list is a solution to our
search problem and hence it is returned. However, if instead the value
None is returned, the for loop needs to carry on and test the other
successors of state.
Note that the recursive invocation of dfs returns None if the end of the
for loop is reached and no solution has been returned so far.
End of explanation
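As a quick sanity check (not part of the original notebook), search can be run on a tiny hand-made graph:
```
# Toy graph, for illustration only.
ToyEdges = { 'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': [] }
def toy_next_states(s):
    return set(ToyEdges[s])
print(search('a', 'd', toy_next_states))  # e.g. ['a', 'b', 'd'] or ['a', 'c', 'd']
```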
try:
__file__
except NameError:
import graphviz as gv
Explanation: Display Code
Below, we ensure that we only import graphviz if this notebook is not loaded from another notebook. This works by checking that the variable __file__ is not set.
End of explanation
def toDot(source, goal, Edges, Path):
V = set()
for x, L in Edges.items():
V.add(x)
for y in L:
V.add(y)
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
dot.attr(rankdir='LR')
for x in V:
if x in Path and x == goal:
dot.node(str(x), label=str(x), color='magenta')
elif x in Path:
dot.node(str(x), label=str(x), color='red')
else:
dot.node(str(x), label=str(x))
for u in V:
if Edges.get(u):
for v in Edges[u]:
if u in Path and v in Path and Path.index(v) == Path.index(u) + 1:
dot.edge(str(u), str(v), color='brown', style='bold')
elif u in Path and v in Path and Path.index(v) + 1 == Path.index(u):
dot.edge(str(u), str(v), color='blue', style='bold', dir='back')
else:
dot.edge(str(u), str(v), dir='both')
return dot
Explanation: The function $\texttt{toDot}(\texttt{source}, \texttt{goal}, \texttt{Edges}, \texttt{Path})$ takes the start node source, the goal node goal, a graph that is represented by
its Edges, and a list Path of nodes; the nodes and edges that lie on Path are highlighted.
End of explanation
n = 6
def nextStates(node):
x, y = node
if x == 0 and y == 0:
return { (1, 0), (0, 1) }
if x == 0 and 0 < y < n-1:
return { (x+1, y), (x, y+1), (x, y-1) }
if 0 < x < n-1 and y == 0:
return { (x+1, y), (x, y+1), (x-1, y) }
if 0 < x < n-1 and 0 < y < n-1:
return { (x+1, y), (x, y+1), (x-1, y), (x, y-1) }
if x == n-1 and y == 0:
return { (x, y+1), (x-1, y)}
if x == 0 and y == n-1:
return { (x, y-1), (x+1, y)}
if x == n-1 and 0 < y < n-1:
return { (x, y+1), (x-1, y), (x, y-1) }
if 0 < x < n-1 and y == n-1:
return { (x+1, y), (x-1, y), (x, y-1) }
if x == n-1 and y == n-1:
return { (x-1, y), (x, y-1) }
return {}
def remove_back_edge(r, c, NS):
return [(x,y) for (x,y) in NS if x >= r and y >= c]
def create_edges(n):
Edges = {}
for row in range(n):
for col in range(n):
if (row, col) != (n-1, n-1):
Edges[(row, col)] = remove_back_edge(row, col, nextStates((row, col)))
for k in range(n-1):
Edges[(k, n-1)] = [(k+1, n-1)]
Edges[(n-1, k)] = [(n-1, k+1)]
return Edges
def search_show(start, goal, next_states, Edges):
Result = dfs_show(start, goal, next_states, [start], Edges)
display(toDot(start, goal, Edges, Result))
def dfs_show(state, goal, next_states, Path, Edges):
if state == goal:
return Path
for ns in next_states(state):
if ns not in Path:
display(toDot(state, goal, Edges, Path))
Result = dfs_show(ns, goal, next_states, Path + [ns], Edges)
if Result:
return Result
def main():
Edges = create_edges(n)
search_show((0,0), (n//2,n//2), nextStates, Edges)
main()
Explanation: Testing
End of explanation
%run Sliding-Puzzle.ipynb
import sys
sys.setrecursionlimit(200000)
%load_ext memory_profiler
%%time
Path = search(start, goal, next_states)
print(f'Length of path: {len(Path)-1}')
animation(Path)
Explanation: Solving the Sliding Puzzle
End of explanation |
3,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 style="text-decoration
Step1: Sur le schéma ci-dessus, le nombre de capteurs est de 6 (les capteurs situés devant le robot E-Puck), pour avoir une plus grande utilité des DNF, nous avons modifié le robot virtuel pour avoir 50 capteurs. En réalité seulement la moitié est utilisée, ce qui permet ainsi au robot de revenir vers la cible dès qu'il n'a plus la cible devant.
2) DNF direction
Nous avons ensuite rajouté une cible dont le robot ne connait que la direction de la même manière que pour les DNF des capteurs infrarouges.
3) Navigation
Dans le but d'avoir un comportement intéressant face aux obstacles, nous avons décidé que la navigation serait régit par deux activations, l'une correspondant à la présence d'obstacle et l'autre la direction de la cible inhibé par la présence d'obstacle dans cette même direction.
Ainsi le robot va toujours en direction de la cible excepté dans le cas de la présence d'un obstacle entre eux deux, dans ce cas le robot tournera autour de l'obstacle jusqu'à retrouver la direction de la cible.
Nous avions aussi envisagé un déplacement de l'activation de la direction dans le cas d'un obstacle devant, mais cette manipulation demandant avec le logiciel DNFPY un mouvement très lent n'était pas envisageable dans ce cadre là bien que ce soit plus proche d'un comportement naturel.
5) Vitesse des roues
La formule permettant de calculer la vitesse se base sur les activations ci-dessus.
$v=\frac{\sum_{x=-\Pi}^{+\Pi}(Fd(x)activationN(x))+\sum_{x=-\Pi}^{+\Pi}(Fo(x)activationI(x))}{\sum_{x=-\Pi}^{+\Pi}activationI(x)+\sum_{x=-\Pi}^{+\Pi}activationN(x)}$
Avec $Fd = \frac{2}{1+e^{(\pm x)}}-0.5$ et $Fo = \frac{2}{1+e^{(\pm x*10)}}-1$.
Les fonction étant inverse selon le moteur correspondant de telle manière que le robot s'écarte des obstacles et se rapproche de la cible.
a) Face à un obstacle
Dans le cas d'un seul obstacle situé entre la cible et le robot, l'activation N étant inhibé, on peut simplifier la formule de cet manière
Step2: Pour vérifier le bon fonctionnement dynamique de la formule face à un obstacle, observons $\frac{d\psi}{dt} = \omega$ en fonction de $\psi$, l'angle du robot dans le repère absolu.
Nous connaissons la taille de l'axe des deux roues ($\Delta = 52 mm$) et le rayon des roues ($r = 20.5 mm$).
On en déduit que face à un obstacle $\omega =\frac{r}{\Delta}(v_d-v_g)= \frac{2r}{\Delta}v_d$ avec $v_d = -v_g$ la vitesse angulaire du moteur droit.
Et hors de l'obstacle $\omega = 0rad/s$ car les deux roues vont à la même vitesse.
On a donc
$$\omega = \frac{d\psi}{dt} =
\begin{cases}
\frac{2r}{\Delta}\left(\frac{2}{1+e^{(\phi-\psi)\cdot 10}}-1\right) & \text{si } \psi \in \left[\phi-\frac{\pi}{2},\,\phi+\frac{\pi}{2}\right] \\
0 & \text{sinon}
\end{cases}
$$
Sur le graphe ci-dessous qui reprèsente $\omega$, le point rouge correspond à un point fixe instable ou répulseur. En effet, à droite du point la dérivée est positive donc l'angle va augmenter et à gauche elle est négative donc l'angle va diminuer.
Step3: b) Cas de l'obstacle double
Un des buts de cet méthode de calcul de la vitesse et de pouvoir gérer deux obstacles situés devant le robot avec parfois la possibilité de passer entre. Les DNFs permettent d'avoir une activation qui se regroupe ou deux activations séparés selon l'angle de l'obstacle.
Pour étudier la réaction du robot face à de tels obstacles, nous avons fait une expérience sur V-Rep à l'aide d'un scénario. On positionne deux cubes devant le robot avec l'interstice entre les deux qui augmentent à chaque fois, on récupère à t donné la vitesse de rotation du robot. La vitesse de rotation est aléatoirement positive ou négative mais pour une plus belle visualisation, nous avons représenté la valeur absolue et son négatif.
Le robot tournera soit à droite, soit à gauche face à l'obstacle si la distance est courte. Si il a la place de passer (avec une certaine marge de confiance), il continuera tout droit.
Step4: c) Sans obstacle
Rappel de la formule de la vitesse des moteurs dans ce cas
Step5: La réunion de ces deux courbes correspond à un point fixe stable ou attracteur que l'on pourrait retrouver en traçant la vitesse de rotation de la même façon que dans le cas d'un obstacle ponctuel.
II. Programmation
Pour arriver à ce résulat deux outils ont été utilisés | Python Code:
x = np.array([1, 2, 3, 4, 5, 6])
ir = np.array([0.000000000000000000e+00,
0.000000000000000000e+00,
6.056077528688350031e-03,
8.428876313973869550e-03,
0.000000000000000000e+00,
0.000000000000000000e+00])
ir=ir*100
dnf = np.array([-1.090321063995361328e+00,
-6.263688206672668457e-01,
2.505307266418066447e-03,
1.392887643318315938e+00,
-6.031024456024169922e-01,
-6.263688206672668457e-01])
activation = np.array([0,0,0,1,0,0])
plt.plot(x,ir,label='Capteur IR')
plt.plot(x,dnf,label='Dnf')
plt.plot(x,activation,label='Activation')
plt.legend()
plt.xlabel("Numéro du capteur")
plt.ylabel("Intensité")
plt.title("Du capteur à l'activation")
plt.show()
Explanation: <h1 style="text-decoration: underline, overline; color: navy;">Rapport de stage</h1>
<h2 style="text-decoration: underline;">Robot controlé par DNF</h2>
Le but de ce stage consiste à utiliser des champs neuronaux dynamiques (ou DNF : Dynamic Neural Fields), c'est à dire plusieurs éléments permettant un calcul décentralisé et distribué avec un comportement émergent face à un stimulis extérieur, pour permettre l'élaboration d'un robot autonome capable d'éviter des obstacles.
Pour cette première expérience le modèle de robot utilisé sera un e-puck : un petit robot cylindrique avec des capteurs infrarouges tout autour de lui et une caméra. Les stimulis extérieurs seront donc pour cette première partie les capteurs de proximité infrarouge.
I. Théorie et calculs
1) DNF infrarouges
Les champs neuronaux dynamiques sont directement reliés aux capteurs infrarouges puis pour savoir quel angle semble le plus suceptible d'être un obstacle, on va utiliser l'activation du DNF qui va renvoyer 0 si rien n'est détecté ou une valeur positive sinon.
Sur le schéma suivant, l'intensité du capteur a été multiplié par 100. L'intensité du capteur est égal à la distance maximale de réception moins la distance actuele détectée. Elle varie donc de 0 à 0.04 m.
End of explanation
x, st = np.linspace(-math.pi/2,math.pi/2, retstep=True)
vL = 2/(1+np.exp(x*10))-1
vR = 2/(1+np.exp(-x*10))-1
plt.plot(x,vL,label='Vitesse roue gauche')
plt.plot(x,vR,label='Vitesse roue droite')
plt.legend()
plt.xlabel("Position de l'obstacle")
plt.ylabel("Vitesse angulaire (rad/s)")
plt.title("Vitesses selon l'angle de l'obstacle")
plt.xticks([-math.pi/2, 0, math.pi/2], [r'$-\frac{\pi}{2}$', r'$0$', r'$+\frac{\pi}{2}$'])
plt.show()
Explanation: Sur le schéma ci-dessus, le nombre de capteurs est de 6 (les capteurs situés devant le robot E-Puck), pour avoir une plus grande utilité des DNF, nous avons modifié le robot virtuel pour avoir 50 capteurs. En réalité seulement la moitié est utilisée, ce qui permet ainsi au robot de revenir vers la cible dès qu'il n'a plus la cible devant.
2) DNF direction
Nous avons ensuite rajouté une cible dont le robot ne connait que la direction de la même manière que pour les DNF des capteurs infrarouges.
3) Navigation
Dans le but d'avoir un comportement intéressant face aux obstacles, nous avons décidé que la navigation serait régit par deux activations, l'une correspondant à la présence d'obstacle et l'autre la direction de la cible inhibé par la présence d'obstacle dans cette même direction.
Ainsi le robot va toujours en direction de la cible excepté dans le cas de la présence d'un obstacle entre eux deux, dans ce cas le robot tournera autour de l'obstacle jusqu'à retrouver la direction de la cible.
Nous avions aussi envisagé un déplacement de l'activation de la direction dans le cas d'un obstacle devant, mais cette manipulation demandant avec le logiciel DNFPY un mouvement très lent n'était pas envisageable dans ce cadre là bien que ce soit plus proche d'un comportement naturel.
5) Vitesse des roues
La formule permettant de calculer la vitesse se base sur les activations ci-dessus.
$v=\frac{\sum_{x=-\Pi}^{+\Pi}(Fd(x)activationN(x))+\sum_{x=-\Pi}^{+\Pi}(Fo(x)activationI(x))}{\sum_{x=-\Pi}^{+\Pi}activationI(x)+\sum_{x=-\Pi}^{+\Pi}activationN(x)}$
Avec $Fd = \frac{2}{1+e^{(\pm x)}}-0.5$ et $Fo = \frac{2}{1+e^{(\pm x*10)}}-1$.
Les fonction étant inverse selon le moteur correspondant de telle manière que le robot s'écarte des obstacles et se rapproche de la cible.
a) Face à un obstacle
Dans le cas d'un seul obstacle situé entre la cible et le robot, l'activation N étant inhibé, on peut simplifier la formule de cet manière :
$v=\frac{\sum_{x=-\Pi}^{+\Pi}(Fo(x)*activationI(x))}{\sum_{x=-\Pi}^{+\Pi}activationI(x)}$
L'activation est une gaussienne mais prenons ici une activation de type porte avec un obstacle ponctuel pour simplifier. On peut donc prendre $v=Fo(x)= \frac{2}{1+e^{(\pm x*10)}}-1$
Voici les vitesses des roues :
End of explanation
def plotPsi(phi):
x=np.linspace(-math.pi,math.pi)
vR = 2/(1+np.exp((-x+phi)*10))-1
for i in range(50):
if x[i]<-math.pi/2+phi or x[i]>math.pi/2+phi:
vR[i]=0
r=20.5
delta=52
w=2*r/delta*vR
plt.plot(x,w, label='Rotation')
plt.scatter(phi,w[phi], 40, color ='red')
plt.legend()
plt.xlabel("Angle psi")
plt.ylabel("Rotation du robot (rad/s)")
plt.title("Dynamique selon l'obstacle situé à "+str(phi)+" rad")
plt.xticks([-math.pi,-math.pi/2, 0, math.pi/2, math.pi],[r'$-\pi$', r'$-\frac{\pi}{2}$', r'$0$', r'$+\frac{\pi}{2}$', r'$+\pi$'])
plt.show()
slider_phi = widgets.FloatSliderWidget(min=-math.pi/2, max=math.pi/2, step=0.1, value=0)
w=widgets.interactive(plotPsi,phi = slider_phi)
display(w)
Explanation: Pour vérifier le bon fonctionnement dynamique de la formule face à un obstacle, observons $\frac{d\psi}{dt} = \omega$ en fonction de $\psi$, l'angle du robot dans le repère absolu.
Nous connaissons la taille de l'axe des deux roues ($\Delta = 52 mm$) et le rayon des roues ($r = 20.5 mm$).
On en déduit que face à un obstacle $\omega =\frac{r}{\Delta}(v_d-v_g)= \frac{2r}{\Delta}v_d$ avec $v_d = -v_g$ la vitesse angulaire du moteur droit.
Et hors de l'obstacle $\omega = 0rad/s$ car les deux roues vont à la même vitesse.
On a donc
$$\omega = \frac{d\psi}{dt} =
\begin{cases}
\frac{2r}{\Delta}\left(\frac{2}{1+e^{(\phi-\psi)\cdot 10}}-1\right) & \text{si } \psi \in \left[\phi-\frac{\pi}{2},\,\phi+\frac{\pi}{2}\right] \\
0 & \text{sinon}
\end{cases}
$$
Sur le graphe ci-dessous qui reprèsente $\omega$, le point rouge correspond à un point fixe instable ou répulseur. En effet, à droite du point la dérivée est positive donc l'angle va augmenter et à gauche elle est négative donc l'angle va diminuer.
End of explanation
dist=np.array([0.06,0.065,0.07,0.075,0.08,0.085,0.09,0.095,0.1,0.105,0.11,0.115])
dist=dist*2
dpdt=[0.08491489,0.18710179,-0.24631846,-0.28696026,0.28273605,0.27711486,-0.07904169,0.00187831,0.00484938,-0.00184582,0.00069609,0.00435697]
dpdta=np.abs(dpdt)
dpdtb=-dpdta
plt.plot(dist,dpdta,label='Tourne vers la droite')
plt.plot(dist,dpdtb,label='Tourne vers la gauche')
plt.legend()
plt.xlabel("Distance entre les deux cubes")
plt.ylabel("Vitesse angulaire (rad/s)")
plt.title("Vitesse de rotation selon la distance")
plt.show()
Explanation: b) Cas de l'obstacle double
Un des buts de cet méthode de calcul de la vitesse et de pouvoir gérer deux obstacles situés devant le robot avec parfois la possibilité de passer entre. Les DNFs permettent d'avoir une activation qui se regroupe ou deux activations séparés selon l'angle de l'obstacle.
Pour étudier la réaction du robot face à de tels obstacles, nous avons fait une expérience sur V-Rep à l'aide d'un scénario. On positionne deux cubes devant le robot avec l'interstice entre les deux qui augmentent à chaque fois, on récupère à t donné la vitesse de rotation du robot. La vitesse de rotation est aléatoirement positive ou négative mais pour une plus belle visualisation, nous avons représenté la valeur absolue et son négatif.
Le robot tournera soit à droite, soit à gauche face à l'obstacle si la distance est courte. Si il a la place de passer (avec une certaine marge de confiance), il continuera tout droit.
End of explanation
x, st = np.linspace(-math.pi,math.pi, retstep=True)
vL = 2/(1+np.exp(-x))-0.5
vR = 2/(1+np.exp(x))-0.5
plt.plot(x,vL,label='Vitesse roue gauche')
plt.plot(x,vR,label='Vitesse roue droite')
plt.legend()
plt.xlabel("Position de la cible")
plt.ylabel("Vitesse angulaire (rad/s)")
plt.title("Vitesses selon l'angle de la cible")
plt.xticks([-math.pi,-math.pi/2, 0, math.pi/2,math.pi], [r'$-\pi$',r'$-\frac{\pi}{2}$', r'$0$', r'$+\frac{\pi}{2}$',r'$+\pi$'])
plt.show()
Explanation: c) Sans obstacle
Rappel de la formule de la vitesse des moteurs dans ce cas :
$v=\frac{\sum_{x=-\Pi}^{+\Pi}(Fd(x)*activationN(x))}{\sum_{x=-\Pi}^{+\Pi}activationN(x)}$
On peut de la même manière que dans le cas d'un obstacle prendre $v = Fd(x) = \frac{2}{1+e^{(\pm x)}}-0.5$.
End of explanation
Image(filename='Organigramme.jpeg')
Explanation: La réunion de ces deux courbes correspond à un point fixe stable ou attracteur que l'on pourrait retrouver en traçant la vitesse de rotation de la même façon que dans le cas d'un obstacle ponctuel.
II. Programmation
Pour arriver à ce résulat deux outils ont été utilisés :
* Vrep : simulateur de robot
* DNFPY : le logiciel codant des DNF en python par Benoit
DNFPY utilise un système de map reliée entre elle par parenté. Les racines correspondent au résultat qui fait appel à une classe fille pour les calculs et ainsi de suite.
1) Hiérarchie des cartes
L'organisation des maps est créé dans la classe ModelEPuckDNF comme suit :
End of explanation |
3,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal Example
Step1: Create fake "observations"
Step2: Create a New System
Step3: Add GPs
See the API docs for b.add_gaussian_process and gaussian_process.
Note that the original Figure 7 from the fitting release paper (Conroy et al. 2020) used PHOEBE 2.3, which made use of celerite instead of the celerite2 and sklearn backends introduced in PHOEBE 2.4.
Step4: Run Forward Model
Since the system itself is still time-independent, the model is computed for one cycle according to compute_phases, but is then interpolated at the phases of the times in the dataset to compute and expose the fluxes including gaussian processes at the dataset times.
If the model were time-dependent, then using compute_times or compute_phases without covering a sufficient time-span will raise an error. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import matplotlib.pyplot as plt
plt.rc('font', family='serif', size=14, serif='STIXGeneral')
plt.rc('mathtext', fontset='stix')
import phoebe
import numpy as np
logger = phoebe.logger('warning')
# we'll set the random seed so that the noise model is reproducible
np.random.seed(123456789)
Explanation: Minimal Example: Gaussian Processes
In this example script, we'll reproduce Figure 7 from the fitting release paper (Conroy et al. 2020).
<img src="http://phoebe-project.org/images/figures/2020Conroy+_fig7.png" alt="Figure 7" width="800px"/>
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b = phoebe.default_binary()
b.add_dataset('lc', compute_times=phoebe.linspace(0,5,501))
b.run_compute()
times = b.get_value(qualifier='times', context='model')
fluxes = b.get_value(qualifier='fluxes', context='model') + np.random.normal(size=times.shape) * 0.07 + 0.2*np.sin(times)
sigmas = np.ones_like(fluxes) * 0.05
Explanation: Create fake "observations"
End of explanation
b = phoebe.default_binary()
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas)
afig, mplfig = b.plot(show=True)
afig, mplfig = b.plot(x='phases', show=True)
b.run_compute(model='withoutGPs')
Explanation: Create a New System
End of explanation
b.add_gaussian_process('celerite2', dataset='lc01', kernel='sho')
b.add_gaussian_process('celerite2', dataset='lc01', kernel='matern32')
Explanation: Add GPs
See the API docs for b.add_gaussian_process and gaussian_process.
Note that the original Figure 7 from the fitting release paper (Conroy et al. 2020) used PHOEBE 2.3, which made use of celerite instead of the celerite2 and sklearn backends introduced in PHOEBE 2.4.
End of explanation
print(b.run_checks_compute())
b.flip_constraint('compute_phases', solve_for='compute_times')
b.set_value('compute_phases', phoebe.linspace(0,1,101))
print(b.run_checks_compute())
b.run_compute(model='withGPs')
afig, mplfig = b.plot(c={'withoutGPs': 'red', 'withGPs': 'green'},
ls={'withoutGPs': 'dashed', 'withGPs': 'solid'},
s={'model': 0.03},
save='figure_GPs_times.pdf',
show=True)
afig, mplfig = b.plot(c={'withoutGPs': 'red', 'withGPs': 'green'},
ls={'withoutGPs': 'dashed', 'withGPs': 'solid'},
s={'model': 0.03},
x='phases',
save='figure_GPs_phases.pdf', show=True)
Explanation: Run Forward Model
Since the system itself is still time-independent, the model is computed for one cycle according to compute_phases, but is then interpolated at the phases of the times in the dataset to compute and expose the fluxes including gaussian processes at the dataset times.
If the model were time-dependent, then using compute_times or compute_phases without covering a sufficient time-span will raise an error.
End of explanation |
3,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter Notebook desenvolvido por Gustavo S.S.
"Na ciência, o crédito vai para o homem que convence o mundo,
não para o que primeiro teve a ideia" - Francis Darwin
Capacitores e Indutores
Contrastando com um resistor,
que gasta ou dissipa energia de
forma irreversível, um indutor ou
um capacitor armazena ou libera
energia (isto é, eles têm capacidade
de memória).
Capacitor
Capacitor é um elemento passivo projetado para armazenar energia em seu
campo elétrico. Um capacitor é formado por duas placas condutoras separadas por um
isolante (ou dielétrico).
Quando uma fonte de tensão v é conectada ao capacitor, como na Figura
6.2, a fonte deposita uma carga positiva q sobre uma placa e uma carga negativa
–q na outra placa. Diz-se que o capacitor armazena a carga elétrica. A quantidade
de carga armazenada, representada por q, é diretamente proporcional à
tensão aplicada v de modo que
Step1: Problema Prático 6.1
Qual é a tensão entre os terminais de um capacitor de 4,5 uF se a carga em uma placa
for 0,12 mC? Quanta energia é armazenada?
Step2: Exemplo 6.2
A tensão entre os terminais de um capacitor de 5 uF é
Step3: Problema Prático 6.2
Se um capacitor de 10 uF for conectado a uma fonte de tensão com
Step4: Exemplo 6.3
Determine a tensão através de um capacitor de 2 uF se a corrente através dele for
i(t) 6e^-3.000t mA
Suponha que a tensão inicial no capacitor seja igual a zero.
Step5: Problema Prático 6.3
A corrente contínua através de um capacitor de 100 uF é
Step6: Exemplo 6.4
Determine a corrente através de um capacitor de 200 mF cuja tensão é mostrada na
Figura 6.9.
Step7: Problema Prático 6.4
Um capacitor inicialmente descarregado de 1 mF possui a corrente mostrada na Figura 6.11 entre seus terminais. Calcule a tensão entre seus terminais nos instantes t = 2 ms
e t = 5 ms.
Step8: Exemplo 6.5
Obtenha a energia armazenada em cada capacitor na Figura 6.12a em condições
de CC.
Step9: Problema Prático 6.5
Em condições CC, determine a energia armazenada nos capacitores da Figura 6.13.
Step10: Capacitores em Série e Paralelo
Paralelo
A capacitância equivalente de N capacitores ligados em paralelo é a soma
de suas capacitâncias individuais.
\begin{align}
{\Large C_{eq} = C_1 + C_2 + ... + C_N = \sum_{i=1}^{N} C_i}
\end{align}
Série
A capacitância equivalente dos capacitores associados em série é o inverso
da soma dos inversos das capacitâncias individuais.
\begin{align}
{\Large \frac{1}{C_{eq}} = \frac{1}{C_1} + \frac{1}{C_2} + ... + \frac{1}{C_N}}
\end{align}
\begin{align}
{\Large C_{eq} = \frac{1}{\sum_{i=1}^{N} \frac{1}{C_i}}}
\end{align}
\begin{align}
{\Large C_{eq} = (\sum_{i=1}^{N} (C_i)^{-1})^{-1}}
\end{align}
Para 2 Capacitores
Step11: Problema Prático 6.6
Determine a capacitância equivalente nos terminais do circuito da Figura 6.17.
Step12: Exemplo 6.7
Para o circuito da Figura 6.18, determine a tensão em cada capacitor.
Step13: Problema Prático 6.7
Determine a tensão em cada capacitor na Figura 6.20. | Python Code:
print("Exemplo 6.1")
C = 3*(10**(-12))
V = 20
q = C*V
print("Carga armazenada:",q,"C")
w = q**2/(2*C)
print("Energia armazenada:",w,"J")
Explanation: Jupyter Notebook desenvolvido por Gustavo S.S.
"Na ciência, o crédito vai para o homem que convence o mundo,
não para o que primeiro teve a ideia" - Francis Darwin
Capacitores e Indutores
Contrastando com um resistor,
que gasta ou dissipa energia de
forma irreversível, um indutor ou
um capacitor armazena ou libera
energia (isto é, eles têm capacidade
de memória).
Capacitor
Capacitor é um elemento passivo projetado para armazenar energia em seu
campo elétrico. Um capacitor é formado por duas placas condutoras separadas por um
isolante (ou dielétrico).
Quando uma fonte de tensão v é conectada ao capacitor, como na Figura
6.2, a fonte deposita uma carga positiva q sobre uma placa e uma carga negativa
–q na outra placa. Diz-se que o capacitor armazena a carga elétrica. A quantidade
de carga armazenada, representada por q, é diretamente proporcional à
tensão aplicada v de modo que:
\begin{align}
{\Large q = Cv}
\end{align}
Capacitância é a razão entre a carga depositada em uma placa de um capacitor
e a diferença de potencial entre as duas placas, medidas em farads (F). Embora a capacitância C de um capacitor seja a razão entre a carga q por placa e a tensão aplicada v, ela não depende de q ou v, mas, sim, das dimensões físicas do capacitor
\begin{align}
{\Large C = \epsilon \frac{A}{d}}
\end{align}
Onde A é a área de cada placa, d é a distância entre as placas e ε é a permissividade elétrica do material dielétrico entre as placas
Para obter a relação corrente-tensão do capacitor, utilizamos:
\begin{align}
{\Large i = C \frac{dv}{dt}}
\end{align}
Diz-se que os capacitores que realizam a Equação acima são lineares. Para um capacitor não linear, o gráfico da relação corrente-tensão não é uma linha reta. E embora alguns capacitores sejam não lineares, a maioria é linear.
Relação Tensão-Corrente:
\begin{align}
{\Large v(t) = \frac{1}{C} \int_{t_0}^{t} i(\tau)d\tau + v(t_0)}
\end{align}
A Potência Instantânea liberada para o capacitor é:
\begin{align}
{\Large p = vi = Cv \frac{dv}{dt}}
\end{align}
A energia armazenada no capacitor é:
\begin{align}
{\Large w = \int_{-\infty}^{t} p(\tau)d\tau = C \int_{-\infty}^{t} v \frac{dv}{d\tau}d\tau = C \int_{v(-\infty)}^{v(t)} v\,dv = \frac{1}{2} Cv^2}
\end{align}
Percebemos que v(-∞) = 0, pois o capacitor foi descarregado em t = -∞. Logo:
\begin{align}
{\Large w = \frac{1}{2} Cv^2} \\
{\Large w = \frac{q^2}{2C}}
\end{align}
As quais representam a energia armazenada no campo elétrico existente entre as placas do capacitor. Essa energia pode ser recuperada, já que um capacitor ideal não pode dissipar energia. De fato, a palavra capacitor deriva da capacidade de esse elemento armazenar energia em um campo elétrico.
Um capacitor é um circuito aberto em CC.
A tensão em um capacitor não pode mudar abruptamente.
O capacitor ideal não dissipa energia, mas absorve potência do circuito ao armazenar energia em seu campo e retorna energia armazenada previamente ao liberar potência para o circuito.
Um capacitor real, não ideal, possui uma resistência de fuga em paralelo conforme pode ser observado no modelo visto na Figura 6.8. A resistência de fuga pode chegar a valores bem elevados como 100 MΩ e pode ser desprezada para a maioria das aplicações práticas.
Exemplo 6.1
a. Calcule a carga armazenada em um capacitor de 3 pF com 20 V entre seus terminais.
b. Determine a energia armazenada no capacitor.
End of explanation
print("Problema Prático 6.1")
C = 4.5*10**-6
q = 0.12*10**-3
V = q/C
print("Tensão no capacitor:",V,"V")
w = q**2/(2*C)
print("Energia armazenada:",w,"J")
Explanation: Problema Prático 6.1
Qual é a tensão entre os terminais de um capacitor de 4,5 uF se a carga em uma placa
for 0,12 mC? Quanta energia é armazenada?
End of explanation
print("Exemplo 6.2")
import numpy as np
from sympy import *
C = 5*10**-6
t = symbols('t')
v = 10*cos(6000*t)
i = C*diff(v,t)
print("Corrente que passa no capacitor:",i,"A")
Explanation: Exemplo 6.2
A tensão entre os terminais de um capacitor de 5 uF é:
v(t) 10 cos 6.000t V
Calcule a corrente que passa por ele.
End of explanation
print("Problema Prático 6.2")
C = 10*10**-6
v = 75*sin(2000*t)
i = C * diff(v,t)
print("Corrente:",i,"A")
Explanation: Problema Prático 6.2
Se um capacitor de 10 uF for conectado a uma fonte de tensão com:
v(t) 75 sen 2.000t V
determine a corrente através do capacitor.
End of explanation
print("Exemplo 6.3")
C = 2*10**-6
i = 6*exp(-3000*t)*10**-3
v = integrate(i,(t,0,t))
v = v/C
print("Tensão no capacitor:",v,"V")
Explanation: Exemplo 6.3
Determine a tensão através de um capacitor de 2 uF se a corrente através dele for
i(t) 6e^-3.000t mA
Suponha que a tensão inicial no capacitor seja igual a zero.
End of explanation
print("Problema Prático 6.3")
C = 100*10**-6
i = 50*sin(120*np.pi*t)*10**-3
v = integrate(i,(t,0,0.001))
v = v/C
print("Tensão no capacitor para t = 1ms:",v,"V")
v = integrate(i,(t,0,0.005))
v = v/C
print("Tensão no capacitor para t = 5ms:",v,"V")
Explanation: Problema Prático 6.3
A corrente contínua através de um capacitor de 100 uF é:
i(t) = 50 sen(120pi*t) mA.
Calcule a tensão nele nos instantes t = 1 ms e t = 5 ms. Considere v(0) = 0.
End of explanation
print("Exemplo 6.4")
#v(t) = 50t, 0<t<1
#v(t) = 100 - 50t, 1<t<3
#v(t) = -200 + 50t, 3<t<4
#v(t) = 0, caso contrario
C = 200*10**-6
v1 = 50*t
v2 = 100 - 50*t
v3 = -200 + 50*t
i1 = C*diff(v1,t)
i2 = C*diff(v2,t)
i3 = C*diff(v3,t)
print("Corrente para 0<t<1:",i1,"A")
print("Corrente para 1<t<3:",i2,"A")
print("Corrente para 3<t<4:",i3,"A")
Explanation: Exemplo 6.4
Determine a corrente através de um capacitor de 200 mF cuja tensão é mostrada na
Figura 6.9.
End of explanation
print("Problema Prático 6.4")
C = 1*10**-3
i = 50*t*10**-3
v = integrate(i,(t,0,0.002))
v = v/C
print("Tensão para t=2ms:",v,"V")
i = 100*10**-3
v = integrate(i,(t,0,0.005))
v = v/C
print("Tensão para t=5ms:",v,"V")
Explanation: Problema Prático 6.4
Um capacitor inicialmente descarregado de 1 mF possui a corrente mostrada na Figura 6.11 entre seus terminais. Calcule a tensão entre seus terminais nos instantes t = 2 ms
e t = 5 ms.
End of explanation
print("Exemplo 6.5")
C1 = 2*10**-3
C2 = 4*10**-3
I1 = (6*10**-3)*(3000)/(3000 + 2000 + 4000) #corrente que passa no resistor de 2k
Vc1 = I1*2000 # tensao sobre o cap1 = tensao sobre o resistor 2k
wc1 = (C1*Vc1**2)/2
print("Energia do Capacitor 1:",wc1,"J")
Vc2 = I1*4000
wc2 = (C2*Vc2**2)/2
print("Energia do Capacitor 2:",wc2,"J")
Explanation: Exemplo 6.5
Obtenha a energia armazenada em cada capacitor na Figura 6.12a em condições
de CC.
End of explanation
print("Problema Prático 6.5")
C1 = 20*10**-6
C2 = 30*10**-6
Vf = 50 #tensao da fonte
Req = 1000 + 3000 + 6000
Vc1 = Vf*(3000+6000)/Req
Vc2 = Vf*3000/Req
wc1 = (C1*Vc1**2)/2
wc2 = (C2*Vc2**2)/2
print("Energia no Capacitor 1:",wc1,"J")
print("Energia no Capacitor 2:",wc2,"J")
Explanation: Problema Prático 6.5
Em condições CC, determine a energia armazenada nos capacitores da Figura 6.13.
End of explanation
print("Exemplo 6.6")
u = 10**-6 #definicao de micro
Ceq1 = (20*u*5*u)/((20 + 5)*u)
Ceq2 = Ceq1 + 6*u + 20*u
Ceq3 = (Ceq2*60*u)/(Ceq2 + 60*u)
print("Capacitância Equivalente:",Ceq3,"F")
Explanation: Capacitores em Série e Paralelo
Paralelo
A capacitância equivalente de N capacitores ligados em paralelo é a soma
de suas capacitâncias individuais.
\begin{align}
{\Large C_{eq} = C_1 + C_2 + ... + C_N = \sum_{i=1}^{N} C_i}
\end{align}
Série
A capacitância equivalente dos capacitores associados em série é o inverso
da soma dos inversos das capacitâncias individuais.
\begin{align}
{\Large \frac{1}{C_{eq}} = \frac{1}{C_1} + \frac{1}{C_2} + ... + \frac{1}{C_N}}
\end{align}
\begin{align}
{\Large C_{eq} = \frac{1}{\sum_{i=1}^{N} \frac{1}{C_i}}}
\end{align}
\begin{align}
{\Large C_{eq} = (\sum_{i=1}^{N} (C_i)^{-1})^{-1}}
\end{align}
Para 2 Capacitores:
\begin{align}
{\Large C_{eq} = \frac{C_1 C_2}{C_1 + C_2}}
\end{align}
Exemplo 6.6
Determine a capacitância equivalente vista entre os terminais a-b do circuito da
Figura 6.16.
End of explanation
print("Problema Prático 6.6")
Ceq1 = (60*u*120*u)/((60 + 120)*u)
Ceq2 = 20*u + Ceq1
Ceq3 = 50*u + 70*u
Ceq4 = (Ceq2 * Ceq3)/(Ceq2 + Ceq3)
print("Capacitância Equivalente:",Ceq4,"F")
Explanation: Problema Prático 6.6
Determine a capacitância equivalente nos terminais do circuito da Figura 6.17.
End of explanation
print("Exemplo 6.7")
m = 10**-3
Vf = 30
Ceq1 = 40*m + 20*m
Ceq2 = 1/(1/(20*m) + 1/(30*m) + 1/(Ceq1))
print("Capacitância Equivalente:",Ceq2,"F")
q = Ceq2*Vf
v1 = q/(20*m)
v2 = q/(30*m)
v3 = Vf - v1 - v2
print("Tensão v1:",v1,"V")
print("Tensão v2:",v2,"V")
print("Tensão v3:",v3,"V")
Explanation: Exemplo 6.7
Para o circuito da Figura 6.18, determine a tensão em cada capacitor.
End of explanation
print("Problema Prático 6.7")
Vf = 90
Ceq1 = (30*u * 60*u)/(30*u + 60*u)
Ceq2 = Ceq1 + 20*u
Ceq3 = (40*u * Ceq2)/(40*u + Ceq2)
print("Capacitância Equivalente:",Ceq3,"F")
q1 = Ceq3*Vf
v1 = q1/(40*u)
v2 = Vf - v1
q3 = Ceq1*v2
v3 = q3/(60*u)
v4 = q3/(30*u)
print("Tensão v1:",v1,"V")
print("Tensão v2:",v2,"V")
print("Tensão v3:",v3,"V")
print("Tensão v4:",v4,"V")
Explanation: Problema Prático 6.7
Determine a tensão em cada capacitor na Figura 6.20.
End of explanation |
3,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TFP Probabilistic Layers
Step2: Make things Fast!
Before we dive in, let's make sure we're using a GPU for this demo.
To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".
The following snippet will verify that we have access to a GPU.
Step3: Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.)
Step4: Note that preprocess() above returns image, image rather than just image because Keras is set up for discriminative models with an (example, label) input format, i.e. $p_\theta(y|x)$. Since the goal of the VAE is to recover the input x from x itself (i.e. $p_\theta(x|x)$), the data pair is (example, example).
VAE Code Golf
Specify model.
Step5: Do inference.
Step6: Look Ma, No ~~Hands~~Tensors! | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Import { display-mode: "form" }
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
Explanation: TFP Probabilistic Layers: Variational Auto Encoder
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Probabilistic_Layers_VAE"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_VAE.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this example we show how to fit a Variational Autoencoder using TFP's "probabilistic layers."
Dependencies & Prerequisites
End of explanation
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
Explanation: Make things Fast!
Before we dive in, let's make sure we're using a GPU for this demo.
To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".
The following snippet will verify that we have access to a GPU.
End of explanation
datasets, datasets_info = tfds.load(name='mnist',
with_info=True,
as_supervised=False)
def _preprocess(sample):
image = tf.cast(sample['image'], tf.float32) / 255. # Scale to unit interval.
image = image < tf.random.uniform(tf.shape(image)) # Randomly binarize.
return image, image
train_dataset = (datasets['train']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE)
.shuffle(int(10e3)))
eval_dataset = (datasets['test']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE))
Explanation: Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.)
Load Dataset
End of explanation
input_shape = datasets_info.features['image'].shape
encoded_size = 16
base_depth = 32
prior = tfd.Independent(tfd.Normal(loc=tf.zeros(encoded_size), scale=1),
reinterpreted_batch_ndims=1)
encoder = tfk.Sequential([
tfkl.InputLayer(input_shape=input_shape),
tfkl.Lambda(lambda x: tf.cast(x, tf.float32) - 0.5),
tfkl.Conv2D(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(4 * encoded_size, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Flatten(),
tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(encoded_size),
activation=None),
tfpl.MultivariateNormalTriL(
encoded_size,
activity_regularizer=tfpl.KLDivergenceRegularizer(prior)),
])
decoder = tfk.Sequential([
tfkl.InputLayer(input_shape=[encoded_size]),
tfkl.Reshape([1, 1, encoded_size]),
tfkl.Conv2DTranspose(2 * base_depth, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(filters=1, kernel_size=5, strides=1,
padding='same', activation=None),
tfkl.Flatten(),
tfpl.IndependentBernoulli(input_shape, tfd.Bernoulli.logits),
])
vae = tfk.Model(inputs=encoder.inputs,
outputs=decoder(encoder.outputs[0]))
Explanation: Note that preprocess() above returns image, image rather than just image because Keras is set up for discriminative models with an (example, label) input format, i.e. $p_\theta(y|x)$. Since the goal of the VAE is to recover the input x from x itself (i.e. $p_\theta(x|x)$), the data pair is (example, example).
VAE Code Golf
Specify model.
End of explanation
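As a quick added check (not part of the original example), each element produced by train_dataset really is an (x, x) pair, so the model is trained to reconstruct its own input:
x_batch, y_batch = next(iter(train_dataset))
print(x_batch.shape, y_batch.shape)  # identical shapes, e.g. (256, 28, 28, 1)
print(bool(tf.reduce_all(tf.equal(x_batch, y_batch))))  # True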
negloglik = lambda x, rv_x: -rv_x.log_prob(x)
vae.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3),
loss=negloglik)
_ = vae.fit(train_dataset,
epochs=15,
validation_data=eval_dataset)
Explanation: Do inference.
End of explanation
# We'll just examine ten random digits.
x = next(iter(eval_dataset))[0][:10]
xhat = vae(x)
assert isinstance(xhat, tfd.Distribution)
#@title Image Plot Util
import matplotlib.pyplot as plt
def display_imgs(x, y=None):
if not isinstance(x, (np.ndarray, np.generic)):
x = np.array(x)
plt.ioff()
n = x.shape[0]
fig, axs = plt.subplots(1, n, figsize=(n, 1))
if y is not None:
fig.suptitle(np.argmax(y, axis=1))
for i in range(n):
axs.flat[i].imshow(x[i].squeeze(), interpolation='none', cmap='gray')
axs.flat[i].axis('off')
plt.show()
plt.close()
plt.ion()
print('Originals:')
display_imgs(x)
print('Decoded Random Samples:')
display_imgs(xhat.sample())
print('Decoded Modes:')
display_imgs(xhat.mode())
print('Decoded Means:')
display_imgs(xhat.mean())
# Now, let's generate ten never-before-seen digits.
z = prior.sample(10)
xtilde = decoder(z)
assert isinstance(xtilde, tfd.Distribution)
print('Randomly Generated Samples:')
display_imgs(xtilde.sample())
print('Randomly Generated Modes:')
display_imgs(xtilde.mode())
print('Randomly Generated Means:')
display_imgs(xtilde.mean())
Explanation: Look Ma, No ~~Hands~~Tensors!
End of explanation |
3,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Marked Point Pattern
In addition to the unmarked point pattern, non-binary attributes might be associated with each point, leading to the so-called marked point pattern. The characteristics of a marked point pattern are
Step1: Create an attribute named quad which has a value for each event.
Step2: Attach the attribute quad to the point pattern
Step3: Explode a marked point pattern into a sequence of individual point patterns. Since the mark quad has 4 unique values, the sequence will be of length 4.
Step4: Plot the 4 individual sequences
Step5: Plot the 4 unmarked point patterns using the same axes for a convenient comparison of locations | Python Code:
from pysal.explore.pointpats import PoissonPointProcess, PoissonClusterPointProcess, Window, poly_from_bbox, PointPattern
import pysal.lib as ps
from pysal.lib.cg import shapely_ext
%matplotlib inline
import matplotlib.pyplot as plt
# open the virginia polygon shapefile
va = ps.io.open(ps.examples.get_path("virginia.shp"))
polys = [shp for shp in va]
# Create the exterior polygons for VA from the union of the county shapes
state = shapely_ext.cascaded_union(polys)
# create window from virginia state boundary
window = Window(state.parts)
window.bbox
window.centroid
samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=False)
csr = PointPattern(samples.realizations[0])
cx, cy = window.centroid
cx
cy
west = csr.points.x < cx
south = csr.points.y < cy
east = 1 - west
north = 1 - south
Explanation: Marked Point Pattern
In addition to the unmarked point pattern, non-binary attributes might be associated with each point, leading to the so-called marked point pattern. The characteristics of a marked point pattern are:
The location pattern of the events is of interest
Stochastic attribute attached to the events is of interest
Unmarked point pattern can be modified to be a marked point pattern using the method add_marks while the method explode could decompose a marked point pattern into a sequence of unmarked point patterns. Both methods belong to the class PointPattern.
End of explanation
quad = 1 * east * north + 2 * west * north + 3 * west * south + 4 * east * south
type(quad)
quad
Explanation: Create an attribute named quad which has a value for each event.
End of explanation
csr.add_marks([quad], mark_names=['quad'])
csr.df
Explanation: Attach the attribute quad to the point pattern
End of explanation
csr_q = csr.explode('quad')
len(csr_q)
csr
csr.summary()
Explanation: Explode a marked point pattern into a sequence of individual point patterns. Since the mark quad has 4 unique values, the sequence will be of length 4.
End of explanation
plt.xlim?
plt.xlim()
for ppn in csr_q:
ppn.plot()
Explanation: Plot the 4 individual sequences
End of explanation
x0, y0, x1, y1 = csr.mbb
ylim = (y0, y1)
xlim = (x0, x1)
for ppn in csr_q:
ppn.plot()
plt.xlim(xlim)
plt.ylim(ylim)
Explanation: Plot the 4 unmarked point patterns using the same axes for a convenient comparison of locations
End of explanation |
3,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()  # bag of words
for idx, row in reviews.iterrows():
    for word in row[0].split(' '):
        total_counts[word] += 1
print("Total words in data set:", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}  # word-to-index dictionary, per the exercise
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
    # Count vocabulary words in the text, following the algorithm in the accompanying exercise.
    word_vector = np.zeros(len(vocab), dtype=np.int_)
    for word in text.split(' '):
        idx = word2idx.get(word, None)
        if idx is not None:
            word_vector[idx] += 1
    return word_vector
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
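A tiny added illustration (not from the original notebook) of what to_categorical does to the 0/1 labels: each class index becomes a two-column one-hot row, matching the two softmax output units built next.
print(to_categorical([0, 1, 1], 2))  # -> [[1., 0.], [0., 1.], [0., 1.]] as a float array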
# Network building
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()
    # One reasonable configuration (the hidden-layer sizes are a free choice):
    # 10000 inputs -> two ReLU hidden layers -> 2-unit softmax output, SGD training.
    net = tflearn.input_data([None, 10000])
    net = tflearn.fully_connected(net, 200, activation='ReLU')
    net = tflearn.fully_connected(net, 25, activation='ReLU')
    net = tflearn.fully_connected(net, 2, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
                             loss='categorical_crossentropy')
    model = tflearn.DNN(net)
    return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
3,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-LL
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
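Since the cardinality is 1.N, several of the valid choices can be recorded; a sketch under the assumption that repeated DOC.set_value calls append values (the picks below are illustrative only):
# DOC.set_value("Dry deposition")
# DOC.set_value("Wet deposition (impaction scavenging)")
# DOC.set_value("Coagulation")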
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
3,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p>参考にしました</p>
<p>http
Step1: <p>メトロポリス法</p>
<p>(1)パラメーターqの初期値を選ぶ</p>
<p>(2)qを増やすか減らすかをランダムに決める</p>
<p>(3)q(新)において尤度が大きくなるならqの値をq(新)に変更する</p>
<p>(4)q(新)で尤度が小さくなる場合であっても、確率rでqの値をq(新)に変更する</p>
Step2: <p>例題:個体差と生存種子数</p>
ある植物を考える。i番目の個体の生存種子数をyiとする。yiは0以上8以下である。以下はヒストグラムである。
Step3: 種子生存確率が9通の生存確率qの二項分布で説明できるとする。
<p>実際のデータを説明できていない。</p>
・・・個体差を考慮できるGLMMを用いる。
<p>logit(qi) = β + ri</p>
切片βは全個体に共通するパラメーター、riは個体差で平均0、標準偏差sの正規分布に従う。
事後分布∝p(Y|β, {ri})×事前分布
<p>βの事前分布には無情報事前分布を指定する。</p>
p(β)=1/√2π×100^2 × exp(-β^2/2×100^2)
Step4: <p>riの事前分布には平均0、標準偏差sの正規分布を仮定する。</p>
p(ri|s)=1/√2π×s^2 × exp(-ri^2/2×s^2)
<p>sの事前分布には無情報事前分布を指定する。</p>
p(s)=(0から10^4までの連続一様分布)
Step5: yの数が大きく全て使うと時間がかかりすぎるので6体だけ選び出す。 | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pymc as pm2
import pymc3 as pm
import time
import math
import numpy.random as rd
import pandas as pd
from pymc3 import summary
from pymc3.backends.base import merge_traces
import theano.tensor as T
Explanation: <p>参考にしました</p>
<p>http://qiita.com/kenmatsu4/items/a0c703762a2429e21793</p>
<p>http://www.slideshare.net/shima__shima/2014-mtokyoscipy6</p>
<p>https://github.com/scipy-japan/tokyo-scipy/tree/master/006/shima__shima</p>
<p>岩波データサイエンス</p>
<p>データ解析のための統計モデリング入門</p>
End of explanation
def comb(n, r):
if n == 0 or r == 0: return 1
return comb(n, r - 1) * (n - r + 1) / r
def prob(n, y, q):
p = comb(n, y) * q ** y * (1 - q) ** (n - y)
return p
def likelighood(n, y, q):
p = 1.0
for i in y:
p = p*prob(n, i, q)
return p
def metropolis(n, y, q, b, num):
qlist = np.array([q])
for i in range(num):
old_q = q
q = q+np.random.choice([b, -b])
old_l = likelighood(n, y, old_q)
new_l = likelighood(n, y, q)
if new_l > old_l:
old_q = q
else:
r = new_l/old_l
q = np.random.choice([q, old_q], p=[r, 1.0-r])
q = round(q, 5)
qlist = np.append(qlist, q)
return q, qlist
y = [4, 3, 4, 5, 5, 2, 3, 1, 4, 0, 1, 5, 5, 6, 5, 4, 4, 5, 3, 4]
q, qlist = metropolis(8, y, 0.3, 0.01, 10000)
plt.plot(qlist)
plt.hist(qlist)
qlist.mean()
N = 40
X = np.random.uniform(10, size=N)
Y = X*30 + 4 + np.random.normal(0, 16, size=N)
plt.plot(X, Y, "o")
multicore = False
saveimage = False
itenum = 1000
t0 = time.clock()
chainnum = 3
with pm.Model() as model:
alpha = pm.Normal('alpha', mu=0, sd =20)
beta = pm.Normal('beta', mu=0, sd=20)
sigma = pm.Uniform('sigma', lower=0)
y = pm.Normal('y', mu=beta*X + alpha, sd=sigma, observed=Y)
start = pm.find_MAP()
step = pm.NUTS(state=start)
with model:
if(multicore):
trace = pm.sample(itenum, step, start=start,
njobs=chainnum, random_seed=range(chainnum),
progress_bar=False)
else:
ts = [pm.sample(itenum, step, chain=i, progressbar=False) for i in range(chainnum)]
trace = merge_traces(ts)
if(saveimage):
pm.tracepot(trace).savefig("simple_linear_trace.png")
print "Rhat="+str(pm.gelman_rubin(trace))
t1=time.clock()
print "elapsed time=" + str(t1-t0)
if(not multicore):
trace = ts[0]
with model:
pm.traceplot(trace, model.vars)
pm.forestplot(trace)
summary(trace)
multicore = True
t0 = time.clock()
with model:
if(multicore):
trace = pm.sample(itenum, step, start=start,
njobs=chainnum, random_seed=range(chainnum),
progress_bar=False)
else:
ts = [pm.sample(itenum, step, chain=i, progressbar=False) for i in range(chainnum)]
trace = merge_traces(ts)
if(saveimage):
pm.tracepot(trace).savefig("simple_linear_trace.png")
print "Rhat="+str(pm.gelman_rubin(trace))
t1=time.clock()
print "elapsed time=" + str(t1-t0)
if(not multicore):
trace = ts[0]
with model:
pm.traceplot(trace, model.vars)
Explanation: <p>メトロポリス法</p>
<p>(1)パラメーターqの初期値を選ぶ</p>
<p>(2)qを増やすか減らすかをランダムに決める</p>
<p>(3)q(新)において尤度が大きくなるならqの値をq(新)に変更する</p>
<p>(4)q(新)で尤度が小さくなる場合であっても、確率rでqの値をq(新)に変更する</p>
End of explanation
data = pd.read_csv("http://hosho.ees.hokudai.ac.jp/~kubo/stat/iwanamibook/fig/hbm/data7a.csv")
plt.bar(range(9), data.groupby('y').sum().id)
data.groupby('y').sum().T
Explanation: <p>例題:個体差と生存種子数</p>
ある植物を考える。i番目の個体の生存種子数をyiとする。yiは0以上8以下である。以下はヒストグラムである。
End of explanation
plt.hist(np.random.normal(0, 100, 1000))
Explanation: 種子生存確率が9通の生存確率qの二項分布で説明できるとする。
<p>実際のデータを説明できていない。</p>
・・・個体差を考慮できるGLMMを用いる。
<p>logit(qi) = β + ri</p>
切片βは全個体に共通するパラメーター、riは個体差で平均0、標準偏差sの正規分布に従う。
事後分布∝p(Y|β, {ri})×事前分布
<p>βの事前分布には無情報事前分布を指定する。</p>
p(β)=1/√2π×100^2 × exp(-β^2/2×100^2)
End of explanation
Y = np.array(data.y)[:6]
Explanation: <p>riの事前分布には平均0、標準偏差sの正規分布を仮定する。</p>
p(ri|s)=1/√2π×s^2 × exp(-ri^2/2×s^2)
<p>sの事前分布には無情報事前分布を指定する。</p>
p(s)=(0から10^4までの連続一様分布)
End of explanation
def invlogit(v):
return T.exp(v)/(T.exp(v) + 1)
with pm.Model() as model_hier:
s = pm.Uniform('s', 0, 1.0E+2)
beta = pm.Normal('beta', 0, 1.0E+2)
r = pm.Normal('r', 0, s, shape=len(Y))
q = invlogit(beta+r)
y = pm.Binomial('y', 8, q, observed=Y) #p(q|Y)
step = pm.Slice([s, beta, r])
trace_hier = pm.sample(1000, step)
with model_hier:
pm.traceplot(trace_hier, model_hier.vars)
summary(trace_hier)
trace_hier
x_sample = np.random.normal(loc=1.0, scale=1.0, size=1000)
with pm.Model() as model:
mu = pm.Normal('mu', mu=0., sd=0.1)
x = pm.Normal('x', mu=mu, sd=1., observed=x_sample)
with model:
start = pm.find_MAP()
step = pm.NUTS()
trace = pm.sample(10000, step, start)
pm.traceplot(trace)
plt.savefig("result1.jpg")
ndims = 2
nobs = 20
n = 1000
y_sample = np.random.binomial(1, 0.5, size=(n,))
x_sample=np.empty(n)
x_sample[y_sample==0] = np.random.normal(-1, 1, size=(n, ))[y_sample==0]
x_sample[y_sample==1] = np.random.normal(1, 1, size=(n, ))[y_sample==1]
with pm.Model() as model:
p = pm.Beta('p', alpha=1.0, beta=1.0)
y = pm.Bernoulli('y', p=p, observed=y_sample)
mu0 = pm.Normal('mu0', mu=0., sd=1.)
mu1 = pm.Normal('mu1', mu=0., sd=1.)
mu = pm.Deterministic('mu', mu0 * (1-y_sample) + mu1 * y_sample)
x = pm.Normal('x', mu=mu, sd=1., observed=x_sample)
with model:
start = pm.find_MAP()
step = pm.NUTS()
trace = pm.sample(10000, step, start)
pm.traceplot(trace)
plt.savefig("result2.jpg")
Explanation: yの数が大きく全て使うと時間がかかりすぎるので6体だけ選び出す。
End of explanation |
3,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flight Delay Predictions with PixieDust
<img style="max-width
Step1: <h3>If PixieDust was just installed or upgraded, <span style="color
Step2: Train multiple classification models
The following cells train four models
Step3: Evaluate the models
pixiedust_flightpredict provides a plugin to the PixieDust display api and adds a menu (look for the plane icon) that computes the accuracy metrics for the models, including the confusion table.
Step4: Run the predictive model application
This cell runs the embedded PixieDust application, which lets users enter flight information. The models run and predict the probability that the flight will be on-time.
Step5: Get aggregated results for all the flights that have been predicted.
The following cell shows a map with all the airports and flights searched to-date. Each edge represents an aggregated view of all the flights between 2 airports. Click on it to display a group list of flights showing how many users are on the same flight. | Python Code:
!pip install --upgrade --user pixiedust
!pip install --upgrade --user pixiedust-flightpredict
Explanation: Flight Delay Predictions with PixieDust
<img style="max-width: 800px; padding: 25px 0px;" src="https://ibm-watson-data-lab.github.io/simple-data-pipe-connector-flightstats/flight_predictor_architecture.png"/>
This notebook features a Spark Machine Learning application that predicts whether a flight will be delayed based on weather data. Read the step-by-step tutorial
The application workflow is as follows:
1. Configure the application parameters
2. Load the training and test data
3. Build the classification models
4. Evaluate the models and iterate
5. Launch a PixieDust embedded application to run the models
Prerequisite
This notebook is a follow-up to Predict Flight Delays with Apache Spark MLlib, FlightStats, and Weather Data. Follow the steps in that tutorial and at a minimum:
Set up a FlightStats account
Provision the Weather Company Data service
Obtain or build the training and test data sets
Learn more about the technology used:
Weather Company Data
FlightStats
Apache Spark MLlib
PixieDust
pixiedust_flightpredict
Install latest pixiedust and pixiedust-flightpredict plugin
Make sure you are running the latest pixiedust and pixiedust-flightpredict versions. After upgrading, restart the kernel before continuing to the next cells.
End of explanation
import pixiedust_flightpredict
pixiedust_flightpredict.configure()
Explanation: <h3>If PixieDust was just installed or upgraded, <span style="color: red">restart the kernel</span> before continuing.</h3>
Import required python package and set Cloudant credentials
Have available your credentials for Cloudant, Weather Company Data, and FlightStats, as well as the training and test data info from Predict Flight Delays with Apache Spark MLlib, FlightStats, and Weather Data
Run this cell to launch and complete the Configuration Dashboard, where you'll load the training and test data. Ensure all <i class="fa fa-2x fa-times" style="font-size:medium"></i> tasks are completed. After editing configuration, you can re-run this cell to see the updated status for each task.
End of explanation
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import Vectors
from numpy import array
import numpy as np
import math
from datetime import datetime
from dateutil import parser
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
logRegModel = LogisticRegressionWithLBFGS.train(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, iterations=1000, validateData=False, intercept=False)
print(logRegModel)
from pyspark.mllib.classification import NaiveBayes
#NaiveBayes requires non negative features, set them to 0 for now
modelNaiveBayes = NaiveBayes.train(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label, \
np.fromiter(map(lambda x: x if x>0.0 else 0.0,lp.features.toArray()),dtype=np.int)\
))\
)
print(modelNaiveBayes)
from pyspark.mllib.tree import DecisionTree
modelDecisionTree = DecisionTree.trainClassifier(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={})
print(modelDecisionTree)
from pyspark.mllib.tree import RandomForest
modelRandomForest = RandomForest.trainClassifier(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={},numTrees=100)
print(modelRandomForest)
Explanation: Train multiple classification models
The following cells train four models: Logistic Regression, Naive Bayes, Decision Tree, and Random Forest.
Feel free to update these models or build your own models.
End of explanation
display(testData)
Explanation: Evaluate the models
pixiedust_flightpredict provides a plugin to the PixieDust display api and adds a menu (look for the plane icon) that computes the accuracy metrics for the models, including the confusion table.
End of explanation
import pixiedust_flightpredict
from pixiedust_flightpredict import *
pixiedust_flightpredict.flightPredict("LAS")
Explanation: Run the predictive model application
This cell runs the embedded PixieDust application, which lets users enter flight information. The models run and predict the probability that the flight will be on-time.
End of explanation
import pixiedust_flightpredict
pixiedust_flightpredict.displayMapResults()
Explanation: Get aggregated results for all the flights that have been predicted.
The following cell shows a map with all the airports and flights searched to-date. Each edge represents an aggregated view of all the flights between 2 airports. Click on it to display a group list of flights showing how many users are on the same flight.
End of explanation |
3,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding relations between clusters of people and clusters of stuff they purchase
The old saying goes something on the lines of "you are what you eat". On modern, digitally connected societies the sentiment shifts towards "you are what you consume". That may include food, of course.
One important difference since a decade ago is what can be measured about how a user consumes a product. Lately, it seems like every little thing that we purchase/consume can be used to build a projection of ourselves from our habits.
That "projection" allows companies to build products you enjoy more (and thus, continue paying for). Like any other tool in history it can be used for less constructive purposes. Either way, the idea is poking the reptilian parts of your reward circuitry so they release the right cocktail of hormones. A cocktail that makes you reach for your wallet or click "I accept the terms and conditions".
There are many classic recipes for such cocktails and a lot of work is put on refining them. As a user, actively tasting those recipes in modern products can be as interesting as wine tasting, minus the inebriation.
The ethics around studying customers are not the focus of this post (as relevant as that discussion is). The actual focus of this post is trying out different open-source libraries on a simplified model of "people according to what we know about them".
We will use graph theory, statistics, mathematics, machine learning and borrow a technique or two from unfamiliar places. Have fun!
Less informal problem statement
Suppose we have two populations $A$ and $B$ with totally different features. The two populations are related by a Function $\cal{F}$ mapping $A_{i}$ to ${B_j ... B_k }$. So $\cal{F}$ takes an element of $A$ and returns a subset of $B$.
We can model $\cal{F}$ with an undirected graph. Relations of $A$ and $B$ thru $\cal{F}$ can look like the following figure.
Step1: The structure which we just described here happens to match a well-studied family of graphs known as Bipartite Graphs. There are tons of algorithms/ math tools that can be used to study such graphs. Lucky us!
If we were to find clusters of elements of $A$ and $B$ -separately-, and we would count the edges between the elements of inside the different clusters, that would give us one way to measure how related are clusters of populations of $A$ and $B$.
For the previous example, such graph would look similar to the following figure.
Step2: Given some data we generate, we want to see what different algorithms tell us about
Step3: Agglomerative Clustering
Borrowing example from
Step4: Try other linkage methods in agglomerative | Python Code:
F=graphviz.Graph()#(engine='neato')
F.graph_attr['rankdir'] = 'LR'
F.edge('A_1','B_1')
F.edge('A_1','B_2')
F.edge('A_2','B_1')
F.edge('A_3','B_1')
F.edge('A_4','B_2')
F.edge('A_5','B_2')
F.edge('A_5','B_3')
F
Explanation: Finding relations between clusters of people and clusters of stuff they purchase
The old saying goes something on the lines of "you are what you eat". On modern, digitally connected societies the sentiment shifts towards "you are what you consume". That may include food, of course.
One important difference since a decade ago is what can be measured about how a user consumes a product. Lately, it seems like every little thing that we purchase/consume can be used to build a projection of ourselves from our habits.
That "projection" allows companies to build products you enjoy more (and thus, continue paying for). Like any other tool in history it can be used for less constructive purposes. Either way, the idea is poking the reptilian parts of your reward circuitry so they release the right cocktail of hormones. A cocktail that makes you reach for your wallet or click "I accept the terms and conditions".
There are many classic recipes for such cocktails and a lot of work is put on refining them. As a user, actively tasting those recipes in modern products can be as interesting as wine tasting, minus the inebriation.
The ethics around studying customers are not the focus of this post (as relevant as that discussion is). The actual focus of this post is trying out different open-source libraries on a simplified model of "people according to what we know about them".
We will use graph theory, statistics, mathematics, machine learning and borrow a technique or two from unfamiliar places. Have fun!
Less informal problem statement
Suppose we have two populations $A$ and $B$ with totally different features. The two populations are related by a Function $\cal{F}$ mapping $A_{i}$ to ${B_j ... B_k }$. So $\cal{F}$ takes an element of $A$ and returns a subset of $B$.
We can model $\cal{F}$ with an undirected graph. Relations of $A$ and $B$ thru $\cal{F}$ can look like the following figure.
End of explanation
F=graphviz.Graph()
F.graph_attr['rankdir'] = 'LR'
F.edge('A_1, A_2, A_3','B_1')
F.edge('A_1, A_2, A_3','B_1')
F.edge('A_1, A_2, A_3','B_1')
F.edge('A_1, A_2, A_3','B_2, B_3')
F.edge('A_4, A_5','B_2, B_3')
F.edge('A_4, A_5','B_2, B_3')
F
Explanation: The structure which we just described here happens to match a well-studied family of graphs known as Bipartite Graphs. There are tons of algorithms/ math tools that can be used to study such graphs. Lucky us!
If we were to find clusters of elements of $A$ and $B$ -separately-, and we would count the edges between the elements of inside the different clusters, that would give us one way to measure how related are clusters of populations of $A$ and $B$.
For the previous example, such graph would look similar to the following figure.
End of explanation
with open("population_config.json","r") as configfilex:
cs=json.load(configfilex)
population=makePopulation(100,["something","age","postcode"],
pop_crude,
c1=cs["c1"],
c2=cs["c2"],
c3=cs["c3"])
print (population.keys())
n_clusters=len(set(population["cluster_label"]))
print (n_clusters)
Explanation: Given some data we generate, we want to see what different algorithms tell us about:
Easily visible clusters in $A$ and $B$.
Given clusters in $A$ and $B$, find what can $\cal{F}$ say about them.
See how well the algorithms are suited for finding the "Truth" about the data.
Simple, right?. In order to do this, we will need to find a way to:
Generate $A$ and $B$ given some "True" cluster membership and some constraints.
Generate $\cal{F}$ according to some "True" relation we want to study.
For generating $A$ and $B$ we want to consider different proportions and dependencies.
We expect that different algorithms will be more suited for different types of structures and relations. Unfortunately, in real life we can't really know this in retrospective. If you are lucky, there will be some theory on top of which you can -reasonably- support the choice of an algorithm, but if you are exploring something very dynamic you are better off with an open mind.
Testing different user models, spec models and purchasing functions
The Bipartite graph (Bigraph) representation lends itself pretty easily to model relations observable in snapshots of time in which a set of people ($A$) perform an action over a set of items ($B$). And let's call that action a decision to purchase. The set of those decisions is returned by $\cal{F}$.
In that way we can take a dataset of timestamps of purchases, and use it to construct a Bigraph.
Configuring crude population clusters
End of explanation
fittable=np.array([population["something"],
population["postcode"],
population["age"]]).T
alg=AgglomerativeClustering(n_clusters=n_clusters, linkage='ward')
alg.fit(fittable)
clusterlabels=list(zip(alg.labels_,population["cluster_label"]))
print("Outs:{}".format(clusterlabels))
#ninja visualizing hamming distance using html
with open("htmlcolors.json","r") as colorfile:
colors=json.load(colorfile)
#sample "nclusters" colors from that list, use them for visulaizing
colors=list(colors.keys())
colors=np.random.choice(colors,size=n_clusters)
print(colors)
colormatch={label:colors[indx] for indx,label in enumerate(population["cluster_label"])}
#todo: something like map(lambda x,y: colormatch[x]=y, (alg.labels_,colors))
print (colormatch)
cell="<td bgcolor={}>{}</td>"
row="<tr>{}</tr>"
table="<!DOCTYPE html><html><body><table>{}</table></body></html>"
#%% HTML
#<iframe width="100%" height "25%" src="outs/clabels.html"></iframe>
Explanation: Agglomerative Clustering
Borrowing example from: http://scikit-learn.org/stable/auto_examples/cluster/plot_digits_linkage.html#sphx-glr-auto-examples-cluster-plot-digits-linkage-py
End of explanation
linkage_methods=['ward', 'average', 'complete']
aggl=lambda x: AgglomerativeClustering(n_clusters=n_clusters, linkage=x)
import webcolors
Explanation: Try other linkage methods in agglomerative
End of explanation |
3,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialize
Define the Training Data Set
Define the training dataset for the independent and dependent variables
Step1: Define the Test Set
Define the training dataset for the independent variables. In this case it is a "continuous" curve
Step2: Train the Model
Instantiate the kernels, instantiate the GPR with the kernel, and train the model.
Step3: Regression
Perform the regression based on the set of training data. The best estimate of the prediction is given by the mean of the distribution from which the posterior samples are drawn.
Predict (Initial Hyperparameters)
Perform regression using the initial user-specified hyperparameters.
Step4: Optimize Hyperparameters
Optimize over the hyperparameters.
Step5: array([ 1.47895967, 3.99711988, 0.16295754])
array([ 1.80397587, 4.86011667, 0.18058626])
Predict (Optimized Hyperparameters)
Perform the regression from the hyperparameters that optimize the log marginal likelihood. Note the improvement in the fit in comparison to the actual function (red dotted line). | Python Code:
x = np.random.RandomState(0).uniform(-5, 5, 20)
#x = np.random.uniform(-5, 5, 20)
y = x*np.sin(x)
#y += np.random.normal(0,0.5,y.size)
y += np.random.RandomState(34).normal(0,0.5,y.size)
Explanation: Initialize
Define the Training Data Set
Define the training dataset for the independent and dependent variables
End of explanation
x_star = np.linspace(-5,5,500)
Explanation: Define the Test Set
Define the training dataset for the independent variables. In this case it is a "continuous" curve
End of explanation
#Define the basic kernels
k1 = SqExp(0.45,2)
k2 = RQ(0.5,2,3)
k3 = ExpSine(0.1,2,30)
k4 = WhiteNoise(0.01)
#Define the combined kernel
k1 = k1+k4
#Instantiate the GP predictor object with the desired kernel
gp = GPR(k1)
#Train the model
gp.train(x,y)
Explanation: Train the Model
Instantiate the kernels, instantiate the GPR with the kernel, and train the model.
End of explanation
#Predict a new set of test data given the independent variable observations
y_mean1,y_var1 = gp.predict(x_star,False)
#Convert the variance to the standard deviation
y_err1 = np.sqrt(y_var1)
plt.scatter(x,y,s=30)
plt.plot(x_star,x_star*np.sin(x_star),'r:')
plt.plot(x_star,y_mean1,'k-')
plt.fill_between(x_star,y_mean1+y_err1,y_mean1-y_err1,alpha=0.5)
Explanation: Regression
Perform the regression based on the set of training data. The best estimate of the prediction is given by the mean of the distribution from which the posterior samples are drawn.
Predict (Initial Hyperparameters)
Perform regression using the initial user-specified hyperparameters.
End of explanation
gp.optimize('SLSQP')
Explanation: Optimize Hyperparameters
Optimize over the hyperparameters.
End of explanation
#Predict a new set of test data given the independent variable observations
y_mean2,y_var2 = gp.predict(x_star,False)
#Convert the variance to the standard deviation
y_err2 = np.sqrt(y_var2)
plt.scatter(x,y,s=30)
plt.plot(x_star,x_star*np.sin(x_star),'r:')
plt.plot(x_star,y_mean2,'k-')
plt.fill_between(x_star,y_mean2+y_err2,y_mean2-y_err2,alpha=0.5)
Explanation: array([ 1.47895967, 3.99711988, 0.16295754])
array([ 1.80397587, 4.86011667, 0.18058626])
Predict (Optimized Hyperparameters)
Perform the regression from the hyperparameters that optimize the log marginal likelihood. Note the improvement in the fit in comparison to the actual function (red dotted line).
End of explanation |
3,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Predicting Student performance</h1>
<br>
Data
Step1: <h3>Male - Female distribution</h3>
Step2: <h3>Age distribution</h3>
Step3: <h3>Grade distribution</h3>
Step4: <h3>SVM</h3>
<h4>Portuguese</h4>
Step5: <h4>Math</h4>
Step6: <h3>Naive Bayes</h3>
<h4>Portuguese</h4>
Step7: <h4>Math</h4> | Python Code:
import os.path
base_dir = os.path.join('data')
input_path_port = os.path.join('student', 'student_port.csv')
input_path_math = os.path.join('student', 'student_math.csv')
file_name_port = os.path.join(base_dir, input_path_port)
file_name_math = os.path.join(base_dir, input_path_math)
filtered_port = sc.textFile(file_name_port).filter(lambda l: 'school' not in l)
filtered_math = sc.textFile(file_name_math).filter(lambda l: 'school' not in l)
print 'Count : ' + str(filtered_port.count())
print filtered_port.take(1)
print 'Count : ' + str(filtered_math.count())
print filtered_math.take(1)
from pyspark.mllib.regression import LabeledPoint
def make_features(line):
raw_features = line.split(';')
features = []
if int(raw_features[32]) > 10:
lbl = 1
else:
lbl = 0
# Female = [0, 1], Male = [1, 0]
features.extend([0, 1] if raw_features[1] == '"F"' else [1, 0])
# Age
features.append(int(raw_features[2]))
# Family size < 3 = 0, > 3 = 1
features.append(0 if raw_features[4] == '"LT3"' else 1)
# mother education
features.append(int(raw_features[6]))
# Father education
features.append(int(raw_features[7]))
# Study time
features.append(int(raw_features[13]))
# Alcohol consumption
features.append(int(raw_features[26]))
return LabeledPoint(lbl, features)
features_port = filtered_port.map(make_features)
features_math = filtered_math.map(make_features)
print features_port.take(1)
print features_math.take(1)
Explanation: <h1>Predicting Student performance</h1>
<br>
Data : https://archive.ics.uci.edu/ml/datasets/Student+Performance
<h4>Attributes for both student-mat.csv (Math course) and student-por.csv (Portuguese language course) datasets:</h4>
<ol type="1">
<li>school - student's school (binary: "GP" - Gabriel Pereira or "MS" - Mousinho da Silveira)
<li>sex - student's sex (binary: "F" - female or "M" - male)
<li>age - student's age (numeric: from 15 to 22)
<li>address - student's home address type (binary: "U" - urban or "R" - rural)
<li>famsize - family size (binary: "LE3" - less or equal to 3 or "GT3" - greater than 3)
<li>Pstatus - parent's cohabitation status (binary: "T" - living together or "A" - apart)
<li>Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)
<li>Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education)
<li>Mjob - mother's job (nominal: "teacher", "health" care related, civil "services" (e.g. administrative or police), "at_home" or "other")
<li>Fjob - father's job (nominal: "teacher", "health" care related, civil "services" (e.g. administrative or police), "at_home" or "other")
<li>reason - reason to choose this school (nominal: close to "home", school "reputation", "course" preference or "other")
<li>guardian - student's guardian (nominal: "mother", "father" or "other")
<li>traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
<li>studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
<li>failures - number of past class failures (numeric: n if 1<=n<3, else 4)
<li>schoolsup - extra educational support (binary: yes or no)
<li>famsup - family educational support (binary: yes or no)
<li>paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
<li>activities - extra-curricular activities (binary: yes or no)
<li>nursery - attended nursery school (binary: yes or no)
<li>higher - wants to take higher education (binary: yes or no)
<li>internet - Internet access at home (binary: yes or no)
<li>romantic - with a romantic relationship (binary: yes or no)
<li>famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
<li>freetime - free time after school (numeric: from 1 - very low to 5 - very high)
<li>goout - going out with friends (numeric: from 1 - very low to 5 - very high)
<li>Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
<li>Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
<li>health - current health status (numeric: from 1 - very bad to 5 - very good)
<li>absences - number of school absences (numeric: from 0 to 93)
<h4>These grades are related with the course subject, Math or Portuguese:</h4>
<li>G1 - first period grade (numeric: from 0 to 20)
<li>G2 - second period grade (numeric: from 0 to 20)
<li>G3 - final grade (numeric: from 0 to 20, output target)
Additional note: there are several (382) students that belong to both datasets .
These students can be identified by searching for identical attributes
that characterize each student.
End of explanation
from pyspark.mllib.stat import Statistics
import matplotlib.pyplot as plt
%matplotlib inline
# http://karthik.github.io/2014-02-18-UTS/lessons/thw-matplotlib/tutorial.html
# Portuguese
summary1 = Statistics.colStats(features_port.map(lambda lp: lp.features))
labels = ['Male', 'Female']
fracs1 = [summary1.mean()[0], summary1.mean()[1]]
explode = (0, 0.05)
fig = plt.figure(figsize=(15, 7))
fig.suptitle('Portuguese - Math', fontsize=14, fontweight='bold')
ax1 = fig.add_subplot(121)
ax1.pie(fracs1, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90)
# Math
summary2 = Statistics.colStats(features_math.map(lambda lp: lp.features))
fracs2 = [summary2.mean()[0], summary2.mean()[1]]
ax2 = fig.add_subplot(122)
ax2.pie(fracs2, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90)
plt.show()
pass
Explanation: <h3>Male - Female distribution</h3>
End of explanation
# Portuguese
x_axis_port = [15, 16, 17, 18, 19, 20, 21, 22]
y_axis_port = (features_port
.map(lambda lp: (lp.features[2], 1))
.reduceByKey(lambda x, y: x + y)
.map(lambda tup: tup[1])
.collect())
fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(121)
ax1.set_xlabel('Age')
ax1.set_ylabel('Amount')
ax1.bar(x_axis_port, y_axis_port, color='lightgreen', align='center')
# Math
x_axis_math = [15, 16, 17, 18, 19, 20, 21, 22]
y_axis_math = (features_math
.map(lambda lp: (lp.features[2], 1))
.reduceByKey(lambda x, y: x + y)
.map(lambda tup: tup[1])
.collect())
fig.suptitle('Portuguese - Math', fontsize=14, fontweight='bold')
ax2 = fig.add_subplot(122)
ax2.set_xlabel('Age')
ax2.set_ylabel('Amount')
ax2.bar(x_axis_math, y_axis_math, color='lightgreen', align='center')
pass
Explanation: <h3>Age distribution</h3>
End of explanation
# Portuguese
first_port = features_port.filter(lambda lp: lp.label == 0.0).count()
second_port = features_port.filter(lambda lp: lp.label == 1.0).count()
labels = ['Failed', 'Passed']
fracs1 = [first_port, second_port]
colors = ['red', 'green']
explode = (0.07, 0.07)
fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(121)
ax1.pie(fracs1, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=120)
# Math
first_math = features_math.filter(lambda lp: lp.label == 0.0).count()
second_math = features_math.filter(lambda lp: lp.label == 1.0).count()
fracs2 = [first_math, second_math]
fig.suptitle('Portuguese - Math', fontsize=14, fontweight='bold')
ax2 = fig.add_subplot(122)
ax2.pie(fracs2, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=120)
pass
import math
# rescale the features by centering
# and dividing by the variance
def rescale(features):
r = []
for i, f in enumerate(features):
c = f - mean.value[i]
s = c / math.sqrt(variance.value[i]) if c != 0.0 else 0.0
r.append(s)
return r
summary1 = Statistics.colStats(features_port.map(lambda lp: lp.features))
# broadcast as list
mean = sc.broadcast(summary1.mean())
variance = sc.broadcast(summary1.variance())
scaled_features_port = features_port.map(lambda lp: LabeledPoint(lp.label, rescale(lp.features)))
summary2 = Statistics.colStats(features_math.map(lambda lp: lp.features))
# broadcast as list
mean = sc.broadcast(summary2.mean())
variance = sc.broadcast(summary2.variance())
scaled_features_math = features_math.map(lambda lp: LabeledPoint(lp.label, rescale(lp.features)))
print scaled_features_port.take(1)
print scaled_features_math.take(1)
Explanation: <h3>Grade distribution</h3>
End of explanation
from sklearn import svm
def eval_metrics(lbl_pred):
tp = float(lbl_pred.filter(lambda lp: lp[0]==1.0 and lp[1]==1.0).count())
tn = float(lbl_pred.filter(lambda lp: lp[0]==0.0 and lp[1]==0.0).count())
fp = float(lbl_pred.filter(lambda lp: lp[0]==1.0 and lp[1]==0.0).count())
fn = float(lbl_pred.filter(lambda lp: lp[0]==0.0 and lp[1]==1.0).count())
precision = tp / (tp + fp)
recall = tp / (tp + fn)
F_measure = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + tn + fp + fn)
return([tp, tn, fp, fn], [precision, recall, F_measure, accuracy])
train_port, test_port = scaled_features_port.randomSplit([0.7, 0.3], seed = 0)
labels = train_port.map(lambda lp: lp.label).collect()
features = train_port.map(lambda lp: lp.features).collect()
lin_clf = svm.LinearSVC()
lin_clf.fit(features, labels)
labels_and_predictions = test_port.map(lambda lp: (lin_clf.predict(lp.features), lp.label))
metrics = eval_metrics(labels_and_predictions)
print('Precision : %.2f' % round(metrics[1][0], 2))
print('Recall : %.2f' % round(metrics[1][1], 2))
print('F1 : %.2f' % round(metrics[1][2], 2))
print('Accuracy : %.2f' % round(metrics[1][3], 2))
Explanation: <h3>SVM</h3>
<h4>Portuguese</h4>
End of explanation
train_math, test_math = scaled_features_math.randomSplit([0.7, 0.3], seed = 0)
labels = train_math.map(lambda lp: lp.label).collect()
features = train_math.map(lambda lp: lp.features).collect()
lin_clf = svm.LinearSVC()
lin_clf.fit(features, labels)
labels_and_predictions = test_math.map(lambda lp: (lin_clf.predict(lp.features), lp.label))
metrics = eval_metrics(labels_and_predictions)
print('Precision : %.2f' % round(metrics[1][0], 2))
print('Recall : %.2f' % round(metrics[1][1], 2))
print('F1 : %.2f' % round(metrics[1][2], 2))
print('Accuracy : %.2f' % round(metrics[1][3], 2))
Explanation: <h4>Math</h4>
End of explanation
from pyspark.mllib.classification import NaiveBayes
# Naive Bayes expects positive
# features, so we square them
def square(feat):
r = []
for x in feat:
r.append(x ** 2)
return r
train_port, test_port = scaled_features_port.randomSplit([0.7, 0.3], seed = 0)
squared_train_data = train_port.map(lambda lp: LabeledPoint(lp.label, square(lp.features)))
squared_test_data = test_port.map(lambda lp: LabeledPoint(lp.label, square(lp.features)))
model_nb = NaiveBayes.train(squared_train_data)
labels_and_predictions = squared_test_data.map(lambda lp: (model_nb.predict(lp.features), lp.label))
metrics = eval_metrics(labels_and_predictions)
print('Precision : %.2f' % round(metrics[1][0], 2))
print('Recall : %.2f' % round(metrics[1][1], 2))
print('F1 : %.2f' % round(metrics[1][2], 2))
print('Accuracy : %.2f' % round(metrics[1][3], 2))
Explanation: <h3>Naive Bayes</h3>
<h4>Portuguese</h4>
End of explanation
train_math, test_math = scaled_features_math.randomSplit([0.7, 0.3], seed = 0)
squared_train_data = train_math.map(lambda lp: LabeledPoint(lp.label, square(lp.features)))
squared_test_data = test_math.map(lambda lp: LabeledPoint(lp.label, square(lp.features)))
model_nb = NaiveBayes.train(squared_train_data)
labels_and_predictions = squared_test_data.map(lambda lp: (model_nb.predict(lp.features), lp.label))
metrics = eval_metrics(labels_and_predictions)
print('Precision : %.2f' % round(metrics[1][0], 2))
print('Recall : %.2f' % round(metrics[1][1], 2))
print('F1 : %.2f' % round(metrics[1][2], 2))
print('Accuracy : %.2f' % round(metrics[1][3], 2))
Explanation: <h4>Math</h4>
End of explanation |
3,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In my previous blog post, we've seen how we can identify files that change together in one commit.
In this blog post, we take the analysis to an advanced level
Step1: In our case, we only want to check the modularization of our software for Java production code. So we just leave the files that are belonging to the main source code. What to keep here exactly is very specific to your own project. With Jupyter and pandas, we can make our decisions for this transparent and thus retraceable.
Step2: Analysis
We want to see which files are changing (almost) together. A good start for this is to create this view onto our dataset with the pivot_table method of the underlying pandas' DataFrame.
But before this, we need a marker column that signals that a commit occurred. We can create an additional column named hit for this easily.
Step3: Now, we can transform the data as we need it
Step4: As already mentioned in a previous blog post, we are now able to look at our problem from a mathematician' s perspective. What we have here now with the commit_matrix is a collection of n-dimensional vectors. Each vector represents a filename and the components/dimensions of such a vector are the commits with either the value 0 or 1.
Calculating similarities between such vectors is a well-known problem with a variety of solutions. In our case, we calculate the distance between the various vectors with the cosines distance metric. The machine learning library scikit-learn provides us with an easy to use implementation.
Step5: To be able to better understand the result, we add the file names from the commit_matrix as index and column index to the dissimilarity_matrix.
Step6: Now, we see the result in a better representation
Step7: Because of the alphabetically ordered filenames and the "feature-first" architecture of the software under investigation, we get the first glimpse of how changes within modules are occurring together and which are not.
To get an even better view, we can first extract the module's names with an easy string operation and use this for the indexes.
Step8: Then, we can create another heatmap that shows the name of the modules on both axes for further evaluation. We also just take a look at a subset of the data for representational reasons.
Step9: Discussion
Starting at the upper left, we see the "comment" module with a pretty dark area very clearly. This means, that files around this module changed together very often.
If we go to the middle left, we see dark areas between the "comment" module and the "framework" module as well as the "site" module further down. This shows a change dependency between the "comment" module and the other two (I'll explain later, why it is that way).
If we take a look in the middle of the heatmap, we see that the very dark area represents changes of the "mail" module. This module was pretty much changed without touching any other modules. This shows a nice separation of concerns.
For the "scheduling" module, we can also see that the changes occurred mostly cohesive within the module.
Another interesting aspect is the horizontal line within the "comment" region
Step10: The result is a 2D matrix that we can plot with matplotlib to get a first glimpse of the distribution of the calculated distances.
Step11: With the plot above, we see that the 2D transformation somehow worked. But we can't see
* which filenames are which data points
* how the modules are grouped all together
So we need to enrich the data a little bit more and search for a better, interactive visualization technique.
Let's add the filenames to the matrix as well as nice column names. We, again, add the information about the module of a source code file to the DataFrame.
Step12: Author
Step13: OK, here comes the ugly part
Step14: With this nice little data structure, we can fill pygal's XY chart and create an interactive chart. | Python Code:
from lib.ozapfdis.git_tc import log_numstat
GIT_REPO_DIR = "../../dropover_git/"
git_log = log_numstat(GIT_REPO_DIR)[['sha', 'file', 'author']]
git_log.head()
Explanation: Introduction
In my previous blog post, we've seen how we can identify files that change together in one commit.
In this blog post, we take the analysis to an advanced level:
We're using a more robust model for determining the similarity of co-changing source code files
We're checking the existing modularization of a software system and compare it to the change behavior of the development teams
We're creating a visualization that lets us determine the underlying, "hidden" modularization of our software system based on conjoint changes
We discuss the results for a concrete software system in detail (with more systems to come in the upcoming blog posts).
We're using Python and pandas as well as some algorithms from the machine learning library scikit-learn and the visualization libraries matplotlib, seaborn and pygal for these purposes.
The System under Investigation
For this analysis, we use a closed-source project that I developed with some friends of mine. It's called "DropOver", a web application that can manage events with features like events' sites, scheduling, comments, todos, file uploads, mail notifications and so on. The architecture of the software system mirrored the feature-based development process: You could quickly locate where code has to be added or changed because the software system's "screaming architecture". This architecture style lead you to the right place because of the explicit, feature-based modularization that was used for the Java packages/namespaces:
It's also important to know, that we developed the software almost strictly feature-based by feature teams (OK, one developer was one team in our case). Nevertheless, the history of this repository should perfectly fit for our analysis of checking the modularization based on co-changing source code files.
The main goal of our analysis is to see if the modules of the software system were changed independently or if they were code was changed randomly across modules boundaries. If the latter would be the case, we should reorganize the software system or the development teams to let software development activities and the surrounding more naturally fit together.
Idea
We can do this kind of analysis pretty easily by using the version control data of a software system like Git. A version control system tracks each change to a file. If more files are changed within one commit, we can assume that those files somehow have something to do with each other. This could be e. g. a direct dependency because two files depend on each other or a semantic dependency which causes an underlying concepts to change across module boundaries.
In this blog post, we take the idea further: We want to find out the degree of similarity of two co-changing files, making the analysis more robust and reliable on one side, but also enabling a better analysis of bigger software systems on the other side by comparing all files of a software system with each other regarding the co-changing properties.
Data
We use a little helper library for importing the data of our project. It's a simple git log with change statistics for each commit and file (you can see here how to retrieve it if you want to do it manually).
End of explanation
prod_code = git_log.copy()
prod_code = prod_code[prod_code.file.str.endswith(".java")]
prod_code = prod_code[prod_code.file.str.startswith("backend/src/main")]
prod_code = prod_code[~prod_code.file.str.endswith("package-info.java")]
prod_code.head()
Explanation: In our case, we only want to check the modularization of our software for Java production code. So we just leave the files that are belonging to the main source code. What to keep here exactly is very specific to your own project. With Jupyter and pandas, we can make our decisions for this transparent and thus retraceable.
End of explanation
prod_code['hit'] = 1
prod_code.head()
Explanation: Analysis
We want to see which files are changing (almost) together. A good start for this is to create this view onto our dataset with the pivot_table method of the underlying pandas' DataFrame.
But before this, we need a marker column that signals that a commit occurred. We can create an additional column named hit for this easily.
End of explanation
commit_matrix = prod_code.reset_index().pivot_table(
index='file',
columns='sha',
values='hit',
fill_value=0)
commit_matrix.iloc[0:5,50:55]
Explanation: Now, we can transform the data as we need it: For the index, we choose the filename, as columns, we choose the unique sha key of a commit. Together with the commit hits as values, we are now able to see which file changes occurred in which commit. Note that the pivoting also change the order of both indexes. They are now sorted alphabetically.
End of explanation
from sklearn.metrics.pairwise import cosine_distances
dissimilarity_matrix = cosine_distances(commit_matrix)
dissimilarity_matrix[:5,:5]
Explanation: As already mentioned in a previous blog post, we are now able to look at our problem from a mathematician' s perspective. What we have here now with the commit_matrix is a collection of n-dimensional vectors. Each vector represents a filename and the components/dimensions of such a vector are the commits with either the value 0 or 1.
Calculating similarities between such vectors is a well-known problem with a variety of solutions. In our case, we calculate the distance between the various vectors with the cosines distance metric. The machine learning library scikit-learn provides us with an easy to use implementation.
End of explanation
import pandas as pd
dissimilarity_df = pd.DataFrame(
dissimilarity_matrix,
index=commit_matrix.index,
columns=commit_matrix.index)
dissimilarity_df.iloc[:5,:2]
Explanation: To be able to better understand the result, we add the file names from the commit_matrix as index and column index to the dissimilarity_matrix.
End of explanation
%matplotlib inline
import seaborn as sns
sns.heatmap(
dissimilarity_df,
xticklabels=False,
yticklabels=False
);
Explanation: Now, we see the result in a better representation: For each file pair, we get the distance of the commit vectors. This means that we have now a distance measure that says how dissimilar two files were changed in respect to each other.
Visualization
Heatmap
To get an overview of the result's data, we can plot the matrix with a little heatmap first.
End of explanation
modules = dissimilarity_df.copy()
modules.index = modules.index.str.split("/").str[6]
modules.index.name = 'module'
modules.columns = modules.index
modules.iloc[25:30,25:30]
Explanation: Because of the alphabetically ordered filenames and the "feature-first" architecture of the software under investigation, we get the first glimpse of how changes within modules are occurring together and which are not.
To get an even better view, we can first extract the module's names with an easy string operation and use this for the indexes.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=[10,9])
sns.heatmap(modules.iloc[:180,:180]);
Explanation: Then, we can create another heatmap that shows the name of the modules on both axes for further evaluation. We also just take a look at a subset of the data for representational reasons.
End of explanation
from sklearn.manifold import MDS
# uses a fixed seed for random_state for reproducibility
model = MDS(dissimilarity='precomputed', random_state=0)
dissimilarity_2d = model.fit_transform(dissimilarity_df)
dissimilarity_2d[:5]
Explanation: Discussion
Starting at the upper left, we see the "comment" module with a pretty dark area very clearly. This means that files around this module changed together very often.
If we go to the middle left, we see dark areas between the "comment" module and the "framework" module as well as the "site" module further down. This shows a change dependency between the "comment" module and the other two (I'll explain later, why it is that way).
If we take a look in the middle of the heatmap, we see that the very dark area represents changes of the "mail" module. This module was pretty much changed without touching any other modules. This shows a nice separation of concerns.
For the "scheduling" module, we can also see that the changes occurred mostly cohesive within the module.
Another interesting aspect is the horizontal line within the "comment" region: These files were changed independently from all other files within the module. These files were the code for an additional data storage technology that was added in later versions of the software system. This pattern repeats for all other modules more or less strongly.
With this visualization, we can get a first impression of how good our software architecture fits the real software development activities. In this case, I would say that you can see most clearly that the source code of the modules changed mostly within the module boundaries. But we have to take a look at the changes that occur in other modules as well when changing a particular module. These could be signs of unwanted dependencies and may lead us to an architectural problem.
Multi-dimensional Scaling
We can create another kind of visualization to check
* if the code within the modules is only changed altogether and
* if not, what other modules were changed.
Here, we can help ourselves with a technique called "multi-dimensional scaling" or "MDS" for short. With MDS, we can break down an n-dimensional space to a lower-dimensional space representation. MDS tries to keep the distance proportions of the higher-dimensional space when breaking it down to a lower-dimensional space.
In our case, we can let MDS figure out a 2D representation of our dissimilarity matrix (which is, overall, just a plain multi-dimensional vector space) to see which files get changed together. With this, we'll be able to see which files are changed together regardless of the modules they belong to.
The machine learning library scikit-learn gives us easy access to the algorithm that we need for this task as well. We just need to say that we have a precomputed dissimilarity matrix when initializing the algorithm and then pass our dissimilarity_df DataFrame to the fit_transform method of the algorithm.
End of explanation
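As a rough sanity check of how faithfully the 2D embedding preserves the original dissimilarities, we can look at the fitted model's stress value; a minimal sketch (lower raw stress means less distortion, and there is no universal threshold for "good"):
# Raw stress of the fitted MDS model: the sum of squared differences between
# the input dissimilarities and the pairwise distances in the 2D embedding.
print(model.stress_)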
plt.figure(figsize=(8,8))
x = dissimilarity_2d[:,0]
y = dissimilarity_2d[:,1]
plt.scatter(x, y);
Explanation: The result is a 2D matrix that we can plot with matplotlib to get a first glimpse of the distribution of the calculated distances.
End of explanation
dissimilarity_2d_df = pd.DataFrame(
dissimilarity_2d,
index=commit_matrix.index,
columns=["x", "y"])
dissimilarity_2d_df.head()
Explanation: With the plot above, we see that the 2D transformation somehow worked. But we can't see
* which filenames are which data points
* how the modules are grouped all together
So we need to enrich the data a little bit more and search for a better, interactive visualization technique.
Let's add the filenames to the matrix as well as nice column names. We, again, add the information about the module of a source code file to the DataFrame.
End of explanation
prod_code.groupby(['file', 'author'])['hit'].count().groupby(['file', 'author']).max()
dissimilarity_2d_df['module'] = dissimilarity_2d_df.index.str.split("/").str[6]
Explanation: Author
End of explanation
plot_data = pd.DataFrame(index=dissimilarity_2d_df['module'])
plot_data['value'] = tuple(zip(dissimilarity_2d_df['x'], dissimilarity_2d_df['y']))
plot_data['label'] = dissimilarity_2d_df.index
plot_data['data'] = plot_data[['label', 'value']].to_dict('records')
plot_dict = plot_data.groupby(plot_data.index).data.apply(list)
plot_dict
Explanation: OK, here comes the ugly part: We have to transform all the data to the format our interactive visualization library pygal needs for its XY chart. We need to
* group the data by modules
* add every distance information
* for each file as well as
* the filename itself
in a specific dictionary-like data structure.
But there is nothing that can hinder us in Python and pandas. So let's do this!
We create a separate DataFrame named plot_data with the module names as index
We join the coordinates x and y into a tuple data structure
We use the filenames from dissimilarity_2d_df's index as labels
We convert both data items to a dictionary
We append all entries for a module to a single per-module entry
This gives us a new DataFrame with modules as index and per module a list of dictionary-like entries with
* the filenames as labels and
* the coordinates as values.
End of explanation
import pygal
xy_chart = pygal.XY(stroke=False)
[xy_chart.add(entry[0], entry[1]) for entry in plot_dict.iteritems()]
# uncomment to create the interactive chart
# xy_chart.render_in_browser()
xy_chart
Explanation: With this nice little data structure, we can fill pygal's XY chart and create an interactive chart.
End of explanation |
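If the chart should also live outside the notebook, pygal can write it to a standalone SVG file; a small usage sketch (the filename is arbitrary):
# Save the interactive chart as a standalone SVG file that can be opened in a browser.
xy_chart.render_to_file('change_dissimilarity.svg')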
3,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding trials registered on ClinicalTrials.gov that do not have reported results
Reporting of clinical trial results became mandatory for many trials in 2008. However this paper and this investigation both find that substantial numbers of clinical trials have not reported results, even for those trials where the FDAAA has made reporting mandatory.
This notebook examines how many trials on ClinicalTrials.gov have had their results publicly reported. We have a broader definition of a trial that should report its results than the FDAAA. We count a trial as eligible for our analysis if
Step1: Create summary results file
The raw XML trial summaries from ClinicalTrials.gov are supplied as a single very large zip file, containing more than 200,000 XML files. This section assumes that that these have already been downloaded and unzipped in the search_result directory.
Extract the fields of interest from the XML summaries, and save them to a CSV file, which we'll use as our source data for the rest of this exercise. ClinicalTrials.gov supplies field definitions.
Toggle REGENERATE_SUMMARY to False for the purposes of development.
Step2: Load data for analysis
Load into Pandas, normalising the date and phase fields. NB
Step3: Calculate whether trials are completed
The criteria for counting a trial as completed are defined above. Print some summary stats about completed trials.
Step4: Check for results on PubMed
If trials have reported their results on PubMed, and if it's possible to find them on PubMed using a linked NCT ID, then we count those trials as having submitted results.
So, for all trials that we regard as completed and due results, and that haven't already reported results on clinicaltrials.gov, we search PubMed, looking for the NCT ID either as a Secondary Source ID, or in the title/abstract. We look for anything published between the completion date and now, that doesn't have the words "study protocol" in the title, and that is classified as results of a trial (using the "therapy" clinical keyword, broad version).
At the time of writing, about 9,000 of the 34,000 trials have results on PubMed. An example of an NCT ID with results on PubMed
Step5: Calculate final overdue count
Now we have looked for PubMed results, we can calculate the final overdue count, and print some summary statistics.
Step6: Write to CSV
Output final results to a CSV file, which we will use in the interactive. We reshape the data so it has a row for each sponsor, and two columns for each year
Step7: For reference
Step8: Compare with our data
Now examine | Python Code:
import csv
from datetime import datetime
from dateutil.relativedelta import relativedelta
import glob
from pprint import pprint
from slugify import slugify
import sqlite3
import numpy as np
import pandas as pd
import utils
Explanation: Finding trials registered on ClinicalTrials.gov that do not have reported results
Reporting of clinical trial results became mandatory for many trials in 2008. However this paper and this investigation both find that substantial numbers of clinical trials have not reported results, even for those trials where the FDAAA has made reporting mandatory.
This notebook examines how many trials on ClinicalTrials.gov have had their results publicly reported. We have a broader definition of a trial that should report its results than the FDAAA. We count a trial as eligible for our analysis if:
it has overall status of 'Completed'
it has a study type of 'Interventional'
its completion date was after 1 Jan 2006, but is more than 24 months ago
it is phase 2 or later (or its phase is N/A, i.e. it's a trial of a device or a behavioural intervention)
it has no results disposition date (i.e. no application to delay results has been filed).
We then classify it as overdue if it has no summary results attached on ClinicalTrials.gov, and no results on PubMed that are linked by NCT ID (see below).
This is substantially broader than FDAAA, which covers only US-based trials of FDA-approved drugs. However, we think all trials should report their results, not just US-based trials, or FDA-approved drugs. In addition, FDAAA requires results to be reported within 12 months of completion, and we allow 24 months.
ClinicalTrials.gov supplies notes on how to find studies with results and results in general.
End of explanation
fname = './data/trials.csv'
REGENERATE_SUMMARY = False # False
if REGENERATE_SUMMARY:
files = glob.glob('./search_result/*.xml')
print len(files), 'files found'
fieldnames = ['nct_id', 'title', 'overall_status',
'study_type', 'completion_date',
'lead_sponsor', 'lead_sponsor_class',
'collaborator', 'collaborator_class',
'phase', 'locations', 'has_drug_intervention', 'drugs',
'disposition_date', 'results_date',
'enrollment']
trials = csv.DictWriter(open(fname, 'wb'), fieldnames=fieldnames)
trials.writeheader()
for i, f in enumerate(files):
if i % 50000 == 0:
print i, f
text = open(f, 'r').read()
data = utils.extract_ctgov_xml(text)
trials.writerow(data)
print 'done'
Explanation: Create summary results file
The raw XML trial summaries from ClinicalTrials.gov are supplied as a single very large zip file, containing more than 200,000 XML files. This section assumes that that these have already been downloaded and unzipped in the search_result directory.
Extract the fields of interest from the XML summaries, and save them to a CSV file, which we'll use as our source data for the rest of this exercise. ClinicalTrials.gov supplies field definitions.
Toggle REGENERATE_SUMMARY to False for the purposes of development.
End of explanation
dtype = {'has_drug_intervention': bool,
'phase': str}
converters = {'enrollment': lambda x: x and int(x) or 0}
datefields = ['completion_date', 'results_date', 'disposition_date']
df = pd.read_csv(fname,
parse_dates=datefields,
infer_datetime_format=True,
keep_default_na=False,
na_values=None,
converters=converters,
dtype=dtype)
df['phase_normalised'] = df.phase.apply(utils.normalise_phase)
df.tail()
Explanation: Load data for analysis
Load into Pandas, normalising the date and phase fields. NB: If this produces a weird EOF error (which it does intermittently), delete the last line of the file manually. We will have to live with one missing trial.
End of explanation
startdate = datetime.strptime('01 January 2006', '%d %B %Y')
cutoff = datetime.now() - relativedelta(years=2)
print 'Cutoff date', cutoff
df['is_completed'] = (df.overall_status == 'Completed') & \
(df.study_type.str.startswith('Interventional')) & \
(df.completion_date >= startdate) & \
(df.completion_date <= cutoff) & \
(df.phase_normalised >= 2) & \
(df.disposition_date.isnull())
df['is_overdue'] = (df.is_completed & \
df.results_date.isnull())
df_completed = df[df.is_completed]
df_overdue = df[df.is_completed & df.results_date.isnull()]
print len(df), 'total trials found'
print len(df[~df.disposition_date.isnull()]), 'trials have dispositions filed'
print len(df_completed), 'are completed and due results, by our definition'
print len(df[df.is_completed & ~df.results_date.isnull()]), \
'trials due results have submitted results on clinicaltrials.gov'
print len(df_overdue), \
'trials due results have not submitted results on clinicaltrials.gov'
Explanation: Calculate whether trials are completed
The criteria for counting a trial as completed are defined above. Print some summary stats about completed trials.
End of explanation
# Store results locally.
conn = sqlite3.connect('./data/trials-abstract.db')
cur = conn.cursor()
c = "CREATE TABLE IF NOT EXISTS trials(nct_id TEXT PRIMARY KEY, "
c += "pubmed_results INT, pubmed_results_broad INT, pubmed_results_narrow INT)"
cur.execute(c)
conn.commit()
REGENERATE_PUBMED_LINKS = False
count = 0
df['pubmed_results'] = False
for i, row in df_overdue.iterrows():
if count % 10000 == 0:
print count, row.nct_id
count += 1
# First, check for results stored in the local db.
c = "SELECT nct_id, pubmed_results, pubmed_results_broad, "
c += "pubmed_results_narrow FROM trials WHERE nct_id='%s'" % row.nct_id
cur.execute(c)
data = cur.fetchone()
has_results = False
if data and (not REGENERATE_PUBMED_LINKS):
has_results = bool(int(data[2]))
else:
# No local results, or we want to regenerate them: check PubMed.
broad_results = \
utils.get_pubmed_linked_articles(row.nct_id,
row.completion_date, 'broad')
# Used in the past (see note 3 above).
simple_results = \
utils.get_pubmed_linked_articles(row.nct_id,
row.completion_date, '')
narrow_results = \
utils.get_pubmed_linked_articles(row.nct_id,
row.completion_date, 'narrow')
c = "INSERT OR REPLACE INTO trials VALUES('%s', %s, %s, %s)" % \
(row.nct_id, len(simple_results), len(broad_results), len(narrow_results))
cur.execute(c)
conn.commit()
        has_results = len(broad_results) > 0
df.set_value(i, 'pubmed_results', has_results)
cur.close()
conn.close()
print 'done'
Explanation: Check for results on PubMed
If trials have reported their results on PubMed, and if it's possible to find them on PubMed using a linked NCT ID, then we count those trials as having submitted results.
So, for all trials that we regard as completed and due results, and that haven't already reported results on clinicaltrials.gov, we search PubMed, looking for the NCT ID either as a Secondary Source ID, or in the title/abstract. We look for anything published between the completion date and now, that doesn't have the words "study protocol" in the title, and that is classified as results of a trial (using the "therapy" clinical keyword, broad version).
At the time of writing, about 9,000 of the 34,000 trials have results on PubMed. An example of an NCT ID with results on PubMed: NCT02460380. (TODO: Update this).
Note 1: we know from the BMJ paper that there are trials that do have results on PubMed, but that aren't linked using the NCT ID. The BMJ authors found these using a manual search. Some examples: NCT00002762: 19487378, NCT00002879: 18470909, NCT00003134: 19066728, NCT00003596: 18430910. We regard these as invalid, because you can only find results via an exhaustive manual search. We only count results as published for our purposes if they are either (i) submitted on ClinicalTrials.gov or (ii) retrievable on PubMed using the NCT ID. See more on this below.
Note 2: we know there are some trials that have results PMIDs directly in ClinicalTrials.gov, in the results_reference field of the XML. After discussion with Jess here, and Annice at ClinicalTrials.gov, I decided that these results are too often meaningless to be useful - lots of the time they aren't truly results, but are studies from years ago.
Note 3: we also experimented with retrieving the results using the narrow version of the "therapy" clinical keyword, and using no clinical keyword at all. We evaluated these by using multiple PubMed matches as surrogate measures for false identification. At the time of writing on 2016/10/24, we examined 34677 trial registry IDs: the PubMed broad keyword yielded 7815 matches with 1706 multiple matches; the PubMed narrow keyword yielded 6448 matches with 1238 multiple matches, and using no keyword yielded 7981 matches with 1860 multiple matches. We chose the broad keyword for our final results.
End of explanation
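The helper utils.get_pubmed_linked_articles is not shown in this notebook. Purely to illustrate the search strategy described above, here is a hedged sketch of the kind of query such a helper might issue via Biopython's Entrez module; the function name, field tags, filter name and e-mail address are assumptions, not the actual implementation:
from Bio import Entrez  # assumption: Biopython is installed
Entrez.email = 'you@example.org'  # placeholder contact address required by NCBI
def sketch_pubmed_search(nct_id, completion_date):
    # NCT ID as a Secondary Source ID ([si]) or in the title/abstract ([tiab]),
    # restricted to the broad "therapy" clinical query and excluding protocols.
    term = ('(%s[si] OR %s[tiab]) AND Therapy/Broad[filter] '
            'NOT "study protocol"[ti]') % (nct_id, nct_id)
    handle = Entrez.esearch(db='pubmed', term=term, datetype='pdat',
                            mindate=completion_date.strftime('%Y/%m/%d'),
                            maxdate=datetime.utcnow().strftime('%Y/%m/%d'),
                            retmax=100)
    pmids = Entrez.read(handle)['IdList']
    handle.close()
    return pmids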
# Reset dataframes now we have the results from PubMed.
df['is_overdue'] = (df.is_completed & df.results_date.isnull() & ~df.pubmed_results)
print 'How many of the unreported trials were found on PubMed:'
print df[df.is_completed & df.results_date.isnull()].pubmed_results.value_counts()
df_completed = df[df.is_completed]
df_overdue = df[df.is_overdue]
# Print summary stats for the entire dataset.
print len(df_completed), 'trials should have published results'
print len(df_overdue), 'trials have not published results'
percent_submitted = (1 - (len(df_overdue) / float(len(df_completed)))) * 100
print '%s%% of completed trials have published results' % \
'{:,.2f}'.format(percent_submitted)
print int(df_overdue.enrollment.sum()), 'total patients are enrolled in overdue trials'
# Print summary stats for major trial sponsors only.
NUM_TRIALS = 30
df_major = df_completed[
df_completed.groupby('lead_sponsor').nct_id.transform(len) >= NUM_TRIALS]
print len(df_major), 'trials by major sponsors should have published results'
print len(df_major[df_major.is_overdue]), 'trials by major sponsors have not published results'
percent_submitted = (1 - (len(df_major[df_major.is_overdue]) / float(len(df_major)))) * 100
print '%s%% of completed trials by major sponsors have published results' % \
'{:,.2f}'.format(percent_submitted)
print int(df_major[df_major.is_overdue].enrollment.sum()), 'total patients are enrolled in overdue trials'
df_completed.groupby('lead_sponsor_class').sum()[['is_overdue', 'is_completed']]
# Calculate publication rates by sector (raw data)
df_by_sector = df_completed.groupby('lead_sponsor_class').sum()[['is_overdue', 'is_completed']]
df_by_sector['percent_overdue'] = df_by_sector.is_overdue / df_by_sector.is_completed * 100
df_by_sector
# Calculate publication rates by sector (major sponsors only)
df_major_gp = df_major.groupby('lead_sponsor_class').sum()[['is_overdue', 'is_completed']]
df_major_gp['percent_overdue'] = df_major_gp.is_overdue / df_major_gp.is_completed * 100
df_major_gp
Explanation: Calculate final overdue count
Now we have looked for PubMed results, we can calculate the final overdue count, and print some summary statistics.
End of explanation
df_completed['year_completed'] = df_completed['completion_date'].dt.year.dropna().astype(int)
df_completed['year_completed'] = df_completed.year_completed.astype(int)
# Drop all sponsors with fewer than N completed trials.
df_final = df_completed[
df_completed.groupby('lead_sponsor').nct_id.transform(len) >= NUM_TRIALS]
# Now reshape the data: a row for each sponsor, columns by year:
# lead_sponsor,2008_overdue,2008_total,2009_overdue,2009_total...
df_temp = df_final.set_index(['lead_sponsor', 'lead_sponsor_class', 'year_completed'])
gb = df_temp.groupby(level=[0, 1, 2]).is_overdue
df2 = gb.agg({'overdue': 'sum', 'total': 'count'}) \
.unstack().swaplevel(0, 1, 1).sort_index(1)
df2.columns = df2.columns.to_series().apply(lambda x: '{}_{}'.format(*x))
df3 = df2.reset_index()
df3['lead_sponsor_slug'] = df3.lead_sponsor.apply(slugify)
df3.to_csv('./data/completed.csv', index=None)
print len(df3), 'sponsors found with cutoff point at %s trials' % NUM_TRIALS
# Write the raw output to a full spreadsheet.
df.to_csv('./data/all.csv', index=None)
Explanation: Write to CSV
Output final results to a CSV file, which we will use in the interactive. We reshape the data so it has a row for each sponsor, and two columns for each year: one column for the number of overdue results, and one for the total trials.
Also, write all the raw data to a single CSV file.
End of explanation
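For comparison only, the same sponsor-by-year reshape could arguably be expressed with pandas' pivot_table; a minimal sketch (the resulting column labels differ from the code above, so it is not a drop-in replacement):
# Sketch of an equivalent reshape: one row per sponsor, one (measure, year)
# column pair per completion year, aggregating the is_overdue flag.
alt = df_final.pivot_table(index=['lead_sponsor', 'lead_sponsor_class'],
                           columns='year_completed', values='is_overdue',
                           aggfunc=['sum', 'count'])
alt.head()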
from openpyxl import load_workbook
import sys
bmj_results = load_workbook(filename = './data/chen-bmj.xlsx')
nct_ids = {}
count = 0
has_pmid = 0
# The Excel data has multiple worksheets, sigh.
# And NCT IDs can occur more than once with different results, sigh.
# We only care about where there's at least one result.
# Fiddle about and reshape the data so that we know whether
# each NCT ID has a result.
for sheet in bmj_results.worksheets:
for i, row in enumerate(sheet.rows):
if i == 0:
continue
if row[0].value:
count += 1
if isinstance(row[6].value, long):
val = str(row[6].value)
else:
val = row[6].value
if val:
has_pmid += 1
# Always set val if it exists.
# Otherwise, only set val if there's no current value
# for this NCT ID.
if val:
nct_ids[row[0].value] = val
else:
if not row[0].value in nct_ids:
nct_ids[row[0].value] = val
print count, 'rows found in total'
print has_pmid, 'of those rows have a PMID'
print has_pmid / float(count) * 100, 'per cent of their NCT IDs have a PMID, including duplicates'
print
unique_nct_ids = len(nct_ids.keys())
print unique_nct_ids, '*unique* NCT IDs found in all rows'
pmids_found = sum(1 for x in nct_ids.values() if x)
print pmids_found, 'of these have PMIDs'
print pmids_found / float(unique_nct_ids) * 100, 'per cent of unique NCT IDs have a PMID'
Explanation: For reference: Compare our results with BMJ authors
TODO: Make this a separate notebook?
A 2016 BMJ paper found that around 65% of papers reported results. "Overall, 2892 of the 4347 clinical trials (66.5%) had been published or reported results as of July 2014."
Excellently, the BMJ authors publish their raw data on DataDryad so we can compare our results with theirs, to get an idea of the difference between our automated strategy and their partially manual strategy. (However, in their reported data it looks to me like the matched PMID rate is 59.9% of all NCT IDs.)
The BMJ authors were looking at a much smaller set of papers than us, because they focussed on academic medical centres. Their set is slightly different, because they include pre-Phase-2 trials, and 'Terminated' as well as 'Completed' trials. They also used a manual search strategy which involved searching Scopus and manually comparing results.
End of explanation
df_bmj = pd.Series(nct_ids).to_frame(name='pmid')
df_bmj['pubmed_results'] = ~df_bmj.pmid.isnull()
df_bmj.index.name = 'nct_id'
df_bmj.reset_index(inplace=True)
print len(df_bmj), 'NCT IDs in the full BMJ dataset'
# df_bmj.head(20)
merged_results = \
pd.merge(df_bmj, df_completed, #[['nct_id', 'pubmed_results']],
on='nct_id', how='inner', suffixes=('_bmj', '_ours'))
# NB I tried this first with a left join: but 1521 out of the 4500 papers
# don't appear in our dataset, because the BMJ authors' inclusion criteria are
# different from ours. To get a sample after a left join...
# merged_results[merged_results.we_have_results.isnull()].head()
merged_results['we_have_results'] = ~merged_results.is_overdue
merged_results.we_have_results.value_counts(dropna=False)
# merged_results.head()
print len(merged_results), 'NCT IDs are in both the BMJ dataset and ours'
papers_both_find_pm_results = \
merged_results[merged_results.pubmed_results_bmj & merged_results.we_have_results]
papers_both_find_pm_results.head()
print len(papers_both_find_pm_results), 'we both find results for'
papers_only_they_find_results = \
merged_results[merged_results.pubmed_results_bmj & ~merged_results.we_have_results]
print len(papers_only_they_find_results), 'only they find results for'
papers_only_we_find_results = \
merged_results[~merged_results.pubmed_results_bmj & merged_results.we_have_results]
print len(papers_only_we_find_results), 'only we find results for'
noone_finds_results = \
merged_results[~merged_results.pubmed_results_bmj & ~merged_results.we_have_results]
print len(noone_finds_results), 'neither of us find results for'
# Examine a sample of the papers only they find results for.
cols = ['nct_id', 'title', 'pubmed_results_bmj', 'pmid', 'we_have_results']
papers_only_they_find_results.sample(10)[cols]
# Papers only we find results for. If the `results_date` field exists, it
# means that the results are published on ClinicalTrials.gov. Otherwise
# we found results on PubMed but they did not - perhaps because
# it's been a couple of years since they did their search.
# We find 43 papers on PubMed that the BMJ authors don't:
print len(papers_only_we_find_results), 'papers for which only we find results'
print len(papers_only_we_find_results[papers_only_we_find_results.results_date.isnull()]),\
'of those we find on PubMed, the rest on ClinicalTrials.gov'
cols = ['nct_id', 'title', 'completion_date', 'pubmed_results_bmj',
'pmid', 'we_have_results', 'results_date']
# papers_only_we_find_results.sample(20)[cols]
papers_only_we_find_results[papers_only_we_find_results.results_date.isnull()].sample(10)[cols]
Explanation: Compare with our data
Now examine:
of the NCT IDs for which BMJ authors find PubMed results, how many we also find PubMed results for
of the same dataset, how many only BMJ find results for
of the NCT IDs for which BMJ authors do not find PubMed results, how many we do find PubMed results
End of explanation |
3,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is a dataset?
A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.
What is an example?
An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.
Examples are often referred to with the letter $x$.
What is a feature?
A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be
Step1: Import the dataset
Import the dataset and store it to a variable called diabetes. This dataset is similar to a python dictionary, with the keys
Step2: Visualizing the data
Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of serum measurement 1 (x-axis) vs serum measurement 6 (y-axis). The level of diabetes progression has been mapped to fit in the [0,1] range and is shown as a color scale.
Step3: Make your own plot
Below, try making your own plots. First, modify the previous code to create a similar plot, comparing different pairs of features. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.
Step4: Training and Testing Sets
In order to evaluate our data properly, we need to divide our dataset into training and testing sets.
* Training Set - Portion of the data used to train a machine learning algorithm. These are the examples that the computer will learn from in order to try to predict data labels.
* Testing Set - Portion of the data (usually 10-30%) not used in training, used to evaluate performance. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.
* Validation Set - (Optional) A third section of data used for parameter tuning or classifier selection. When selecting among many classifiers, or when a classifier parameter must be adjusted (tuned), this data is used like a test set to select the best parameter value(s). The final performance is then evaluated on the remaining, previously unused, testing set.
Creating training and testing sets
Below, we create a training and testing set from the diabetes dataset using the train_test_split() function.
Step5: Create validation set using crossvalidation
Crossvalidation allows us to use as much of our data as possible for training without training on our test data. We use it to split our training set into training and validation sets.
* Divide data into multiple equal sections (called folds)
* Hold one fold out for validation and train on the other folds
* Repeat using each fold as validation
The KFold() function returns an iterable with pairs of indices for training and testing data. | Python Code:
# Print figures in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets # Import datasets from scikit-learn
import matplotlib.cm as cm
from matplotlib.colors import Normalize
Explanation: What is a dataset?
A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.
What is an example?
An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.
Examples are often referred to with the letter $x$.
What is a feature?
A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be: the square footage, the number of bedrooms, or the number of bathrooms. Some features are more useful than others. When predicting the list price of a house the number of bedrooms is a useful feature while the color of the walls is not, even though they both describe the house.
Features are sometimes specified as a single element of an example, $x_i$
What is a label?
A label identifies a piece of information about an example that is of particular interest. In machine learning, the label is the information we want the computer to learn to predict. In our housing example, the label would be the list price of the house.
Labels can be continuous (e.g. price, length, width) or they can be a category label (e.g. color, species of plant/animal). They are typically specified by the letter $y$.
The Diabetes Dataset
Here, we use the Diabetes dataset, available through scikit-learn. This dataset contains information related to specific patients and disease progression of diabetes.
Examples
The dataset consists of 442 examples, each representing an individual diabetes patient.
Features
The dataset contains 10 features: Age, sex, body mass index, average blood pressure, and 6 blood serum measurements.
Target
The target is a quantitative measure of disease progression after one year.
Our goal
The goal, for this dataset, is to train a computer to predict the progression of diabetes after one year.
Setup
Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), pyplot (for plotting figures), and datasets (to download the iris dataset from scikit-learn). Also import colormaps to customize plot coloring and Normalize to normalize data for use with colormaps.
End of explanation
# Import some data to play with
diabetes = datasets.load_diabetes()
# List the data keys
print('Keys: ' + str(diabetes.keys()))
print('Feature names: ' + str(diabetes.feature_names))
print('')
# Store the labels (y), features (X), and feature names
y = diabetes.target # Labels are stored in y as numbers
X = diabetes.data
featureNames = diabetes.feature_names
# Show the first five examples
X[:5,:]
Explanation: Import the dataset
Import the dataset and store it to a variable called diabetes. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target', 'data', 'feature_names']
The data features are stored in diabetes.data, where each row is an example from a single patient, and each column is a single feature. The feature names are stored in diabetes.feature_names. Target values are stored in diabetes.target.
End of explanation
norm = Normalize(vmin=y.min(), vmax=y.max()) # need to normalize target to [0,1] range for use with colormap
plt.scatter(X[:, 4], X[:, 9], c=norm(y), cmap=cm.bone_r)
plt.colorbar()
plt.xlabel('Serum Measurement 1 (s1)')
plt.ylabel('Serum Measurement 6 (s6)')
plt.show()
Explanation: Visualizing the data
Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of serum measurement 1 (x-axis) vs serum measurement 6 (y-axis). The level of diabetes progression has been mapped to fit in the [0,1] range and is shown as a color scale.
End of explanation
# Put your code here!
Explanation: Make your own plot
Below, try making your own plots. First, modify the previous code to create a similar plot, comparing different pairs of features. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print('Original dataset size: ' + str(X.shape))
print('Training dataset size: ' + str(X_train.shape))
print('Test dataset size: ' + str(X_test.shape))
Explanation: Training and Testing Sets
In order to evaluate our data properly, we need to divide our dataset into training and testing sets.
* Training Set - Portion of the data used to train a machine learning algorithm. These are the examples that the computer will learn from in order to try to predict data labels.
* Testing Set - Portion of the data (usually 10-30%) not used in training, used to evaluate performance. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.
* Validation Set - (Optional) A third section of data used for parameter tuning or classifier selection. When selecting among many classifiers, or when a classifier parameter must be adjusted (tuned), this data is used like a test set to select the best parameter value(s). The final performance is then evaluated on the remaining, previously unused, testing set.
Creating training and testing sets
Below, we create a training and testing set from the diabetes dataset using the train_test_split() function.
End of explanation
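Note that the split above is random, so it changes from run to run; if a reproducible split is needed, a seed can be passed (a small sketch, not part of the original call above):
# Fixing random_state makes the train/test split reproducible across runs.
X_train_r, X_test_r, y_train_r, y_test_r = train_test_split(
    X, y, test_size=0.3, random_state=42)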
from sklearn.model_selection import KFold
# Older versions of scikit learn used n_folds instead of n_splits
kf = KFold(n_splits=5)
for trainInd, valInd in kf.split(X_train):
X_tr = X_train[trainInd,:]
y_tr = y_train[trainInd]
X_val = X_train[valInd,:]
y_val = y_train[valInd]
print("%s %s" % (X_tr.shape, X_val.shape))
Explanation: Create validation set using crossvalidation
Crossvalidation allows us to use as much of our data as possible for training without training on our test data. We use it to split our training set into training and validation sets.
* Divide data into multiple equal sections (called folds)
* Hold one fold out for validation and train on the other folds
* Repeat using each fold as validation
The KFold() function returns an iterable with pairs of indices for training and testing data.
End of explanation |
3,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Steps to use the TF Experiment APIs
Define dataset metadata
Define data input function to read the data from csv files + feature processing
Create TF feature columns based on metadata + extended feature columns
Define an estimator (DNNRegressor) creation function with the required feature columns & parameters
Define a serving function to export the model
Run an Experiment with learn_runner to train, evaluate, and export the model
Evaluate the model using test data
Perform predictions
Step1: 1. Define Dataset Metadata
CSV file header and defaults
Numeric and categorical feature names
Target feature name
Unused columns
Step2: 2. Define Data Input Function
Input csv files name pattern
Use TF Dataset APIs to read and process the data
Parse CSV lines to feature tensors
Apply feature processing
Return (features, target) tensors
a. parsing and preprocessing logic
Step3: b. data pipeline input function
Step4: 3. Define Feature Columns
The input numeric columns are assumed to be normalized (or have the same scale). Otherwise, a normalizer_fn, along with the normalisation params (mean, stdv), should be passed to the tf.feature_column.numeric_column() constructor.
Step5: 4. Define an Estimator Creation Function
Get dense (numeric) columns from the feature columns
Convert categorical columns to indicator columns
Create Instantiate a DNNRegressor estimator given dense + indicator feature columns + params
Step6: 5. Define Serving Funcion
Step7: 6. Run Experiment
a. Define Experiment Function
Step8: b. Set HParam and RunConfig
Step9: c. Run Experiment via learn_runner
Step10: 7. Evaluate the Model
Step11: 8. Prediction | Python Code:
MODEL_NAME = 'reg-model-03'
TRAIN_DATA_FILES_PATTERN = 'data/train-*.csv'
VALID_DATA_FILES_PATTERN = 'data/valid-*.csv'
TEST_DATA_FILES_PATTERN = 'data/test-*.csv'
RESUME_TRAINING = False
PROCESS_FEATURES = True
EXTEND_FEATURE_COLUMNS = True
MULTI_THREADING = True
Explanation: Steps to use the TF Experiment APIs
Define dataset metadata
Define data input function to read the data from csv files + feature processing
Create TF feature columns based on metadata + extended feature columns
Define an estimator (DNNRegressor) creation function with the required feature columns & parameters
Define a serving function to export the model
Run an Experiment with learn_runner to train, evaluate, and export the model
Evaluate the model using test data
Perform predictions
End of explanation
HEADER = ['key','x','y','alpha','beta','target']
HEADER_DEFAULTS = [[0], [0.0], [0.0], ['NA'], ['NA'], [0.0]]
NUMERIC_FEATURE_NAMES = ['x', 'y']
CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = {'alpha':['ax01', 'ax02'], 'beta':['bx01', 'bx02']}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys())
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
TARGET_NAME = 'target'
UNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME})
print("Header: {}".format(HEADER))
print("Numeric Features: {}".format(NUMERIC_FEATURE_NAMES))
print("Categorical Features: {}".format(CATEGORICAL_FEATURE_NAMES))
print("Target: {}".format(TARGET_NAME))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
Explanation: 1. Define Dataset Metadata
CSV file header and defaults
Numeric and categorical feature names
Target feature name
Unused columns
End of explanation
def parse_csv_row(csv_row):
columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
features = dict(zip(HEADER, columns))
for column in UNUSED_FEATURE_NAMES:
features.pop(column)
target = features.pop(TARGET_NAME)
return features, target
def process_features(features):
features["x_2"] = tf.square(features['x'])
features["y_2"] = tf.square(features['y'])
features["xy"] = tf.multiply(features['x'], features['y']) # features['x'] * features['y']
features['dist_xy'] = tf.sqrt(tf.squared_difference(features['x'],features['y']))
return features
Explanation: 2. Define Data Input Function
Input csv files name pattern
Use TF Dataset APIs to read and process the data
Parse CSV lines to feature tensors
Apply feature processing
Return (features, target) tensors
a. parsing and preprocessing logic
End of explanation
def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
skip_header_lines=0,
num_epochs=None,
batch_size=200):
shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False
print("")
print("* data input_fn:")
print("================")
print("Input file(s): {}".format(files_name_pattern))
print("Batch size: {}".format(batch_size))
print("Epoch Count: {}".format(num_epochs))
print("Mode: {}".format(mode))
print("Shuffle: {}".format(shuffle))
print("================")
print("")
file_names = tf.matching_files(files_name_pattern)
dataset = data.TextLineDataset(filenames=file_names)
dataset = dataset.skip(skip_header_lines)
if shuffle:
dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)
    # Useful for distributed training when training on one data file, so that it can be sharded.
#dataset = dataset.shard(num_workers, worker_index)
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row))
if PROCESS_FEATURES:
dataset = dataset.map(lambda features, target: (process_features(features), target))
#dataset = dataset.batch(batch_size) #??? very long time
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
Explanation: b. data pipeline input function
End of explanation
def extend_feature_columns(feature_columns):
# crossing, bucketizing, and embedding can be applied here
feature_columns['alpha_X_beta'] = tf.feature_column.crossed_column(
[feature_columns['alpha'], feature_columns['beta']], 4)
return feature_columns
def get_feature_columns():
CONSTRUCTED_NUMERIC_FEATURES_NAMES = ['x_2', 'y_2', 'xy', 'dist_xy']
all_numeric_feature_names = NUMERIC_FEATURE_NAMES.copy()
if PROCESS_FEATURES:
all_numeric_feature_names += CONSTRUCTED_NUMERIC_FEATURES_NAMES
numeric_columns = {feature_name: tf.feature_column.numeric_column(feature_name)
for feature_name in all_numeric_feature_names}
categorical_column_with_vocabulary = \
{item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1])
for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()}
feature_columns = {}
if numeric_columns is not None:
feature_columns.update(numeric_columns)
if categorical_column_with_vocabulary is not None:
feature_columns.update(categorical_column_with_vocabulary)
if EXTEND_FEATURE_COLUMNS:
feature_columns = extend_feature_columns(feature_columns)
return feature_columns
feature_columns = get_feature_columns()
print("Feature Columns: {}".format(feature_columns))
Explanation: 3. Define Feature Columns
The input numeric columns are assumed to be normalized (or have the same scale). Otherwise, a normalizer_fn, along with the normalisation params (mean, stdv), should be passed to the tf.feature_column.numeric_column() constructor.
End of explanation
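As a hedged illustration of the normalisation point above, a numeric column with a normalizer_fn could look like the following; X_MEAN and X_STDV are placeholder statistics, not values computed from this dataset:
# Hypothetical: standardise the 'x' feature inside the feature column itself.
X_MEAN, X_STDV = 0.0, 1.0  # placeholders; would normally be computed from the training data
x_standardised = tf.feature_column.numeric_column(
    'x', normalizer_fn=lambda tensor: (tensor - X_MEAN) / X_STDV)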
def create_estimator(run_config, hparams):
feature_columns = list(get_feature_columns().values())
dense_columns = list(
filter(lambda column: isinstance(column, feature_column._NumericColumn),
feature_columns
)
)
categorical_columns = list(
filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) |
isinstance(column, feature_column._BucketizedColumn),
feature_columns)
)
indicator_columns = list(
map(lambda column: tf.feature_column.indicator_column(column),
categorical_columns)
)
estimator = tf.estimator.DNNRegressor(
feature_columns= dense_columns + indicator_columns ,
hidden_units= hparams.hidden_units,
optimizer= tf.train.AdamOptimizer(),
activation_fn= tf.nn.elu,
dropout= hparams.dropout_prob,
config= run_config
)
print("")
print("Estimator Type: {}".format(type(estimator)))
print("")
return estimator
Explanation: 4. Define an Estimator Creation Function
Get dense (numeric) columns from the feature columns
Convert categorical columns to indicator columns
Instantiate a DNNRegressor estimator given dense + indicator feature columns + params
End of explanation
def csv_serving_input_fn():
SERVING_HEADER = ['x','y','alpha','beta']
SERVING_HEADER_DEFAULTS = [[0.0], [0.0], ['NA'], ['NA']]
rows_string_tensor = tf.placeholder(dtype=tf.string,
shape=[None],
name='csv_rows')
receiver_tensor = {'csv_rows': rows_string_tensor}
row_columns = tf.expand_dims(rows_string_tensor, -1)
columns = tf.decode_csv(row_columns, record_defaults=SERVING_HEADER_DEFAULTS)
features = dict(zip(SERVING_HEADER, columns))
return tf.estimator.export.ServingInputReceiver(
process_features(features), receiver_tensor)
Explanation: 5. Define Serving Function
End of explanation
def generate_experiment_fn(**experiment_args):
def _experiment_fn(run_config, hparams):
train_input_fn = lambda: csv_input_fn(
files_name_pattern=TRAIN_DATA_FILES_PATTERN,
mode = tf.contrib.learn.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
)
eval_input_fn = lambda: csv_input_fn(
files_name_pattern=VALID_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
)
estimator = create_estimator(run_config, hparams)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=eval_input_fn,
eval_steps=None,
**experiment_args
)
return _experiment_fn
Explanation: 6. Run Experiment
a. Define Experiment Function
End of explanation
TRAIN_SIZE = 12000
NUM_EPOCHS = 1000
BATCH_SIZE = 500
NUM_EVAL = 10
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
hidden_units=[8, 4],
dropout_prob = 0.0)
model_dir = 'trained_models/{}'.format(MODEL_NAME)
run_config = tf.contrib.learn.RunConfig(
save_checkpoints_steps=CHECKPOINT_STEPS,
tf_random_seed=19830610,
model_dir=model_dir
)
print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each",NUM_EPOCHS/NUM_EVAL," epochs")
print("Save Checkpoint After",CHECKPOINT_STEPS,"steps")
Explanation: b. Set HParam and RunConfig
End of explanation
if not RESUME_TRAINING:
print("Removing previous artifacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
learn_runner.run(
experiment_fn=generate_experiment_fn(
export_strategies=[make_export_strategy(
csv_serving_input_fn,
exports_to_keep=1
)]
),
run_config=run_config,
schedule="train_and_evaluate",
hparams=hparams
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
Explanation: c. Run Experiment via learn_runner
End of explanation
TRAIN_SIZE = 12000
VALID_SIZE = 3000
TEST_SIZE = 5000
train_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TRAIN_SIZE)
valid_input_fn = lambda: csv_input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= VALID_SIZE)
test_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.EVAL,
batch_size= TEST_SIZE)
estimator = create_estimator(run_config, hparams)
train_results = estimator.evaluate(input_fn=train_input_fn, steps=1)
train_rmse = round(math.sqrt(train_results["average_loss"]),5)
print()
print("############################################################################################")
print("# Train RMSE: {} - {}".format(train_rmse, train_results))
print("############################################################################################")
valid_results = estimator.evaluate(input_fn=valid_input_fn, steps=1)
valid_rmse = round(math.sqrt(valid_results["average_loss"]),5)
print()
print("############################################################################################")
print("# Valid RMSE: {} - {}".format(valid_rmse,valid_results))
print("############################################################################################")
test_results = estimator.evaluate(input_fn=test_input_fn, steps=1)
test_rmse = round(math.sqrt(test_results["average_loss"]),5)
print()
print("############################################################################################")
print("# Test RMSE: {} - {}".format(test_rmse, test_results))
print("############################################################################################")
Explanation: 7. Evaluate the Model
End of explanation
import itertools
predict_input_fn = lambda: csv_input_fn(files_name_pattern=TEST_DATA_FILES_PATTERN,
mode= tf.estimator.ModeKeys.PREDICT,
batch_size= 5)
predictions = estimator.predict(input_fn=predict_input_fn)
values = list(map(lambda item: item["predictions"][0],list(itertools.islice(predictions, 5))))
print()
print("Predicted Values: {}".format(values))
Explanation: 8. Prediction
End of explanation |
3,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 06
Step1: Next, let's load the data. This week, we're going to load the Auto MPG data set, which is available online at the UC Irvine Machine Learning Repository. The dataset is in fixed width format, but fortunately this is supported out of the box by pandas' read_fwf function
Step2: Exploratory data analysis
According to its documentation, the Auto MPG dataset consists of eight explantory variables (i.e. features), each describing a single car model, which are related to the given target variable
Step3: As the car name is unique for each instance (according to the dataset documentation), it cannot be used to predict the MPG by itself so let's drop it as a feature and use it as the index instead
Step4: According to the documentation, the horsepower column contains a small number of missing values, each of which is denoted by the string '?'. Again, for simplicity, let's just drop these from the data set
Step5: Usually, pandas is smart enough to recognise that a column is numeric and will convert it to the appropriate data type automatically. However, in this case, because there were strings present initially, the value type of the horsepower column isn't numeric
Step6: We can correct this by converting the column values numbers manually, using pandas' to_numeric function
Step7: As can be seen, the data type of the horsepower column is now float64, i.e. a 64 bit floating point value.
According to the documentation, the origin variable is categoric (i.e. origin = 1 is not "less than" origin = 2) and so we should encode it via one hot encoding so that our model can make sense of it. This is easy with pandas
Step8: As can be seen, one hot encoding converts the origin column into separate binary columns, each representing the presence or absence of the given category. Because we're going to use a linear regression model, to avoid introducing multicollinearity, we must also drop the first of the encoded columns by setting the drop_first keyword argument to True.
Next, let's take a look at the distribution of the variables in the data frame. We can start by computing some descriptive statistics
Step9: Print a matrix of pairwise Pearson correlation values
Step10: Let's also create a scatter plot matrix
Step11: Based on the above information, we can conclude the following
Step12: The dummy model predicts the MPG with an average error of approximately $\pm 6.57$ (although, as can be seen from the distribution of errors the spread is much larger than this). Let's see if we can do better with a linear regression model.
Linear regression model
scikit-learn supports linear regression via its linear_model subpackage. This subpackage supports least squares regression, lasso regression and ridge regression, as well as many other varieties. Let's use least squares to build our model. We can do this using the LinearRegression class, which supports the following options
Step13: Our linear regression model predicts the MPG with an average error of approximately $\pm 2.59$ and a significantly smaller standard deviation too - this is a big improvement over the dummy model!
But we can do better! Earlier, we noted that several of the features had non-linear relationships with the target variable - if we could transform these variables, we might be able to make this relationship more linear. Let's consider the displacement, horsepower and weight variables
Step14: The relationship between the target and these predictors appears to be an exponentially decreasing one
Step15: Now, the relationship between the predictors and the target is much more linear
Step16: Let's run the analysis a second time and see the effect this has had
Step17: As can be seen, the average error has now decreased to $\pm 2.33$ and the standard deviation of the error to 3.12. Further reductions in error might be achieved by experimenting with feature selection, given the high degree of correlation between some of the predictors, or with a more sophisticated model, such as ridge regression.
Building the final model
Once we have identified an approach that satisfies our requirements (e.g. accuracy), we should build a final model using all of the data.
Step18: We can examine the values of the intercept (if we chose to fit one) and coefficients of our final model by printing its intercept_ and coef_ attributes, as follows | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
Explanation: Lab 06: Linear regression
Introduction
This week's lab focuses on data modelling using linear regression. At the end of the lab, you should be able to use scikit-learn to:
Create a linear regression model using the least squares technique.
Use the model to predict new values.
Measure the accuracy of the model.
Engineer new features to optimise model performance.
Getting started
Let's start by importing the packages we need. This week, we're going to use the linear_model subpackage from scikit-learn to build linear regression models using the least squares technique. We're also going to need the dummy subpackage to create a baseline regression model, to which we can compare.
End of explanation
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
df = pd.read_fwf(url, header=None, names=['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'model year', 'origin', 'car name'])
Explanation: Next, let's load the data. This week, we're going to load the Auto MPG data set, which is available online at the UC Irvine Machine Learning Repository. The dataset is in fixed width format, but fortunately this is supported out of the box by pandas' read_fwf function:
End of explanation
df.head()
Explanation: Exploratory data analysis
According to its documentation, the Auto MPG dataset consists of eight explanatory variables (i.e. features), each describing a single car model, which are related to the given target variable: the number of miles per gallon (MPG) of fuel of the given car. The following attribute information is given:
mpg: continuous
cylinders: multi-valued discrete
displacement: continuous
horsepower: continuous
weight: continuous
acceleration: continuous
model year: multi-valued discrete
origin: multi-valued discrete
car name: string (unique for each instance)
Let's start by taking a quick peek at the data:
End of explanation
df = df.set_index('car name')
df.head()
Explanation: As the car name is unique for each instance (according to the dataset documentation), it cannot be used to predict the MPG by itself so let's drop it as a feature and use it as the index instead:
Note: It seems plausible that MPG efficiency might vary from manufacturer to manufacturer, so we could generate a new feature by converting the car names into manufacturer names, but for simplicity let's just drop them here.
End of explanation
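If the manufacturer idea from the note were pursued, one rough sketch would be to take the first token of the car name from the index; this is an assumption for illustration only, and the raw names contain spelling variants (e.g. 'chevrolet' vs 'chevy') that would need cleaning before real use:
# Hypothetical feature: first word of each car name, taken from the index.
manufacturer = df.index.str.split().str[0]
manufacturer.value_counts().head()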
df = df[df['horsepower'] != '?']
Explanation: According to the documentation, the horsepower column contains a small number of missing values, each of which is denoted by the string '?'. Again, for simplicity, let's just drop these from the data set:
End of explanation
df.dtypes
Explanation: Usually, pandas is smart enough to recognise that a column is numeric and will convert it to the appropriate data type automatically. However, in this case, because there were strings present initially, the value type of the horsepower column isn't numeric:
End of explanation
df['horsepower'] = pd.to_numeric(df['horsepower'])
# Check the data types again
df.dtypes
Explanation: We can correct this by converting the column values to numbers manually, using pandas' to_numeric function:
End of explanation
df = pd.get_dummies(df, columns=['origin'], drop_first=True)
df.head()
Explanation: As can be seen, the data type of the horsepower column is now float64, i.e. a 64 bit floating point value.
According to the documentation, the origin variable is categoric (i.e. origin = 1 is not "less than" origin = 2) and so we should encode it via one hot encoding so that our model can make sense of it. This is easy with pandas: all we need to do is use the get_dummies method, as follows:
End of explanation
df.describe()
Explanation: As can be seen, one hot encoding converts the origin column into separate binary columns, each representing the presence or absence of the given category. Because we're going to use a linear regression model, to avoid introducing multicollinearity, we must also drop the first of the encoded columns by setting the drop_first keyword argument to True.
Next, let's take a look at the distribution of the variables in the data frame. We can start by computing some descriptive statistics:
End of explanation
df.corr()
Explanation: Print a matrix of pairwise Pearson correlation values:
End of explanation
pd.plotting.scatter_matrix(df, s=50, hist_kwds={'bins': 10}, figsize=(16, 16));
Explanation: Let's also create a scatter plot matrix:
End of explanation
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
model = DummyRegressor() # Predicts the target as the average of the features
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0) # 5 fold cross validation
y_pred = cross_val_predict(model, X, y, cv=outer_cv) # Make predictions via cross validation
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the dummy model',
xlabel='Error'
);
Explanation: Based on the above information, we can conclude the following:
Based on a quick visual inspection, there don't appear to be significant numbers of outliers in the data set. (We could make boxplots for each variable - but let's save time and skip it here.)
Most of the explanatory variables appear to have a non-linear relationship with the target.
There is a high degree of correlation ($r > 0.9$) between cylinders and displacement and, also, between weight and displacement.
The following variables appear to be left-skewed: mpg, displacement, horsepower, weight.
The acceleration variable appears to be normally distributed.
The model year follows a roughly uniform distribution.
The cylinders and origin variables have few unique values.
For now, we'll just note this information, but we'll come back to it later when improving our model.
Data Modelling
Dummy model
Let's start our analysis by building a dummy regression model that makes very naive (often incorrect) predictions about the target variable. This is a good first step as it gives us a benchmark to compare our later models to.
Creating a dummy regression model with scikit-learn is easy: first, we create an instance of DummyRegressor, and then we evaluate its performance on the data using cross validation, just like last week.
Note: Our dummy model has no hyperparameters, so we don't need to do an inner cross validation or grid search - just the outer cross validation to estimate the model accuracy.
End of explanation
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
model = LinearRegression(fit_intercept=True, normalize=False) # Use least squares linear regression
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0) # 5-fold cross validation
y_pred = cross_val_predict(model, X, y, cv=outer_cv) # Make predictions via cross validation
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the linear regression model',
xlabel='Error'
);
Explanation: The dummy model predicts the MPG with an average error of approximately $\pm 6.57$ (although, as can be seen from the distribution of errors the spread is much larger than this). Let's see if we can do better with a linear regression model.
Linear regression model
scikit-learn supports linear regression via its linear_model subpackage. This subpackage supports least squares regression, lasso regression and ridge regression, as well as many other varieties. Let's use least squares to build our model. We can do this using the LinearRegression class, which supports the following options:
fit_intercept: If True, prepend an all-ones predictor to the feature matrix before fitting the regression model; otherwise, use the feature matrix as is. By default, this is True if not specified.
normalize: If True, standardize the input features before fitting the regression model; otherwise use the unscaled features. By default, this is False if not specified.
Generally, it makes sense to fit an intercept term when building regression models, the exception being in cases where it is known that the target variable is zero when all the feature values are zero. In our case, it seems unlikely that an all-zero feature vector would correspond to a zero MPG target value (for instance, consider the meaning of model year = 0 and weight = 0 in the context of the analysis). Consequently, we can set fit_intercept=True below.
Whether to standardize the input features or not depends on a number of factors:
Standardization can mitigate against multicollinearity - but only in cases where supplemental new features have been generated based on a combination of one or more existing features, i.e. where both the new feature and the features it was derived from are all included as input features.
Standardizing the input data ensures that the resulting model coefficients indicate the relative importance of their corresponding feature - but only in cases where the features are all approximately normally distributed.
In our case, as we are not generating supplemental new features and several of the features are not normally distributed (see the scatter plot matrix above), we can choose not to standardize them (normalize=False) without any disadvantage.
Note: In cases where there is uncertainty as to whether an intercept should be fit or not, or whether the input features should be standardized or not, or both, we can use a grid search with nested cross validation (i.e. model selection) to determine the correct answer.
End of explanation
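As an illustration of that note, here is a minimal sketch of such a search, reusing X, y, KFold and LinearRegression from above and assuming scikit-learn's model_selection module is available:
from sklearn.model_selection import GridSearchCV
# hypothetical grid over the two options discussed above
param_grid = {'fit_intercept': [True, False], 'normalize': [True, False]}
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
grid = GridSearchCV(LinearRegression(), param_grid, scoring='neg_mean_absolute_error', cv=inner_cv)
grid.fit(X, y)
grid.best_params_  # the combination with the lowest cross-validated error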
pd.plotting.scatter_matrix(df[['mpg', 'displacement', 'horsepower', 'weight']], s=50, figsize=(9, 9));
Explanation: Our linear regression model predicts the MPG with an average error of approximately $\pm 2.59$ and a significantly smaller standard deviation too - this is a big improvement over the dummy model!
But we can do better! Earlier, we noted that several of the features had non-linear relationships with the target variable - if we could transform these variables, we might be able to make this relationship more linear. Let's consider the displacement, horsepower and weight variables:
End of explanation
df['displacement'] = df['displacement'].map(np.log)
df['horsepower'] = df['horsepower'].map(np.log)
df['weight'] = df['weight'].map(np.log)
Explanation: The relationship between the target and these predictors appears to be an exponentially decreasing one: as the predictors increase in value, there is an exponential decrease in the target value. Log-transforming the variables should help to remove this effect (logarithms are the inverse mathematical operation to exponentials):
End of explanation
pd.plotting.scatter_matrix(df[['mpg', 'displacement', 'horsepower', 'weight']], s=50, figsize=(9, 9));
Explanation: Now, the relationship between the predictors and the target is much more linear:
End of explanation
X = df.drop('mpg', axis='columns')
y = df['mpg']
model = LinearRegression(fit_intercept=True, normalize=False)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=outer_cv)
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the linear regression model with transformed features',
xlabel='Error'
);
Explanation: Let's run the analysis a second time and see the effect this has had:
End of explanation
X = df.drop('mpg', axis='columns')
y = df['mpg']
model = LinearRegression(fit_intercept=True, normalize=False)
model.fit(X, y) # Fit the model using all of the data
Explanation: As can be seen, the average error has now decreased to $\pm 2.33$ and the standard deviation of the error to 3.12. Further reductions in error might be achieved by experimenting with feature selection, given the high degree of correlation between some of the predictors, or with a more sophisticated model, such as ridge regression.
Building the final model
Once we have identified an approach that satisfies our requirements (e.g. accuracy), we should build a final model using all of the data.
End of explanation
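As a possible follow-up to the ridge regression suggestion above, here is a sketch only, reusing X, y, outer_cv, cross_val_predict and mean_absolute_error from earlier; the alpha value is an arbitrary assumption:
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=1.0, fit_intercept=True)  # alpha chosen arbitrarily for illustration
y_pred_ridge = cross_val_predict(ridge, X, y, cv=outer_cv)
print('Ridge mean absolute error: %f' % mean_absolute_error(y, y_pred_ridge))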
print(model.intercept_)
print(model.coef_) # Coefficients are printed in the same order as the columns in the feature matrix, X
Explanation: We can examine the values of the intercept (if we chose to fit one) and coefficients of our final model by printing its intercept_ and coef_ attributes, as follows:
End of explanation |
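A quick sanity check of the final model, as a sketch: compare its in-sample predictions for the first few cars with their actual MPG values.
# in-sample predictions lined up against the actual target values
predicted = pd.Series(model.predict(X), index=X.index, name='predicted_mpg')
pd.concat([y, predicted], axis=1).head()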
3,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data analytics and machine learning with Python
I - Acquiring data
A simple HTTP request
Step1: Communicating with APIs
Step2: Parsing websites
Step3: Reading local files (CSV/JSON)
Step4: Analyzing the dataframe
Step5: II - Exploring data
Step6: Machine learning
Feature extraction
Step7: Extracting features from text
Step9: Dict vectorizer
Step10: Pre-processing
Scaling
Step11: Dimensionality reduction
Step12: Machine learning models
Classification (SVM)
Step13: Regression (linear regression)
Step14: Clustering (DBScan)
Step15: Cross-validation
Step16: A more complex Machine Learning pipeline | Python Code:
import requests
print(requests.get("http://example.com").text)
Explanation: Data analytics and machine learning with Python
I - Acquiring data
A simple HTTP request
End of explanation
response = requests.get("https://www.googleapis.com/books/v1/volumes", params={"q":"machine learning"})
raw_data = response.json()
titles = [item['volumeInfo']['title'] for item in raw_data['items']]
titles
Explanation: Communicating with APIs
End of explanation
import lxml.html
page = lxml.html.parse("http://www.blocket.se/stockholm?q=apple")
# ^ This is probably illegal. Blocket, please don't sue me!
items_data = []
for el in page.getroot().find_class("item_row"):
links = el.find_class("item_link")
images = el.find_class("item_image")
prices = el.find_class("list_price")
if links and images and prices and prices[0].text:
items_data.append({"name": links[0].text,
"image": images[0].attrib['src'],
"price": int(prices[0].text.split(":")[0].replace(" ", ""))})
items_data
Explanation: Parsing websites
End of explanation
import pandas
df = pandas.read_csv('sample.csv')
# Display the DataFrame
df
# DataFrame's columns
df.columns
# Values of a given column
df.Model
Explanation: Reading local files (CSV/JSON)
End of explanation
# Any missing values?
df['Price']
df['Description']
# Fill missing prices by a linear interpolation
df['Description'] = df['Description'].fillna("No description is available.")
df['Price'] = df['Price'].interpolate()
df
Explanation: Analyzing the dataframe
End of explanation
import matplotlib.pyplot as plt
df = pandas.read_csv('sample2.csv')
df
# This table has 3 columns: Office, Year, Sales
df.columns
# It's really easy to query data with Pandas:
df[(df['Office'] == 'Stockholm') & (df['Sales'] > 260)]
# It's also easy to do aggregations...
aggregated_stockholm_sales = df[df.Office == 'Stockholm'].groupby('Year').sum()
aggregated_stockholm_sales
aggregated_ny_sales = df[df.Office == 'New York'].groupby('Year').sum()
# ... and generate plots
aggregated_stockholm_sales.plot(kind='bar')
aggregated_ny_sales.plot(kind='bar', color='g')
Explanation: II - Exploring data
End of explanation
from sklearn import feature_extraction
Explanation: Machine learning
Feature extraction
End of explanation
corpus = ['Cats? I love cats!',
'I love dogs.',
'I hate cats :(',
'I love trains',
]
tfidf = feature_extraction.text.TfidfVectorizer()
print(tfidf.fit_transform(corpus).toarray())
print(tfidf.get_feature_names())
Explanation: Extracting features from text
End of explanation
data = [{"weight": 194.0, "sex": 'female', "student": True},
{"weight": 60., "sex": 'female', "student": True},
{"weight": 80.1, "sex": 'male', "student": False},
{"weight": 65.3, "sex": 'male', "student": True},
{"weight": 58.5, "sex": 'female', "student": False}]
vectorizer = feature_extraction.DictVectorizer(sparse=False)
vectors = vectorizer.fit_transform(data)
print(vectors)
print(vectorizer.get_feature_names())
Explanation: Dict vectorizer
End of explanation
from sklearn import preprocessing
data = [[10., 2345., 0., 2.],
[3., -3490., 0.1, 1.99],
[13., 3903., -0.2, 2.11]]
preprocessing.normalize(data)
Explanation: Pre-processing
Scaling
End of explanation
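Note that normalize above rescales each row to unit norm; column-wise standardization (zero mean, unit variance per feature) is a different operation. A sketch using the same data:
scaler = preprocessing.StandardScaler()
print(scaler.fit_transform(data))  # each column now has mean 0 and standard deviation 1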
from sklearn import decomposition
data = [[0.3, 0.2, 0.4, 0.32],
[0.3, 0.5, 1.0, 0.19],
[0.3, -0.4, -0.8, 0.22]]
pca = decomposition.PCA()
print(pca.fit_transform(data))
print(pca.explained_variance_ratio_)
Explanation: Dimensionality reduction
End of explanation
import numpy as np
from sklearn import datasets
from sklearn import svm
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
plt.scatter(X[:, 0], X[:, 1], color=['rgb'[v] for v in y])
to_predict = np.array([[4.35, 3.1], [5.61, 2.42]])
plt.scatter(to_predict[:, 0], to_predict[:, 1], color='purple')
# Training the model
clf = svm.SVC(kernel='rbf')
clf.fit(X, y)
# Doing predictions
print(clf.predict(to_predict))
Explanation: Machine learning models
Classification (SVM)
End of explanation
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
def f(x):
return x + np.random.random() * 3.
X = np.arange(0, 5, 0.5)
X = X.reshape((len(X), 1))
y = list(map(f, X))
clf = linear_model.LinearRegression()
clf.fit(X, y)
new_X = np.arange(0.2, 5.2, 0.3)
new_X = new_X.reshape((len(new_X), 1))
new_y = clf.predict(new_X)
plt.scatter(X, y, color='g', label='Training data')
plt.plot(new_X, new_y, '.-', label='Predicted')
plt.legend()
Explanation: Regression (linear regression)
End of explanation
from sklearn.cluster import DBSCAN
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=200, centers=centers, cluster_std=0.3,
random_state=0)
plt.scatter(X[:, 0], X[:, 1])
# Compute DBSCAN
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
db.labels_
import matplotlib.pyplot as plt
plt.scatter(X[:, 0], X[:, 1], c=['rgbw'[v] for v in db.labels_])
Explanation: Clustering (DBScan)
End of explanation
from sklearn import svm, cross_validation, datasets
iris = datasets.load_iris()
X, y = iris.data, iris.target
model = svm.SVC()
print(cross_validation.cross_val_score(model, X, y, scoring='precision_weighted'))
print(cross_validation.cross_val_score(model, X, y, scoring='mean_squared_error'))
Explanation: Cross-validation
End of explanation
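The number of folds can also be set explicitly, a sketch reusing model, X and y from above:
print(cross_validation.cross_val_score(model, X, y, cv=5, scoring='accuracy'))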
from collections import Counter
import json
import pandas as pd
import scipy.sparse
import sklearn.pipeline
import sklearn.cross_validation
import sklearn.feature_extraction
import sklearn.naive_bayes
def open_dataset(path):
with open(path) as file:
data = json.load(file)
df = pd.DataFrame(data).set_index('id')
return df
df = open_dataset('train.json')
pipeline = sklearn.pipeline.make_pipeline(sklearn.feature_extraction.DictVectorizer(), sklearn.feature_extraction.text.TfidfTransformer(sublinear_tf=True))
pipeline_bis = sklearn.pipeline.make_pipeline(sklearn.feature_extraction.DictVectorizer(), sklearn.feature_extraction.text.TfidfTransformer(sublinear_tf=True))
def map_term_count(ingredients):
return Counter(sum((i.split(' ') for i in ingredients), []))
X = pipeline.fit_transform(df.ingredients.apply(Counter))
X = scipy.sparse.hstack([X, pipeline_bis.fit_transform(df.ingredients.apply(map_term_count))])
y = df.cuisine.values
model = sklearn.naive_bayes.MultinomialNB(alpha=0.1)
# Cross-validation
score = sklearn.cross_validation.cross_val_score(model, X, y, cv=2)
print(score)
# Running on the test dataset
t_df = open_dataset('test.json')
X_test = pipeline.transform(t_df.ingredients.apply(Counter))
X_test = scipy.sparse.hstack([X_test, pipeline_bis.transform(t_df.ingredients.apply(map_term_count))])
model.fit(X, y)
predictions = model.predict(X_test)
result_df = pd.DataFrame(index=t_df.index)
result_df['cuisine'] = pd.Series(predictions, index=result_df.index)
result_df['ingredients'] = t_df['ingredients']
result_df
Explanation: A more complex Machine Learning pipeline: "what's cooking?"
This is a basic solution I wrote for the Kaggle competition "What's cooking?" where the goal is to predict to which type of cuisine a meal belongs to based on a list of ingredients.
You'll need more advanced features and methods to win a Kaggle competition, but this already gets you 90% there.
End of explanation |
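To turn the predictions into a submission file, one more step is enough; this is a sketch, assuming the competition expects an id,cuisine CSV:
result_df[['cuisine']].to_csv('submission.csv')  # the 'id' index is written as the first column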
3,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: TEST-INSTITUTE-1
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
3,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
conversion, drawing, saving, analysis
copy of dan's thing
converts .csv to .gml and .net
draws graph, saves graph.png
try to combine into this
Step1: degree centrality
for a node v is the fraction of nodes it is connected to
Step2: closeness centrality
of a node u is the reciprocal of the sum of the shortest path distances from u to all n-1 other nodes. Since the sum of distances depends on the number of nodes in the graph, closeness is normalized by the sum of minimum possible distances n-1. Notice that higher values of closeness indicate higher centrality.
Step3: betweenness centrality
of a node v is the sum of the fraction of all-pairs shortest paths that pass through v
Step4: degree assortativity coefficient
Assortativity measures the similarity of connections in the graph with respect to the node degree. | Python Code:
import pandas as pd
import numpy as np
import networkx as nx
from copy import deepcopy
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.backends.backend_pdf import PdfPages
from glob import glob
fileName = 'article0'
def getFiles(fileName):
matches = glob('*'+fileName+'*')
bigFile = matches[0]
data = pd.DataFrame.from_csv(bigFile)
return clearSource(data)
def clearSource(data):
columns = ['source','target']
pre = len(data)
for column in columns:
data = data[pd.notnull(data[column])]
post = len(data)
print "Filtered %s rows to %s rows by removing rows with blank values in columns %s" % (pre,post,columns)
return data
#data = getFiles(fileName)
def getStuff(data,labels):
forEdges = labels == ['edge']
columns = list(data.columns.values)
items = dict()
nameFunc = {True: lambda x,y: '%s - %s - %s' % (x['source'],x['edge'],x['target']),
False: lambda x,y: x[y]}[forEdges]
extra = ['source','target'] * forEdges
for label in labels:
relevant = [col for col in columns if label+'-' in col] + extra
#relevant = extra
print "Extracting %s data from %s" % (label,relevant)
for i in data.index:
row = data.ix[i]
for col in relevant:
if str(row[col]).lower() != 'nan':
name = nameFunc(row,label)
if name not in items:
items[name] = dict()
items[name][col.replace(label+'-','')] = row[col]
return items
def getNodes(data):
return getStuff(data,['source','target'])
def getEdges(data):
return getStuff(data,['edge'])
#allNodes = getNodes(data); allEdges = getEdges(data)
def addNodes(graph,nodes):
for key,value in nodes.iteritems():
graph.add_node(key,attr_dict=value)
return graph
def addEdges(graph,edges):
for key,value in edges.iteritems():
value['label'] = key
value['edge'] = key.split(' - ')[1]
graph.add_edge(value['source'],value['target'],attr_dict = value)
return graph
#########
def createNetwork(edges,nodes):
graph = nx.MultiGraph()
graph = addNodes(graph,nodes)
graph = addEdges(graph,edges)
return graph
#fullGraph = createNetwork(allEdges,allNodes)
def drawIt(graph,what='graph', save_plot=None):
style=nx.spring_layout(graph)
size = graph.number_of_nodes()
print "Drawing %s of size %s:" % (what,size)
if size > 20:
plt.figure(figsize=(10,10))
if size > 40:
nx.draw(graph,style,node_size=60,font_size=8)
if save_plot is not None:
print('saving: {}'.format(save_plot))
plt.savefig(save_plot)
else:
nx.draw(graph,style)
if save_plot is not None:
print('saving: {}'.format(save_plot))
plt.savefig(save_plot)
else:
nx.draw(graph,style)
if save_plot is not None:
print('saving: {}'.format(save_plot))
plt.savefig(save_plot)
plt.show()
def describeGraph(graph, save_plot=None):
components = nx.connected_components(graph)
components = list(components)
isolated = [entry[0] for entry in components if len(entry)==1]
params = (graph.number_of_edges(),graph.number_of_nodes(),len(components),len(isolated))
print "Graph has %s nodes, %s edges, %s connected components, and %s isolated nodes\n" % params
drawIt(graph, save_plot=save_plot)
for idx, sub in enumerate(components):
drawIt(graph.subgraph(sub),what='component', save_plot='{}-{}.png'.format('component', idx))
print "Isolated nodes:", isolated
def getGraph(fileRef, save_plot=None):
data = getFiles(fileName)
nodes = getNodes(data)
edges = getEdges(data)
graph = createNetwork(edges,nodes)
fileOut = fileRef.split('.')[0]+'.gml'
print "Writing GML file to %s" % fileOut
nx.write_gml(graph, fileOut)
fileOutNet = fileRef.split('.')[0]+'.net'
print "Writing net file to %s" % fileOutNet
nx.write_pajek(graph, fileOutNet)
describeGraph(graph, save_plot)
return graph, nodes, edges
fileName = 'data/csv/article1'
graph, nodes, edges = getGraph(fileName, save_plot='graph.png')
plt.figure(figsize=(12, 12))
nx.draw_spring(graph, node_color='g', with_labels=True, arrows=True)
plt.show()
# return a dictionary of centrality values for each node
nx.degree_centrality(graph)
Explanation: conversion, drawing, saving, analysis
copy of dan's thing
converts .csv to .gml and .net
draws graph, saves graph.png
try to combine into this
End of explanation
# the type of degree centrality is a dictionary
type(nx.degree_centrality(graph))
# get all the values of the dictionary, this returns a list of centrality scores
# turn the list into a numpy array
# take the mean of the numpy array
np.array(nx.degree_centrality(graph).values()).mean()
Explanation: degree centrality
for a node v is the fraction of nodes it is connected to
End of explanation
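A quick check of the definition on a tiny graph, as a sketch: in a 3-node path the middle node is connected to 2 of the 2 other nodes (centrality 1.0) and each endpoint to 1 of 2 (centrality 0.5).
toy = nx.path_graph(3)  # nodes 0-1-2 in a line
nx.degree_centrality(toy)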
nx.closeness_centrality(graph)
Explanation: closeness centrality
of a node u is the reciprocal of the sum of the shortest path distances from u to all n-1 other nodes. Since the sum of distances depends on the number of nodes in the graph, closeness is normalized by the sum of minimum possible distances n-1. Notice that higher values of closeness indicate higher centrality.
End of explanation
nx.betweenness_centrality(graph)
np.array(nx.betweenness_centrality(graph).values()).mean()
Explanation: betweenness centrality
of a node v is the sum of the fraction of all-pairs shortest paths that pass through v
End of explanation
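Another small sanity check, as a sketch: in a star graph every shortest path between two leaves passes through the hub, so the hub's normalized betweenness is 1.0 and the leaves' is 0.
toy = nx.star_graph(4)  # node 0 is the hub, nodes 1-4 are leaves
nx.betweenness_centrality(toy)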
nx.degree_assortativity_coefficient(graph)
Explanation: degree assortativity coefficient
Assortativity measures the similarity of connections in the graph with respect to the node degree.
End of explanation |
3,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unit 3
Step1: 1. Describe the results.
Now run time.time() again below.
2. Describe the results and compare them to the first time.time() call.
Read the info on the time module here
Step2: But we digress. Back to random numbers...
The default seed for the first Python random number generator is actually set by the computer (based on the computer time). Then with each subsequent call for a new random number, the previous random number produced is used as the seed for the next number in the sequence.
4. If the next random number is generated using the one before it, why isn't the series the same every time? (answer the question, then write and run some code to demonstrate what you mean) | Python Code:
import time
time.time()
Explanation: Unit 3: Simulation
Lesson 18: Non-uniform distributions
Notebook Authors
(fill in your two names here)
Facilitator: (fill in name)
Spokesperson: (fill in name)
Process Analyst: (fill in name)
Quality Control: (fill in name)
If there are only three people in your group, have one person serve as both spokesperson and process analyst for the rest of this activity.
At the end of this Lesson, you will be asked to record how long each Model required for your team. The Facilitator should keep track of time for your team.
Computational Focus: Non-uniform distributions
As we saw in the previous lesson, a uniform random number distribution can be easily generated and shifted appropriately using Python. Some applications, however, may require a series of random numbers that are not distributed uniformly. For example, the probability for a system to be at particular energy level as a function of temperature is given by an exponential distribution, the velocities of an ideal gas at a particular temperature follow a Gaussian (normal) distribution, and many environmental, behavioral, and genetic data sets are modeled using these and other non-uniform distributions. This lesson will use two different procedures for creating any desired distribution of random numbers using a uniform random-number generator.
Model 1: Random seed
Random number functions are really equations that take a number as input (referred to as the seed) and produce a new number as output. When you do not specify the seed (as we have not so far), Python uses the current system time to set the seed.
Let's look at this briefly. Run the code below.
End of explanation
## gets the time, still not very human readable
time.localtime()
## formats the time nicely
time.asctime(time.localtime())
Explanation: 1. Describe the results.
Now run time.time() again below.
2. Describe the results and compare them to the first time.time() call.
Read the info on the time module here: http://www.tutorialspoint.com/python3/python_date_time.htm
3. Explain the output of the time.time() function calls (what is that number, how and why are the results of the two calls different, etc.).
Just in case you ever want to know what time it is, computers can give you a more human readable format (and if you're ever really interested, Python also has the datetime library that has a lot of super useful tools).
Run the code below.
End of explanation
## series of random numbers doesn't repeat
Explanation: But we digress. Back to random numbers...
The default seed for the first Python random number generator is actually set by the computer (based on the computer time). Then with each subsequent call for a new random number, the previous random number produced is used as the seed for the next number in the sequence.
4. If the next random number is generated using the one before it, why isn't the series the same every time? (answer the question, then write and run some code to demonstrate what you mean)
End of explanation |
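One possible demonstration for question 4, as a sketch: fixing the seed makes the sequence repeat, which shows that run-to-run differences come from the starting seed rather than from the update rule.
import random
random.seed(42)                               # fix the starting seed
first = [random.random() for i in range(3)]
random.seed(42)                               # reset to the same seed
second = [random.random() for i in range(3)]
print(first)
print(second)                                 # identical to first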
3,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas 5
Step1: <a id=weo></a>
WEO data on government debt
We use the IMF's data on government debt again, specifically its World Economic Outlook database, commonly referred to as the WEO. We focus on government debt expressed as a percentage of GDP, variable code GGXWDG_NGDP.
The central question here is how the debt of Argentina, which defaulted in 2001, compared to other countries. Was it a matter of too much debt or something else?
Load data
First step
Step2: Clean and shape
Second step
Step3: Example. Let's try a simple graph of the dataframe dbt. The goal is to put Argentina in perspective by plotting it along with many other countries.
Step4: Exercise.
What do you take away from this graph?
What would you change to make it look better?
To make it more informative?
To put Argentina's debt in context?
Exercise. Do the same graph with Greece (GRC) as the country of interest. How does it differ? Why do you think that is?
<a id=describe></a>
Describing numerical data
Let's step back a minute. What we're trying to do is compare Argentina to other countries. What's the best way to do that? This isn't a question with an obvious best answer, but we can try some things, see how they look. One thing we could do is compare Argentina to the mean or median. Or to some other feature of the distribution.
We work up to this by looking first at some features of the distribution of government debt numbers across countries. Some of this we've seen, so is new.
What's (not) there?
Let's check out the data first. How many non-missing values do we have at each date? We can do that with the count method. The argument axis=1 says to do this by date, counting across columns (axis number 1).
Step5: Describing series
Let's take the data for 2001 -- the year of Argentina's default -- and see how Argentina compares. Was its debt high compared to other countries?
which leads to more questions. How would we compare? Compare Argentina to the mean or median? Something else?
Let's see how that works.
Step6: Comment. If we add enough quantiles, we might as well plot the whole distribution. The easiest way to do this is with a histogram.
Step7: Describing dataframes
We can compute the same statistics for dataframes. Here we have a choice: we can compute (say) the mean down rows (axis=0) or across columns (axis=1). If we use the dataframe dbt, computing the mean across countries (columns) calls for axis=1.
Step8: Example. Let's add the mean to our graph. We make it a dashed line with linestyle='dashed'.
Step9: Question. Do you think this looks better when the mean varies with time, or when we use a constant mean? Let's try it and see.
Step10: Exercise. Which do we like better?
Exercise. Replace the (constant) mean with the (constant) median? Which do you prefer?
<a id=value-counts></a>
Describing categorical data
A categorical variable is one that takes on a small number of values. States take on one of fifty values. University students are either grad or undergrad. Students select majors and concentrations.
We're going to do two things with categorical data
Step11: <a id=groupby></a>
Grouping data
Next up
Step12: Now that we have a groupby object, what can we do with it?
Step13: Add this via Spencer.
Comment. Note that the combination of groupby and count created a dataframe with
Its index is the variable we grouped by. If we group by more than one, we get a multi-index.
Its columns are the other variables.
Exercise. Take the code
python
counts = ml.groupby(['title', 'movieId'])
Without running it, what is the index of counts? What are its columns? | Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
Explanation: Pandas 5: Summarizing data
Another in a series of notebooks that describe Pandas' powerful data management tools. In this one we summarize our data in a variety of ways. Which is more interesting than it sounds.
Outline:
WEO government debt data. Something to work with. How does Argentina's government debt compare to the debt of other countries? How did it compare when it defaulted in 2001?
Describing numerical data. Descriptive statistics: numbers of non-missing values, mean, median, quantiles.
Describing categorical data. The excellent value_counts method.
Grouping data. An incredibly useful collection of tools based on grouping data based on a variable: men and woman, grads and undergrads, and so on.
Note: requires internet access to run.
This IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp.
<a id=prelims></a>
Preliminaries
Import packages, etc.
End of explanation
url1 = 'http://www.imf.org/external/pubs/ft/weo/2015/02/weodata/'
url2 = 'WEOOct2015all.xls'
url = url1 + url2
weo = pd.read_csv(url, sep='\t',
usecols=[1,2] + list(range(19,45)),
thousands=',',
na_values=['n/a', '--'])
print('Variable dtypes:\n', weo.dtypes.head(6), sep='')
Explanation: <a id=weo></a>
WEO data on government debt
We use the IMF's data on government debt again, specifically its World Economic Outlook database, commonly referred to as the WEO. We focus on government debt expressed as a percentage of GDP, variable code GGXWDG_NGDP.
The central question here is how the debt of Argentina, which defaulted in 2001, compared to other countries. Was it a matter of too much debt or something else?
Load data
First step: load the data and extract a single variable: government debt (code GGXWDG_NGDP) expressed as a percentage of GDP.
End of explanation
# select debt variable
variables = ['GGXWDG_NGDP']
db = weo[weo['WEO Subject Code'].isin(variables)]
# drop variable code column (they're all the same)
db = db.drop('WEO Subject Code', axis=1)
# set index to country code
db = db.set_index('ISO')
# name columns
db.columns.name = 'Year'
# transpose
dbt = db.T
# see what we have
dbt.head()
Explanation: Clean and shape
Second step: select the variable we want and generate the two dataframes.
End of explanation
fig, ax = plt.subplots()
dbt.plot(ax=ax,
legend=False, color='blue', alpha=0.3,
ylim=(0,150)
)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
Explanation: Example. Let's try a simple graph of the dataframe dbt. The goal is to put Argentina in perspective by plotting it along with many other countries.
End of explanation
dbt.shape
# count non-missing values
dbt.count(axis=1).plot()
Explanation: Exercise.
What do you take away from this graph?
What would you change to make it look better?
To make it more informative?
To put Argentina's debt in context?
Exercise. Do the same graph with Greece (GRC) as the country of interest. How does it differ? Why do you think that is?
<a id=describe></a>
Describing numerical data
Let's step back a minute. What we're trying to do is compare Argentina to other countries. What's the best way to do that? This isn't a question with an obvious best answer, but we can try some things, see how they look. One thing we could do is compare Argentina to the mean or median. Or to some other feature of the distribution.
We work up to this by looking first at some features of the distribution of government debt numbers across countries. Some of this we've seen; some is new.
What's (not) there?
Let's check out the data first. How many non-missing values do we have at each date? We can do that with the count method. The argument axis=1 says to do this by date, counting across columns (axis number 1).
End of explanation
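Equivalently, here is a sketch that counts what is missing rather than what is there:
# number of missing values at each date
dbt.isnull().sum(axis=1).plot()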
# 2001 data
db01 = db['2001']
db01['ARG']
db01.mean()
db01.median()
db01.describe()
db01.quantile(q=[0.25, 0.5, 0.75])
Explanation: Describing series
Let's take the data for 2001 -- the year of Argentina's default -- and see how Argentina compares. Was its debt high compared to other countries?
which leads to more questions. How would we compare? Compare Argentina to the mean or median? Something else?
Let's see how that works.
End of explanation
fig, ax = plt.subplots()
db01.hist(bins=15, ax=ax, alpha=0.35)
ax.set_xlabel('Government Debt (Percent of GDP)')
ax.set_ylabel('Number of Countries')
darg = db01['ARG']
ymin, ymax = ax.get_ylim()
ax.vlines(darg, ymin, ymax, color='blue', lw=2)
Explanation: Comment. If we add enough quantiles, we might as well plot the whole distribution. The easiest way to do this is with a histogram.
End of explanation
# here we compute the mean across countries at every date
dbt.mean(axis=1).head()
# or we could do the median
dbt.median(axis=1).head()
# or a bunch of stats at once
# NB: db not dbt (there's no axis argument here)
db.describe()
# the other way
dbt.describe()
Explanation: Describing dataframes
We can compute the same statistics for dataframes. Here we have a choice: we can compute (say) the mean down rows (axis=0) or across columns (axis=1). If we use the dataframe dbt, computing the mean across countries (columns) calls for axis=1.
End of explanation
fig, ax = plt.subplots()
dbt.plot(ax=ax,
legend=False, color='blue', alpha=0.3,
ylim=(0,150)
)
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
dbt.mean(axis=1).plot(ax=ax, color='black', linewidth=2, linestyle='dashed')
Explanation: Example. Let's add the mean to our graph. We make it a dashed line with linestyle='dashed'.
End of explanation
dbar = dbt.mean().mean()
dbar
fig, ax = plt.subplots()
dbt.plot(ax=ax,
legend=False, color='blue', alpha=0.3,
ylim=(0,150)
)
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
xmin, xmax = ax.get_xlim()
ax.hlines(dbar, xmin, xmax, linewidth=2, linestyle='dashed')
Explanation: Question. Do you think this looks better when the mean varies with time, or when we use a constant mean? Let's try it and see.
End of explanation
url = 'http://pages.stern.nyu.edu/~dbackus/Data/mlcombined.csv'
ml = pd.read_csv(url)
print('Dimensions:', ml.shape)
ml.head(10)
# which movies have the most ratings?
ml['title'].value_counts().head(10)
ml['title'].value_counts().head(10).plot.barh(alpha=0.5)
# which people have rated the most movies?
ml['userId'].value_counts().head(10)
Explanation: Exercise. Which do we like better?
Exercise. Replace the (constant) mean with the (constant) median? Which do you prefer?
<a id=value-counts></a>
Describing categorical data
A categorical variable is one that takes on a small number of values. States take on one of fifty values. University students are either grad or undergrad. Students select majors and concentrations.
We're going to do two things with categorical data:
In this section, we count the number of observations in each category using the value_counts method. This is a series method, we apply it to one series/variable at a time.
In the next section, we go on to describe how other variables differ across categories. How do students who major in finance differ from those who major in English? And so on.
We start with the combined MovieLens data we constructed in the previous notebook.
End of explanation
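value_counts works on any column with a modest number of distinct values. A sketch: the distribution of the ratings themselves.
# how often each rating value appears
ml['rating'].value_counts()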
# group
g = ml[['title', 'rating']].groupby('title')
type(g)
Explanation: <a id=groupby></a>
Grouping data
Next up: group data by some variable. As an example, how would we compute the average rating of each movie? If you think for a minute, you might think of these steps:
Group the data by movie: Put all the "Pulp Fiction" ratings in one bin, all the "Shawshank" ratings in another. We do that with the groupby method.
Compute a statistic (the mean, for example) for each group.
Pandas has tools that make that relatively easy.
End of explanation
# the number in each category
g.count().head(10)
# what type of object have we created?
type(g.count())
Explanation: Now that we have a groupby object, what can we do with it?
End of explanation
gm = g.mean()
gm.head(10)
# we can put them together
grouped = g.count()
grouped = grouped.rename(columns={'rating': 'Number'})
grouped['Mean'] = g.mean()
grouped.head(10)
grouped.plot.scatter(x='Number', y='Mean')
Explanation: Add this via Spencer.
Comment. Note that the combination of groupby and count created a dataframe with
Its index is the variable we grouped by. If we group by more than one, we get a multi-index.
Its columns are the other variables.
Exercise. Take the code
python
counts = ml.groupby(['title', 'movieId'])
Without running it, what is the index of counts? What are its columns?
End of explanation |
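As an aside, a sketch: the count-and-mean table built above can also be produced in one step with agg.
grouped_alt = ml.groupby('title')['rating'].agg(['count', 'mean'])
grouped_alt.head(10)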
3,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 5
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Create new features
Step3: As in Week 2, we consider features that are some transformations of inputs.
Step4: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
Step5: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step6: Find what features had non-zero weight.
Step7: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION
Step8: Next, we write a loop that does the following
Step9: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
Step10: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
Step11: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two phase procedure to achieve this goal
Step12: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values
Step13: Now, implement a loop that searches through this space of possible l1_penalty values
Step14: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find
Step15: QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found
Step16: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step17: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients? | Python Code:
import graphlab
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
sales.head()
Explanation: Create new features
End of explanation
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to float, before creating a new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
Explanation: As in Week 2, we consider features that are some transformations of inputs.
End of explanation
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
Explanation: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
End of explanation
model_all.get('coefficients')[model_all.get('coefficients')['value'] != 0.0]
Explanation: Find what features had non-zero weight.
End of explanation
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
According to this list of weights, which of the features have been chosen?
Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets:
* Split our sales data into 2 sets: training and test
* Further split our training data into two sets: train, validation
Be very careful that you use seed = 1 to ensure you get the same answer!
End of explanation
validation_rss_avg_list = []
best_l1_penalty = 1
min_rss = float("inf")
import numpy as np
for l1_penalty in np.logspace(1, 7, num=13):
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=l1_penalty, verbose=False)
# find validation error
prediction = model.predict(validation[all_features])
error = prediction - validation['price']
error_squared = error * error
rss = error_squared.sum()
print "L1 penalty " + str(l1_penalty) + " validation rss = " + str(rss)
if (rss < min_rss):
min_rss = rss
best_l1_penalty = l1_penalty
validation_rss_avg_list.append(rss)
print "Best L1 penalty " + str(best_l1_penalty) + " validation rss = " + str(min_rss)
validation_rss_avg_list
np.logspace(1, 7, num=13)
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
When you call linear_regression.create() make sure you set validation_set = None.
Note: you can turn off the print out of linear_regression.create() with verbose = False
End of explanation
best_l1_penalty
model_best = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=best_l1_penalty, verbose=False)
Explanation: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
End of explanation
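For the second quiz question, here is a minimal, illustrative sketch of computing the RSS on the TEST data with the model trained above (it mirrors the validation-RSS code and assumes the testing split created earlier; it is an added sketch, not part of the original assignment code):
test_predictions = model_best.predict(testing[all_features])
test_errors = test_predictions - testing['price']
rss_test = (test_errors * test_errors).sum()
print "RSS on TEST data = " + str(rss_test)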
len(model_best.get('coefficients')[model_best.get('coefficients')['value'] != 0.0])
Explanation: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
End of explanation
max_nonzeros = 7
Explanation: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two-phase procedure to achieve this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
l1_penalty_values = np.logspace(8, 10, num=20)
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
nnz_list = []
for l1_penalty in np.logspace(8, 10, num=20):
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=l1_penalty, verbose=False)
# extract number of nnz
nnz = model['coefficients']['value'].nnz()
print "L1 penalty " + str(l1_penalty) + " : # nnz = " + str(nnz)
nnz_list.append(nnz)
nnz_list
Explanation: Now, implement a loop that searches through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
End of explanation
l1_penalty_min = 2976351441.63
l1_penalty_max = 3792690190.73
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzeros (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzeros (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
Hint: there are many ways to do this, e.g.:
* Programmatically within the loop above
* Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
End of explanation
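As a hedged illustration of the programmatic route mentioned in the hint above (it assumes the nnz_list filled by the previous loop and re-creates the same np.logspace(8, 10, num=20) grid; an added sketch, not the original solution):
l1_penalty_grid = np.logspace(8, 10, num=20)
# largest penalty that still leaves MORE than max_nonzeros non-zero weights
l1_penalty_min = max(p for p, nnz in zip(l1_penalty_grid, nnz_list) if nnz > max_nonzeros)
# smallest penalty that already leaves FEWER than max_nonzeros non-zero weights
l1_penalty_max = min(p for p, nnz in zip(l1_penalty_grid, nnz_list) if nnz < max_nonzeros)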
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
Explanation: QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
nnz_list = []
validation_rss_avg_list = []
best_l1_penalty = 1
min_rss = float("inf")
import numpy as np
for l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=l1_penalty, verbose=False)
# find validation error
prediction = model.predict(validation[all_features])
error = prediction - validation['price']
error_squared = error * error
rss = error_squared.sum()
print "L1 penalty " + str(l1_penalty) + " validation rss = " + str(rss)
# extract number of nnz
nnz = model['coefficients']['value'].nnz()
print "L1 penalty " + str(l1_penalty) + " : # nnz = " + str(nnz)
nnz_list.append(nnz)
print "----------------------------------------------------------"
if (nnz == max_nonzeros and rss < min_rss):
min_rss = rss
best_l1_penalty = l1_penalty
validation_rss_avg_list.append(rss)
print "Best L1 penalty " + str(best_l1_penalty) + " validation rss = " + str(min_rss)
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Measure the RSS of the learned model on the VALIDATION set
Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros.
End of explanation
model_best = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=best_l1_penalty, verbose=False)
model_best.get('coefficients')[model_best.get('coefficients')['value'] != 0.0]
Explanation: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients?
End of explanation |
3,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Onset detection
In this tutorial, we will look at how to perform onset detection and mark onset positions in the audio.
Onset detection consists of two steps
Step1: We can now listen to the resulting audio files to see which of the two onset detection functions works better for our audio example.
Step2: Finally, let's plot the onset detection functions we computed and the audio with onsets marked by vertical lines. Inspecting these plots, we can easily see how the hfc method picked up the hi-hats, while the complex method also detected the kicks. | Python Code:
from essentia.standard import *
from tempfile import TemporaryDirectory
# Load audio file.
audio = MonoLoader(filename='../../../test/audio/recorded/hiphop.mp3')()
# 1. Compute the onset detection function (ODF).
# The OnsetDetection algorithm provides various ODFs.
od_hfc = OnsetDetection(method='hfc')
od_complex = OnsetDetection(method='complex')
# We need the auxiliary algorithms to compute magnitude and phase.
w = Windowing(type='hann')
fft = FFT() # Outputs a complex FFT vector.
c2p = CartesianToPolar() # Converts it into a pair of magnitude and phase vectors.
# Compute both ODF frame by frame. Store results to a Pool.
pool = essentia.Pool()
for frame in FrameGenerator(audio, frameSize=1024, hopSize=512):
magnitude, phase = c2p(fft(w(frame)))
pool.add('odf.hfc', od_hfc(magnitude, phase))
pool.add('odf.complex', od_complex(magnitude, phase))
# 2. Detect onset locations.
onsets = Onsets()
onsets_hfc = onsets(# This algorithm expects a matrix, not a vector.
essentia.array([pool['odf.hfc']]),
# You need to specify weights, but if we use only one ODF
# it doesn't actually matter which weight to give it
[1])
onsets_complex = onsets(essentia.array([pool['odf.complex']]), [1])
# Add onset markers to the audio and save it to a file.
# We use beeps instead of white noise and stereo signal as it's more distinctive.
# We want to keep beeps in a separate audio channel.
# Add them to a silent audio and use the original audio as another channel. Mux both into a stereo signal.
silence = [0.] * len(audio)
beeps_hfc = AudioOnsetsMarker(onsets=onsets_hfc, type='beep')(silence)
beeps_complex = AudioOnsetsMarker(onsets=onsets_complex, type='beep')(silence)
audio_hfc = StereoMuxer()(audio, beeps_hfc)
audio_complex = StereoMuxer()(audio, beeps_complex)
# Write audio to files in a temporary directory.
temp_dir = TemporaryDirectory()
AudioWriter(filename=temp_dir.name + '/hiphop_onsets_hfc_stereo.mp3', format='mp3')(audio_hfc)
AudioWriter(filename=temp_dir.name + '/hiphop_onsets_complex_stereo.mp3', format='mp3')(audio_complex)
Explanation: Onset detection
In this tutorial, we will look at how to perform onset detection and mark onset positions in the audio.
Onset detection consists of two steps:
Compute an onset detection function (ODF). ODFs describe changes in the audio signal capturing frame-to-frame spectral energy or phase differences. The peaks of an ODF correspond to abrupt changes, and they may represent occurring onsets.
Decide onset locations in the signal based on the peaks in the computed ODF. Depending on your application, you can try combining different ODFs for more refined results.
OnsetDetection estimates various ODFs for an audio frame given its spectrum. It should be called iteratively on consecutive frames, one by one, as it remembers the previously seen frame to compute the difference. OnsetDetectionGlobal allows computing a few more ODFs, and instead it works on the entire audio signal as an input.
Onsets detects onsets given a matrix with ODF values in each frame. It can be used with a single or multiple ODFs.
In case you want to sonify detected onsets, use AudioOnsetsMarker to add beeps or pulses to the mono audio at onset positions. Alternatively, we can store both the original sound and the beeps in a stereo signal putting them separately into left and right channels using StereoMuxer. It is useful when you want to avoid masking the audio with the added markers (e.g., added beeps are masking hi-hats).
To save the audio to file, use MonoWriter or AudioWriter.
Let's use two ODFs as an example and compare the detected onsets.
End of explanation
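As a small, hedged illustration of the multi-ODF option mentioned above (it reuses the pool computed earlier; the equal weights are an arbitrary choice, not a tuned setting):
onsets_combined = onsets(essentia.array([pool['odf.hfc'], pool['odf.complex']]), [1, 1])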
import IPython
IPython.display.Audio('../../../test/audio/recorded/hiphop.mp3')
IPython.display.Audio(temp_dir.name + '/hiphop_onsets_hfc_stereo.mp3')
IPython.display.Audio(temp_dir.name + '/hiphop_onsets_complex_stereo.mp3')
Explanation: We can now listen to the resulting audio files to see which of the two onset detection functions works better for our audio example.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import numpy
n_frames = len(pool['odf.hfc'])
frames_position_samples = numpy.array(range(n_frames)) * 512
fig, ((ax1, ax2, ax3, ax4)) = plt.subplots(4, 1, sharex=True, sharey=False, figsize=(15, 16))
ax1.set_title('HFC ODF')
ax1.plot(frames_position_samples, pool['odf.hfc'], color='magenta')
ax2.set_title('Complex ODF')
ax2.plot(frames_position_samples, pool['odf.complex'], color='red')
ax3.set_title('Audio waveform and the estimated onset positions (HFC ODF)')
ax3.plot(audio)
for onset in onsets_hfc:
ax3.axvline(x=onset*44100, color='magenta')
ax4.set_title('Audio waveform and the estimated onset positions (complex ODF)')
ax4.plot(audio)
for onset in onsets_complex:
ax4.axvline(x=onset*44100, color='red')
Explanation: Finally, let's plot the onset detection functions we computed and the audio with onsets marked by vertical lines. Inspecting these plots, we can easily see how the hfc method picked up the hi-hats, while the complex method also detected the kicks.
End of explanation |
3,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a sketch for Adversarial images in MNIST
Step1: recreate the network structure
Step2: Load previous model
Step3: Extract some "2" images from test set
Step4: one Adversarial vs one image
Step5: Method 1
Step6: Method 2
Step7: Take a look at individual image | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)
import seaborn as sns
sns.set_style('white')
colors_list = sns.color_palette("Paired", 10)
Explanation: This is a sketch for Adversarial images in MNIST
End of explanation
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
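# keep_prob is the dropout keep-probability placeholder; it is fed as 1.0 further below
# whenever the network is evaluated rather than trained.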
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
y_pred = tf.nn.softmax(y_conv)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
Explanation: recreate the network structure
End of explanation
model_path = './MNIST.ckpt'
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
tf.train.Saver().restore(sess, model_path)
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: Load previous model
End of explanation
index_mask = np.where(mnist.test.labels[:, 2])[0]
subset_mask = np.random.choice(index_mask, 10)
subset_mask
origin_images = mnist.test.images[subset_mask]
origin_labels = mnist.test.labels[subset_mask]
origin_labels
prediction=tf.argmax(y_pred,1)
prediction_val = prediction.eval(feed_dict={x: origin_images, keep_prob: 1.0}, session=sess)
print("predictions", prediction_val)
probabilities=y_pred
probabilities_val = probabilities.eval(feed_dict={x: origin_images, keep_prob: 1.0}, session=sess)
print ("probabilities", probabilities_val)
for i in range(0, 10):
print('correct label:', np.argmax(origin_labels[i]))
print('predict label:', prediction_val[i])
print('Confidence:', np.max(probabilities_val[i]))
plt.figure(figsize=(2, 2))
plt.axis('off')
plt.imshow(origin_images[i].reshape([28, 28]), interpolation=None, cmap=plt.cm.gray)
plt.show()
target_number = 6
target_labels = np.zeros(origin_labels.shape)
target_labels[:, target_number] = 1
origin_labels
target_labels
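# Gradient of the cross-entropy loss with respect to the input pixels;
# this is the signal used below to craft the adversarial perturbations.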
img_gradient = tf.gradients(cross_entropy, x)[0]
Explanation: Extract some "2" images from test set
End of explanation
eta = 0.5
iter_num = 10
Explanation: one Adversarial vs one image
End of explanation
adversarial_img = origin_images.copy()
for i in range(0, iter_num):
gradient = img_gradient.eval({x: adversarial_img, y_: target_labels, keep_prob: 1.0})
adversarial_img = adversarial_img - eta * gradient
prediction=tf.argmax(y_pred,1)
prediction_val = prediction.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
print("predictions", prediction_val)
probabilities=y_pred
probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
print('Confidence 2:', probabilities_val[:, 2])
print('Confidence 6:', probabilities_val[:, 6])
print('-----------------------------------')
Explanation: Method 1: update using the info in gradient
This means we will update the image based on the value of the gradient; ideally, this will give us an adversarial image with less overall wiggle, since only a little wiggle needs to be added at pixels where the gradient is large.
End of explanation
eta = 0.02
iter_num = 10
adversarial_img = origin_images.copy()
for i in range(0, iter_num):
gradient = img_gradient.eval({x: adversarial_img, y_: target_labels, keep_prob: 1.0})
adversarial_img = adversarial_img - eta * np.sign(gradient)
prediction=tf.argmax(y_pred,1)
prediction_val = prediction.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
print("predictions", prediction_val)
probabilities=y_pred
probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
print('Confidence 2:', probabilities_val[:, 2])
print('Confidence 6:', probabilities_val[:, 6])
print('-----------------------------------')
Explanation: Method 2: update using the sign of gradient
apply the same fixed step size to every pixel (only the sign of the gradient is used)
End of explanation
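For reference, the two update rules used in this notebook can be written as (x: input image, L: cross-entropy toward the target label, eta: step size):
Method 1: x <- x - eta * dL/dx
Method 2: x <- x - eta * sign(dL/dx)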
threshold = 0.99
eta = 0.001
prediction=tf.argmax(y_pred,1)
probabilities=y_pred
adversarial_img = origin_images[1: 2].copy()
adversarial_label = target_labels[1: 2]
start_img = adversarial_img.copy()
confidence = 0
iter_num = 0
prob_history = list()
while confidence < threshold:
gradient = img_gradient.eval({x: adversarial_img, y_: adversarial_label, keep_prob: 1.0})
adversarial_img -= eta * np.sign(gradient)
probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
confidence = probabilities_val[:, 6]
prob_history.append(probabilities_val[0])
iter_num += 1
print(iter_num)
sns.set_style('whitegrid')
prob_history = np.array(prob_history)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
for i, record in enumerate(prob_history.T):
plt.plot(record, color=colors_list[i])
ax.legend([str(x) for x in range(0, 10)],
loc='center left', bbox_to_anchor=(1.05, 0.5), fontsize=14)
ax.set_xlabel('Iteration')
ax.set_ylabel('Prediction Confidence')
sns.set_style('white')
fig = plt.figure(figsize=(9, 4))
ax1 = fig.add_subplot(1,3,1)
ax1.axis('off')
ax1.imshow(start_img.reshape([28, 28]), interpolation=None, cmap=plt.cm.gray)
ax1.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[0][2])
+ '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[0][6]))
ax2 = fig.add_subplot(1,3,2)
ax2.axis('off')
ax2.imshow((adversarial_img - start_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray)
ax2.title.set_text('Delta')
ax3 = fig.add_subplot(1,3,3)
ax3.axis('off')
ax3.imshow((adversarial_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray)
ax3.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[-1][2])
+ '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[-1][6]))
plt.show()
print("Difference Measure:", np.sum((adversarial_img - start_img) ** 2))
eta = 0.01
prediction=tf.argmax(y_pred,1)
probabilities=y_pred
adversarial_img = origin_images[1: 2].copy()
adversarial_label = target_labels[1: 2]
start_img = adversarial_img.copy()
confidence = 0
iter_num = 0
prob_history = list()
while confidence < threshold:
gradient = img_gradient.eval({x: adversarial_img, y_: adversarial_label, keep_prob: 1.0})
adversarial_img -= eta * gradient
probabilities_val = probabilities.eval(feed_dict={x: adversarial_img, keep_prob: 1.0}, session=sess)
confidence = probabilities_val[:, 6]
prob_history.append(probabilities_val[0])
iter_num += 1
print(iter_num)
sns.set_style('white')
fig = plt.figure(figsize=(9, 4))
ax1 = fig.add_subplot(1,3,1)
ax1.axis('off')
ax1.imshow(start_img.reshape([28, 28]), interpolation=None, cmap=plt.cm.gray)
ax1.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[0][2])
+ '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[0][6]))
ax2 = fig.add_subplot(1,3,2)
ax2.axis('off')
ax2.imshow((adversarial_img - start_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray)
ax2.title.set_text('Delta')
ax3 = fig.add_subplot(1,3,3)
ax3.axis('off')
ax3.imshow((adversarial_img).reshape([28, 28]), interpolation=None, cmap=plt.cm.gray)
ax3.title.set_text('Confidence for 2: ' + '{:.4f}'.format(prob_history[-1][2])
+ '\nConfidence for 6: ' + '{:.4f}'.format(prob_history[-1][6]))
plt.show()
print("Difference Measure:", np.sum((adversarial_img - start_img) ** 2))
sns.set_style('whitegrid')
prob_history = np.array(prob_history)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
for i, record in enumerate(prob_history.T):
plt.plot(record, color=colors_list[i])
ax.legend([str(x) for x in range(0, 10)],
loc='center left', bbox_to_anchor=(1.05, 0.5), fontsize=14)
ax.set_xlabel('Iteration')
ax.set_ylabel('Prediction Confidence')
Explanation: Take a look at individual image
End of explanation |
3,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython extension for drawing circuit diagrams with LaTeX/Circuitikz
Robert Johansson
http
Step1: Load the extension
Step2: Example
Step3: Example | Python Code:
%install_ext http://raw.github.com/jrjohansson/ipython-circuitikz/master/circuitikz.py
Explanation: IPython extension for drawing circuit diagrams with LaTeX/Circuitikz
Robert Johansson
http://github.com/jrjohansson/ipython-circuitikz
Requirements
This IPython magic command uses the following external dependencies: pdflatex, pdfcrop, the Circuitikz package and
for PNG output: convert from ImageMagick
for SVG output: pdf2svg
Installation
End of explanation
%reload_ext circuitikz
Explanation: Load the extension
End of explanation
%%circuitikz filename=squid dpi=125
\begin{circuitikz}[scale=1]
\draw ( 0, 0) [short, *-] node[anchor=south] {$\Phi_J$} to (0, -1);
% right
\draw ( 0, -1) to (2, -1) to node[anchor=west] {$\Phi_{J}^2$} (2, -2) to (3, -2)
to [barrier, l=$E_J^2$] (3, -4) to (2, -4)to (2, -5) to (0, -5) node[ground] {};
\draw ( 2, -2) to (1, -2) to [capacitor, l=$C_J^2$] (1, -4) to (1, -4) to (2, -4);
% left
\draw ( 0, -1) to (-2, -1) to node[anchor=west] {$\Phi_{J}^1$} (-2, -2) to (-3, -2)
to [capacitor, l=$C_J^1$] (-3, -4) to (-2, -4) to (-2, -5) to (0, -5);
\draw (-2, -2) to (-1, -2) to [barrier, l=$E_J^1$] (-1, -4) to (-1, -4) to (-2, -4);
\end{circuitikz}
Explanation: Example: SQUID
End of explanation
%%circuitikz filename=tm dpi=150
\begin{circuitikz}[scale=1.25]
\draw (-1,0) node[anchor=east] {} to [short, *-*] (1,0);
\draw (-1,2) node[anchor=east] {} to [inductor, *-*, l=$\Delta x L$] (1,2);
\draw (-1,0) to [open, l=$\cdots$] (-1,2);
\draw (3, 0) to (1, 0) to [capacitor, l=$\Delta x C$, *-*] (1, 2) to [inductor, *-*, l=$\Delta x L$] (3, 2);
\draw (5, 0) to (3, 0) to [capacitor, l=$\Delta x C$, *-*] (3, 2) to [inductor, *-*, l=$\Delta x L$] (5, 2);
\draw (7, 0) to (5, 0) to [capacitor, l=$\Delta x C$, *-*] (5, 2) to [inductor, *-*, l=$\Delta x L$] (7, 2);
\draw (9, 0) to (7, 0) to [capacitor, l=$\Delta x C$, *-*] (7, 2) to [inductor, *-*, l=$\Delta x L$] (9, 2);
\draw (9,0) node[anchor=east] {} to [short, *-*] (9,0);
\draw (10,0) to [open, l=$\cdots$] (10,2);
\end{circuitikz}
Explanation: Example: Transmission line
End of explanation |
3,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear models with CNN features
Step1: Introduction
We need to find a way to convert the imagenet predictions to a probability of being a cat or a dog, since that is what the Kaggle competition requires us to submit. We could use the imagenet hierarchy to download a list of all the imagenet categories in each of the dog and cat groups, and could then solve our problem in various ways, such as
Step2: Linear models in keras
It turns out that each of the Dense() layers is just a linear model, followed by a simple activation function. We'll learn about the activation function later - first, let's review how linear models work.
A linear model is (as I'm sure you know) simply a model where each row is calculated as sum(row * weights), where weights needs to be learnt from the data, and will be the same for every row. For example, let's create some data that we know is linearly related
Step3: We can use keras to create a simple linear model (Dense() - with no activation - in Keras) and optimize it using SGD to minimize mean squared error (mse)
Step4: (See the Optim Tutorial notebook and associated Excel spreadsheet to learn all about SGD and related optimization algorithms.)
This has now learnt internal weights inside the lm model, which we can use to evaluate the loss function (MSE).
Step5: And, of course, we can also take a look at the weights - after fitting, we should see that they are close to the weights we used to calculate y (2.0, 3.0, and 1.0).
Step6: Train linear model on predictions
Using a Dense() layer in this way, we can easily convert the 1,000 predictions given by our model into a probability of dog vs cat--simply train a linear model to take the 1,000 predictions as input, and return dog or cat as output, learning from the Kaggle data. This should be easier and more accurate than manually creating a map from imagenet categories to one dog/cat category.
Training the model
We start with some basic config steps. We copy a small amount of our data into a 'sample' directory, with the exact same structure as our 'train' directory--this is always a good idea in all machine learning, since we should do all of our initial testing using a dataset small enough that we never have to wait for it.
Step7: We will process as many images at a time as our graphics card allows. This is a case of trial and error to find the max batch size - the largest size that doesn't give an out of memory error.
Step8: We need to start with our VGG 16 model, since we'll be using its predictions and features.
Step9: Our overall approach here will be
Step10: Loading and resizing the images every time we want to use them isn't necessary - instead we should save the processed arrays. By far the fastest way to save and load numpy arrays is using bcolz. This also compresses the arrays, so we save disk space. Here are the functions we'll use to save and load using bcolz.
Step11: We have provided a simple function that joins the arrays from all the batches - let's use this to grab the training and validation data
Step12: We can load our training and validation data later without recalculating them
Step13: Keras returns classes as a single column, so we convert to one hot encoding
Step14: ...and their 1,000 imagenet probabilties from VGG16--these will be the features for our linear model
Step15: We can load our training and validation features later without recalculating them
Step16: Now we can define our linear model, just like we did earlier
Step17: We're ready to fit the model!
Step18: Viewing model prediction examples
Keras' fit() function conveniently shows us the value of the loss function, and the accuracy, after every epoch ("epoch" refers to one full run through all training examples). The most important metrics for us to look at are for the validation set, since we want to check for over-fitting.
Tip
Step19: Get the filenames for the validation set, so we can view images
Step20: Helper function to plot images by index in the validation set
Step21: Perhaps the most common way to analyze the result of a classification model is to use a confusion matrix. Scikit-learn has a convenient function we can use for this purpose
Step22: We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for dependents with a larger number of categories).
Step23: About activation functions
Do you remember how we defined our linear model? Here it is again for reference
Step24: Careful! Now that we've modified the definition of model, be careful not to rerun any code in the previous sections, without first recreating the model from scratch! (Yes, I made that mistake myself, which is why I'm warning you about it now...)
Now we're ready to add our new final layer...
Step25: ...and compile our updated model, and set up our batches to use the preprocessed images (note that now we will also shuffle the training batches, to add more randomness when using multiple epochs)
Step26: We'll define a simple function for fitting models, just to save a little typing...
Step27: ...and now we can use it to train the last layer of our model!
(It runs quite slowly, since it still has to calculate all the previous layers in order to know what input to pass to the new final layer. We could precalculate the output of the penultimate layer, like we did for the final layer earlier - but since we're only likely to want one or two iterations, it's easier to follow this alternative approach.)
Step28: Before moving on, go back and look at how little code we had to write in this section to finetune the model. Because this is such an important and common operation, keras is set up to make it as easy as possible. We didn't even have to use any external helper functions in this section.
It's a good idea to save weights of all your models, so you can re-use them later. Be sure to note the git log number of your model when keeping a research journal of your results.
Step29: We can look at the earlier prediction examples visualizations by redefining probs and preds and re-using our earlier code.
Step30: Retraining more layers
Now that we've fine-tuned the new final layer, can we, and should we, fine-tune all the dense layers? The answer to both questions, it turns out, is
Step31: The key insight is that the stacking of linear functions and non-linear activations we learnt about in the last section is simply defining a function of functions (of functions, of functions...). Each layer is taking the output of the previous layer's function, and using it as input into its function. Therefore, we can calculate the derivative at any layer by simply multiplying the gradients of that layer and all of its following layers together! This use of the chain rule to allow us to rapidly calculate the derivatives of our model at any layer is referred to as back propagation.
The good news is that you'll never have to worry about the details of this yourself, since libraries like Theano and Tensorflow (and therefore wrappers like Keras) provide automatic differentiation (or AD). TODO
Training multiple layers in Keras
The code below will work on any model that contains dense layers; it's not just for this VGG model.
NB
Step32: Since we haven't changed our architecture, there's no need to re-compile the model - instead, we just set the learning rate. Since we're training more layers, and since we've already optimized the last layer, we should use a lower learning rate than previously.
Step33: This is an extraordinarily powerful 5 lines of code. We have fine-tuned all of our dense layers to be optimized for our specific data set. This kind of technique has only become accessible in the last year or two - and we can already do it in just 5 lines of python!
Step34: There's generally little room for improvement in training the convolutional layers, if you're using the model on natural images (as we are). However, there's no harm trying a few of the later conv layers, since it may give a slight improvement, and can't hurt (and we can always load the previous weights if the accuracy decreases).
Step35: You can always load the weights later and use the model to do whatever you need | Python Code:
# Rather than importing everything manually, we'll make things easy
# and load them all in utils.py, and just import them from there.
%matplotlib inline
import utils; reload(utils)
from utils import *
Explanation: Linear models with CNN features
End of explanation
%matplotlib inline
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
import scipy
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import confusion_matrix
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
import utils; reload(utils)
from utils import plots, get_batches, plot_confusion_matrix, get_data
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential
from keras.layers import Input
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
Explanation: Introduction
We need to find a way to convert the imagenet predictions to a probability of being a cat or a dog, since that is what the Kaggle competition requires us to submit. We could use the imagenet hierarchy to download a list of all the imagenet categories in each of the dog and cat groups, and could then solve our problem in various ways, such as:
Finding the largest probability that's either a cat or a dog, and using that label
Averaging the probability of all the cat categories and comparing it to the average of all the dog categories.
But these approaches have some downsides:
They require manual coding for something that we should be able to learn from the data
They ignore information available in the predictions; for instance, if the model predicts that there is a bone in the image, it's more likely to be a dog than a cat.
A very simple solution to both of these problems is to learn a linear model that is trained using the 1,000 predictions from the imagenet model for each image as input, and the dog/cat label as target.
End of explanation
x = random((30,2))
y = np.dot(x, [2., 3.]) + 1.
x[:5]
y[:5]
Explanation: Linear models in keras
It turns out that each of the Dense() layers is just a linear model, followed by a simple activation function. We'll learn about the activation function later - first, let's review how linear models work.
A linear model is (as I'm sure you know) simply a model where each row is calculated as sum(row * weights), where weights needs to be learnt from the data, and will be the same for every row. For example, let's create some data that we know is linearly related:
End of explanation
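As a quick, illustrative sanity check of that definition on the data we just generated (an added snippet, not part of the original notebook):
# y was built as 2*x0 + 3*x1 + 1, so every row should satisfy it exactly
assert np.allclose(y[0], 2.*x[0][0] + 3.*x[0][1] + 1.)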
lm = Sequential([ Dense(1, input_shape=(2,)) ])
lm.compile(optimizer=SGD(lr=0.1), loss='mse')
Explanation: We can use keras to create a simple linear model (Dense() - with no activation - in Keras) and optimize it using SGD to minimize mean squared error (mse):
End of explanation
lm.evaluate(x, y, verbose=0)
lm.fit(x, y, nb_epoch=5, batch_size=1)
lm.evaluate(x, y, verbose=0)
Explanation: (See the Optim Tutorial notebook and associated Excel spreadsheet to learn all about SGD and related optimization algorithms.)
This has now learnt internal weights inside the lm model, which we can use to evaluate the loss function (MSE).
End of explanation
lm.get_weights()
Explanation: And, of course, we can also take a look at the weights - after fitting, we should see that they are close to the weights we used to calculate y (2.0, 3.0, and 1.0).
End of explanation
path = "data/dogscats/sample/"
# path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
Explanation: Train linear model on predictions
Using a Dense() layer in this way, we can easily convert the 1,000 predictions given by our model into a probability of dog vs cat--simply train a linear model to take the 1,000 predictions as input, and return dog or cat as output, learning from the Kaggle data. This should be easier and more accurate than manually creating a map from imagenet categories to one dog/cat category.
Training the model
We start with some basic config steps. We copy a small amount of our data into a 'sample' directory, with the exact same structure as our 'train' directory--this is always a good idea in all machine learning, since we should do all of our initial testing using a dataset small enough that we never have to wait for it.
End of explanation
# batch_size=100
batch_size=4
Explanation: We will process as many images at a time as our graphics card allows. This is a case of trial and error to find the max batch size - the largest size that doesn't give an out of memory error.
End of explanation
from vgg16 import Vgg16
vgg = Vgg16()
model = vgg.model
Explanation: We need to start with our VGG 16 model, since we'll be using its predictions and features.
End of explanation
# Use batch size of 1 since we're just doing preprocessing on the CPU
val_batches = get_batches(path+'valid', shuffle=False, batch_size=1)
batches = get_batches(path+'train', shuffle=False, batch_size=1)
Explanation: Our overall approach here will be:
Get the true labels for every image
Get the 1,000 imagenet category predictions for every image
Feed these predictions as input to a simple linear model.
Let's start by grabbing training and validation batches.
End of explanation
import bcolz
def save_array(fname, arr): c=bcolz.carray(arr, rootdir=fname, mode='w'); c.flush()
def load_array(fname): return bcolz.open(fname)[:]
Explanation: Loading and resizing the images every time we want to use them isn't necessary - instead we should save the processed arrays. By far the fastest way to save and load numpy arrays is using bcolz. This also compresses the arrays, so we save disk space. Here are the functions we'll use to save and load using bcolz.
End of explanation
val_data = get_data(path+'valid')
trn_data = get_data(path+'train')
trn_data.shape
save_array(model_path+'train_data.bc', trn_data)
save_array(model_path+'valid_data.bc', val_data)
Explanation: We have provided a simple function that joins the arrays from all the batches - let's use this to grab the training and validation data:
End of explanation
trn_data = load_array(model_path+'train_data.bc')
val_data = load_array(model_path+'valid_data.bc')
val_data.shape
Explanation: We can load our training and validation data later without recalculating them:
End of explanation
def onehot(x): return np.array(OneHotEncoder().fit_transform(x.reshape(-1,1)).todense())
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
trn_labels.shape
trn_classes[:4]
trn_labels[:4]
Explanation: Keras returns classes as a single column, so we convert to one hot encoding
End of explanation
trn_features = model.predict(trn_data, batch_size=batch_size)
val_features = model.predict(val_data, batch_size=batch_size)
trn_features.shape
save_array(model_path+'train_lastlayer_features.bc', trn_features)
save_array(model_path+'valid_lastlayer_features.bc', val_features)
Explanation: ...and their 1,000 imagenet probabilities from VGG16--these will be the features for our linear model:
End of explanation
trn_features = load_array(model_path+'train_lastlayer_features.bc')
val_features = load_array(model_path+'valid_lastlayer_features.bc')
Explanation: We can load our training and validation features later without recalculating them:
End of explanation
# 1000 inputs, since that's the saved features, and 2 outputs, for dog and cat
lm = Sequential([ Dense(2, activation='softmax', input_shape=(1000,)) ])
lm.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy', metrics=['accuracy'])
Explanation: Now we can define our linear model, just like we did earlier:
End of explanation
batch_size=64
batch_size=4
lm.fit(trn_features, trn_labels, nb_epoch=3, batch_size=batch_size,
validation_data=(val_features, val_labels))
lm.summary()
Explanation: We're ready to fit the model!
End of explanation
# We want both the classes...
preds = lm.predict_classes(val_features, batch_size=batch_size)
# ...and the probabilities of being a cat
probs = lm.predict_proba(val_features, batch_size=batch_size)[:,0]
probs[:8]
preds[:8]
Explanation: Viewing model prediction examples
Keras' fit() function conveniently shows us the value of the loss function, and the accuracy, after every epoch ("epoch" refers to one full run through all training examples). The most important metrics for us to look at are for the validation set, since we want to check for over-fitting.
Tip: with our first model we should try to overfit before we start worrying about how to handle that - there's no point even thinking about regularization, data augmentation, etc if you're still under-fitting! (We'll be looking at these techniques shortly).
As well as looking at the overall metrics, it's also a good idea to look at examples of each of:
1. A few correct labels at random
2. A few incorrect labels at random
3. The most correct labels of each class (ie those with highest probability that are correct)
4. The most incorrect labels of each class (ie those with highest probability that are incorrect)
5. The most uncertain labels (ie those with probability closest to 0.5).
Let's see what we, if anything, we can from these (in general, these are particularly useful for debugging problems in the model; since this model is so simple, there may not be too much to learn at this stage.)
Calculate predictions on validation set, so we can find correct and incorrect examples:
End of explanation
filenames = val_batches.filenames
# Number of images to view for each visualization task
n_view = 4
Explanation: Get the filenames for the validation set, so we can view images:
End of explanation
def plots_idx(idx, titles=None):
plots([image.load_img(path + 'valid/' + filenames[i]) for i in idx], titles=titles)
#1. A few correct labels at random
correct = np.where(preds==val_labels[:,1])[0]
idx = permutation(correct)[:n_view]
plots_idx(idx, probs[idx])
#2. A few incorrect labels at random
incorrect = np.where(preds!=val_labels[:,1])[0]
idx = permutation(incorrect)[:n_view]
plots_idx(idx, probs[idx])
#3. The images we most confident were cats, and are actually cats
correct_cats = np.where((preds==0) & (preds==val_labels[:,1]))[0]
most_correct_cats = np.argsort(probs[correct_cats])[::-1][:n_view]
plots_idx(correct_cats[most_correct_cats], probs[correct_cats][most_correct_cats])
# as above, but dogs
correct_dogs = np.where((preds==1) & (preds==val_labels[:,1]))[0]
most_correct_dogs = np.argsort(probs[correct_dogs])[:n_view]
plots_idx(correct_dogs[most_correct_dogs], 1-probs[correct_dogs][most_correct_dogs])
#3. The images we were most confident were cats, but are actually dogs
incorrect_cats = np.where((preds==0) & (preds!=val_labels[:,1]))[0]
most_incorrect_cats = np.argsort(probs[incorrect_cats])[::-1][:n_view]
if len(most_incorrect_cats):
plots_idx(incorrect_cats[most_incorrect_cats], probs[incorrect_cats][most_incorrect_cats])
else:
print('No incorrect cats!')
#3. The images we were most confident were dogs, but are actually cats
incorrect_dogs = np.where((preds==1) & (preds!=val_labels[:,1]))[0]
most_incorrect_dogs = np.argsort(probs[incorrect_dogs])[:n_view]
if len(most_incorrect_dogs):
plots_idx(incorrect_dogs[most_incorrect_dogs], 1-probs[incorrect_dogs][most_incorrect_dogs])
else:
print('No incorrect dogs!')
#5. The most uncertain labels (ie those with probability closest to 0.5).
most_uncertain = np.argsort(np.abs(probs-0.5))
plots_idx(most_uncertain[:n_view], probs[most_uncertain])
Explanation: Helper function to plot images by index in the validation set:
End of explanation
cm = confusion_matrix(val_classes, preds)
Explanation: Perhaps the most common way to analyze the result of a classification model is to use a confusion matrix. Scikit-learn has a convenient function we can use for this purpose:
End of explanation
plot_confusion_matrix(cm, val_batches.class_indices)
Explanation: We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for dependent variables with a larger number of categories).
End of explanation
vgg.model.summary()
model.pop()
for layer in model.layers: layer.trainable=False
Explanation: About activation functions
Do you remember how we defined our linear model? Here it is again for reference:
python
lm = Sequential([ Dense(2, activation='softmax', input_shape=(1000,)) ])
And do you remember the definition of a fully connected layer in the original VGG?:
python
model.add(Dense(4096, activation='relu'))
You might we wondering, what's going on with that activation parameter? Adding an 'activation' parameter to a layer in Keras causes an additional function to be called after the layer is calculated. You'll recall that we had no such parameter in our most basic linear model at the start of this lesson - that's because a simple linear model has no activation function. But nearly all deep model layers have an activation function - specifically, a non-linear activation function, such as tanh, sigmoid (1/(1+exp(x))), or relu (max(0,x), called the rectified linear function). Why?
The reason for this is that if you stack purely linear layers on top of each other, then you just end up with a linear layer! For instance, if your first layer was 2*x, and your second was -2*x, then the combination is: -2*(2*x) = -4*x. If that's all we were able to do with deep learning, it wouldn't be very deep! But what if we added a relu activation after our first layer? Then the combination would be: -2 * max(0, 2*x). As you can see, that does not simplify to just a linear function like the previous example--and indeed we can stack as many of these on top of each other as we wish, to create arbitrarily complex functions.
And why would we want to do that? Because it turns out that such a stack of linear functions and non-linear activations can approximate any other function just as close as we want. So we can use it to model anything! This extraordinary insight is known as the universal approximation theorem. For a visual understanding of how and why this works, I strongly recommend you read Michael Nielsen's excellent interactive visual tutorial.
The last layer generally needs a different activation function to the other layers--because we want to encourage the last layer's output to be of an appropriate form for our particular problem. For instance, if our output is a one hot encoded categorical variable, we want our final layer's activations to add to one (so they can be treated as probabilities) and to have generally a single activation much higher than the rest (since with one hot encoding we have just a single 'one', and all other target outputs are zero). Our classication problems will always have this form, so we will introduce the activation function that has these properties: the softmax function. Softmax is defined as (for the i'th output activation): exp(x[i]) / sum(exp(x)).
I suggest you try playing with that function in a spreadsheet to get a sense of how it behaves.
We will see other activation functions later in this course - but relu (and minor variations) for intermediate layers and softmax for output layers will be by far the most common.
Modifying the model
Retrain last layer's linear model
Since the original VGG16 network's last layer is Dense (i.e. a linear model) it seems a little odd that we are adding an additional linear model on top of it. This is especially true since the last layer had a softmax activation, which is an odd choice for an intermediate layer--and by adding an extra layer on top of it, we have made it an intermediate layer. What if we just removed the original final layer and replaced it with one that we train for the purpose of distinguishing cats and dogs? It turns out that this is a good idea - as we'll see!
We start by removing the last layer, and telling Keras that we want to fix the weights in all the other layers (since we aren't looking to learn new parameters for those other layers).
End of explanation
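As a tiny, hedged illustration of the softmax activation discussed above (an added snippet, not part of the original notebook):
def softmax_demo(v):
    e = np.exp(v)
    return e / e.sum()
softmax_demo(np.array([1., 2., 3.]))  # outputs sum to 1, and the largest input gets the largest share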
model.add(Dense(2, activation='softmax'))
??vgg.finetune
Explanation: Careful! Now that we've modified the definition of model, be careful not to rerun any code in the previous sections, without first recreating the model from scratch! (Yes, I made that mistake myself, which is why I'm warning you about it now...)
Now we're ready to add our new final layer...
End of explanation
gen=image.ImageDataGenerator()
batches = gen.flow(trn_data, trn_labels, batch_size=batch_size, shuffle=True)
val_batches = gen.flow(val_data, val_labels, batch_size=batch_size, shuffle=False)
Explanation: ...and compile our updated model, and set up our batches to use the preprocessed images (note that now we will also shuffle the training batches, to add more randomness when using multiple epochs):
End of explanation
def fit_model(model, batches, val_batches, nb_epoch=1):
model.fit_generator(batches, samples_per_epoch=batches.n, nb_epoch=nb_epoch,
validation_data=val_batches, nb_val_samples=val_batches.n)
Explanation: We'll define a simple function for fitting models, just to save a little typing...
End of explanation
opt = RMSprop(lr=0.1)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
fit_model(model, batches, val_batches, nb_epoch=2)
Explanation: ...and now we can use it to train the last layer of our model!
(It runs quite slowly, since it still has to calculate all the previous layers in order to know what input to pass to the new final layer. We could precalculate the output of the penultimate layer, like we did for the final layer earlier - but since we're only likely to want one or two iterations, it's easier to follow this alternative approach.)
End of explanation
model.save_weights(model_path+'finetune1.h5')
model.load_weights(model_path+'finetune1.h5')
model.evaluate(val_data, val_labels)
Explanation: Before moving on, go back and look at how little code we had to write in this section to finetune the model. Because this is such an important and common operation, keras is set up to make it as easy as possible. We didn't even have to use any external helper functions in this section.
It's a good idea to save weights of all your models, so you can re-use them later. Be sure to note the git log number of your model when keeping a research journal of your results.
End of explanation
preds = model.predict_classes(val_data, batch_size=batch_size)
probs = model.predict_proba(val_data, batch_size=batch_size)[:,0]
probs[:8]
cm = confusion_matrix(val_classes, preds)
plot_confusion_matrix(cm, {'cat':0, 'dog':1})
Explanation: We can look at the earlier prediction examples visualizations by redefining probs and preds and re-using our earlier code.
End of explanation
# sympy lets us do symbolic differentiation (and much more!) in python
import sympy as sp
# we have to define our variables
x = sp.var('x')
# then we can request the derivative or any expression of that variable
pow(2*x,2).diff()
Explanation: Retraining more layers
Now that we've fine-tuned the new final layer, can we, and should we, fine-tune all the dense layers? The answer to both questions, it turns out, is: yes! Let's start with the "can we" question...
An introduction to back-propagation
The key to training multiple layers of a model, rather than just one, lies in a technique called "back-propagation" (or backprop to its friends). Backprop is one of the many cases where deep learning parlance creates a new word for something that already exists - in this case, backprop simply refers to calculating gradients using the chain rule. (But we will still introduce the deep learning terms during this course, since it's important to know them when reading about or discussing deep learning.)
As you (hopefully!) remember from high school, the chain rule is how you calculate the gradient of a "function of a function"--something of the form f(u), where u=g(x). For instance, let's say your function is pow((2*x), 2). Then u is 2*x, and f(u) is power(u, 2). The chain rule tells us that the derivative of this is simply the product of the derivatives of f() and g(). Using f'(x) to refer to the derivative, we can say that: f'(x) = f'(u) * g'(x) = 2*u * 2 = 2*(2*x) * 2 = 8*x.
Let's check our calculation:
End of explanation
layers = model.layers
# Get the index of the first dense layer...
first_dense_idx = [index for index,layer in enumerate(layers) if type(layer) is Dense][0]
# ...and set this and all subsequent layers to trainable
for layer in layers[first_dense_idx:]: layer.trainable=True
Explanation: The key insight is that the stacking of linear functions and non-linear activations we learnt about in the last section is simply defining a function of functions (of functions, of functions...). Each layer is taking the output of the previous layer's function, and using it as input into its function. Therefore, we can calculate the derivative at any layer by simply multiplying the gradients of that layer and all of its following layers together! This use of the chain rule to allow us to rapidly calculate the derivatives of our model at any layer is referred to as back propagation.
The good news is that you'll never have to worry about the details of this yourself, since libraries like Theano and Tensorflow (and therefore wrappers like Keras) provide automatic differentiation (or AD).
Training multiple layers in Keras
The code below will work on any model that contains dense layers; it's not just for this VGG model.
NB: Don't skip the step of fine-tuning just the final layer first, since otherwise you'll have one layer with random weights, which will cause the other layers to quickly move a long way from their optimized imagenet weights.
End of explanation
K.set_value(opt.lr, 0.01)
fit_model(model, batches, val_batches, 3)
Explanation: Since we haven't changed our architecture, there's no need to re-compile the model - instead, we just set the learning rate. Since we're training more layers, and since we've already optimized the last layer, we should use a lower learning rate than previously.
End of explanation
model.save_weights(model_path+'finetune2.h5')
Explanation: This is an extraordinarily powerful 5 lines of code. We have fine-tuned all of our dense layers to be optimized for our specific data set. This kind of technique has only become accessible in the last year or two - and we can already do it in just 5 lines of python!
End of explanation
for layer in layers[12:]: layer.trainable=True
K.set_value(opt.lr, 0.001)
fit_model(model, batches, val_batches, 4)
model.save_weights(model_path+'finetune3.h5')
Explanation: There's generally little room for improvement in training the convolutional layers, if you're using the model on natural images (as we are). However, there's no harm trying a few of the later conv layers, since it may give a slight improvement, and can't hurt (and we can always load the previous weights if the accuracy decreases).
End of explanation
model.load_weights(model_path+'finetune2.h5')
model.evaluate_generator(get_batches(path+'valid', gen, False, batch_size*2), val_batches.n)
Explanation: You can always load the weights later and use the model to do whatever you need:
End of explanation |
3,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
Step2: Dropout forward pass
In the file neural_network/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
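For reference, a minimal sketch of an inverted-dropout forward pass is shown below; it follows the (out, cache) convention used by the test cell, assumes p is the probability of keeping a unit, and is one possible implementation rather than the official solution.
import numpy as np
def dropout_forward_sketch(x, dropout_param):
    # If your assignment defines p as the drop probability, flip the comparison below.
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    if mode == 'train':
        # Inverted dropout: scale surviving activations at train time so that
        # no extra scaling is needed at test time.
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    else:
        mask = None
        out = x
    cache = (dropout_param, mask)
    return out, cache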
Step3: Dropout backward pass
In the file neural_network/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
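The matching backward pass is little more than re-applying the cached mask; the sketch below uses the same assumed conventions as the forward sketch above.
def dropout_backward_sketch(dout, cache):
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        dx = dout * mask   # gradient flows only through the units that were kept
    else:
        dx = dout          # test mode is the identity, so the gradient passes through
    return dx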
Step4: Fully-connected nets with Dropout
In the file neural_network/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
Step5: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples | Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from skynet.neural_network.classifiers.fc_net import *
from skynet.utils.data_utils import get_CIFAR10_data
from skynet.utils.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from skynet.solvers.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
Explanation: Dropout forward pass
In the file neural_network/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
End of explanation
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
Explanation: Dropout backward pass
In the file neural_network/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
Explanation: Fully-connected nets with Dropout
In the file neural_network/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
# Train two identical nets, one with dropout and one without
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation |
3,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Determining initial $T_{\rm eff}$ and luminosity for DMESTAR seed polytropes
Currently, we are having difficulty with models in the mass range of $0.14 M_{\odot}$ -- $0.22 M_{\odot}$ not converging after an initial relaxation. There are several potential candidates for why the models are not converging. The first is that FreeEOS is running with a set of plasma properties (pressure, temperature) that are outside of its typical working range. I suspect this is not the case, as lower mass models converge properly, despite having cooler temperatures. Other potential candidates are the seed luminosity and $T_{\rm eff}$ supplied to $\texttt{newpoly}$ for computation of an initial polytrope model that DMESTAR then relaxes before a full stellar evolution run. To test this idea, we can compare model properties for the seed polytropes with the final relaxed quantities determined by DMESTAR.
Step1: Current seed values
Scripts used to generate a new polytrope for DMESTAR models rely on a piece-wise function to generate an appropriate combination of $T_{\rm eff}$ and luminosity for a model based on the requested stellar mass and solar composition. That piece-wise function is
\begin{align}
\log(T) & = 3.64 & M \ge 3.9 \
\log(L) & = 0.2\cdot (M - 5.0) + 2.6 & \
& \
\log(T) & = -0.028\cdot M + 3.875 & 3.9 > M \ge 3.0 \
\log(L) & = 0.55 \cdot M + 0.1 & \
& \
\log(T) & = 0.039\cdot M + 3.5765 & 3.0 > M \ge 1.5 \
\log(L) & = 1.7 & \
& \
\log(T) & = 0.039\cdot M + 3.5765 & 1.5 > M \ge 0.23 \
\log(L) & = 0.85\cdot M + 0.4 & \
& \
\log(T) & = 0.614\cdot M + 3.3863 & 0.23 > M \
\log(L) & = -0.16877\cdot M - 0.117637 & \
\end{align}
While models with masses below $0.23 M$ are found to converge, the greatest issues occur right in the vicinity of the final piecewise condition. We can view this graphically,
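Written out as code, the same seed prescription is just a small helper (a sketch that transcribes the equations above; the function name seed_logT_logL is ours):
def seed_logT_logL(mass):
    # Seed log10(Teff) and log10(L/Lsun) for a given mass in Msun, per the piece-wise relation above.
    if mass >= 3.9:
        return 3.64, 0.2 * (mass - 5.0) + 2.6
    elif mass >= 3.0:
        return -0.028 * mass + 3.875, 0.55 * mass + 0.1
    elif mass >= 1.5:
        return 0.039 * mass + 3.5765, 1.7
    elif mass >= 0.23:
        return 0.039 * mass + 3.5765, 0.85 * mass + 0.4
    else:
        return 0.614 * mass + 3.3863, -0.16877 * mass - 0.117637
print(seed_logT_logL(0.20))   # e.g. the problematic regime just below the final break point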
Step2: Relaxed model values
We can compare the relationship(s) quoted above with model values for temperature and luminosity after the model has relaxed to a stable configuration. This takes only a couple time steps to achieve, so we will look at the model relationship during the third time step for all models with masses between 0.08 and 5.0 Msun. Models are taken from a recent study where we used the most up-to-date version of the Dartmouth models for young stars (Feiden 2016).
Step3: To select which model time step is most representative of a relaxed model, we can step through the first 50 iterations to find if there are any noticeable jumps in model properties.
Step4: We can now iterate through these filenames and save the third timestep to an array.
Step5: Plotting these two relations, we can compare against the function used to generate the polytrope seed model.
Step6: There are clear discrepancies, particularly in the low-mass regime. However, we note there are significant differences in relaxed effective temperatures starting around 1.5 solar masses. Luminosities tend to trace the relaxed models quite well until approximately 0.4 Msun. Since these are logarithmic values, noticeable differences are quite sizeable when it comes to model adjustments during runtime. It's quite likely that corrections will exceed tolerances in the allowed parameter adjustments during a model's evolution.
Effective temperature
Step7: Luminosity
Above 1.5 Msun, there appear to be very little deviations of the true model sequence from the initial seed model sequence. We can thus leave this parameteriztion alone. Below 1.5 Msun, we can alter the shape of the relationship down to 0.23 Msun. In addition, we can prescribe a new shape to the relationship for objects with masses below 0.23 Msun.
Step8: Implementation
These new fits better represent the relaxed models, but will they work when implemented as seed values? | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Determining initial $T_{\rm eff}$ and luminosity for DMESTAR seed polytropes
Currently, we are having difficulty with models in the mass range of $0.14 M_{\odot}$ -- $0.22 M_{\odot}$ not converging after an initial relaxation. There are several potential candidates for why the models are not converging. The first is that FreeEOS is running with a set of plasma properties (pressure, temperature) that are outside of its typical working range. I suspect this is not the case, as lower mass models converge properly, despite having cooler temperatures. Other potential candidates are the seed luminosity and $T_{\rm eff}$ supplied to $\texttt{newpoly}$ for computation of an initial polytrope model that DMESTAR then relaxes before a full stellar evolution run. To test this idea, we can compare model properties for the seed polytropes with the final relaxed quantities determined by DMESTAR.
End of explanation
fig, ax = plt.subplots(2, 1, figsize=(8, 8))
masses = np.arange(0.08, 5.0, 0.02)
# compute and plot temperature relationship
p1 = [3.64 for m in masses if m >= 3.9]
p2 = [-0.028*m + 3.875 for m in masses if 3.9 > m >= 3.0]
p3 = [0.039*m + 3.5765 for m in masses if 3.0 > m >= 0.23]
p4 = [0.614*m + 3.3863 for m in masses if m < 0.23]
tr = p4 + p3 + p2 + p1
ax[0].set_xlabel("initial mass [Msun]")
ax[0].set_ylabel("log(T / K)")
ax[0].plot(masses, tr, '-', c='#dc143c', lw=3)
# plot luminosity relationship
# compute and plot temperature relationship
p1 = [0.2*(m - 5.0) + 2.6 for m in masses if m >= 3.9]
p2 = [0.55*m + 0.1 for m in masses if 3.9 > m >= 3.0]
p3 = [1.7 for m in masses if 3.0 > m >= 1.5]
p4 = [0.85*m + 0.4 for m in masses if 1.5 > m >= 0.23]
p5 = [-0.16877*m - 0.117637 for m in masses if m < 0.23]
lr = p5 + p4 + p3 + p2 + p1
ax[1].set_xlabel("initial mass [Msun]")
ax[1].set_ylabel("log(L / Lsun)")
ax[1].plot(masses, lr, '-', c='#dc143c', lw=3)
Explanation: Current seed values
Scripts used to generate a new polytrope for DMESTAR models rely on a piece-wise function to generate an appropriate combination of $T_{\rm eff}$ and luminosity for a model based on the requested stellar mass and solar composition. That piece-wise function is
\begin{align}
\log(T) & = 3.64 & M \ge 3.9 \
\log(L) & = 0.2\cdot (M - 5.0) + 2.6 & \
& \
\log(T) & = -0.028\cdot M + 3.875 & 3.9 > M \ge 3.0 \
\log(L) & = 0.55 \cdot M + 0.1 & \
& \
\log(T) & = 0.039\cdot M + 3.5765 & 3.0 > M \ge 1.5 \
\log(L) & = 1.7 & \
& \
\log(T) & = 0.039\cdot M + 3.5765 & 1.5 > M \ge 0.23 \
\log(L) & = 0.85\cdot M + 0.4 & \
& \
\log(T) & = 0.614\cdot M + 3.3863 & 0.23 > M \
\log(L) & = -0.16877\cdot M - 0.117637 & \
\end{align}
While models with masses below $0.23 M$ are found to converge, the greatest issues occur right in the vicinity of the final piecewise condition. We can view this graphically,
End of explanation
model_directory = "../../papers/MagneticUpperSco/models/trk/std/"
# get all file names
from os import listdir
all_fnames = listdir(model_directory)
# sort out only those file names that end in .trk
fnames = [f for f in all_fnames if f[-4:] == ".trk"]
# sort numerically
fnames = sorted(fnames)
Explanation: Relaxed model values
We can compare the relationship(s) quoted above with model values for temperature and luminosity after the model has relaxed to a stable configuration. This takes only a couple time steps to achieve, so we will look at the model relationship during the third time step for all models with masses between 0.08 and 5.0 Msun. Models are taken from a recent study where we used the most up-to-date version of the Dartmouth models for young stars (Feiden 2016).
End of explanation
fig, ax = plt.subplots(2, 1, figsize=(8, 8))
model_props = np.empty((len(fnames), 3))
for j in range(0, 50):
for i, f in enumerate(fnames):
model_props[i, 0] = float(f[1:5])/1000.0
try:
trk = np.genfromtxt(model_directory + f, usecols=(0, 1, 2, 3))
except ValueError:
model_props[i, 1] = 0.0 # temperature
model_props[i, 2] = 0.0 # luminosity
continue
model_props[i, 1] = trk[j, 1] # temperature
model_props[i, 2] = trk[j, 3] # luminosity
ax[0].semilogx(model_props[:,0], model_props[:,1], '-', c='#008b8b', lw=3)
ax[1].semilogx(model_props[:,0], model_props[:,2], '-', c='#008b8b', lw=3)
Explanation: To select which model time step is most representative of a relaxed model, we can step through the first 50 iterations to find if there are any noticeable jumps in model properties.
End of explanation
model_props = np.empty((len(fnames), 3))
for i, f in enumerate(fnames):
model_props[i, 0] = float(f[1:5])/1000.0
try:
trk = np.genfromtxt(model_directory + f, usecols=(0, 1, 2, 3))
except ValueError:
model_props[i, 1] = 0.0 # temperature
model_props[i, 2] = 0.0 # luminosity
continue
model_props[i, 1] = trk[1, 1] # temperature
model_props[i, 2] = trk[1, 3] # luminosity
Explanation: We can now iterate through these filenames and save the third timestep to an array.
End of explanation
fig, ax = plt.subplots(2, 1, figsize=(8, 8))
masses = np.arange(0.08, 5.0, 0.02)
ax[0].set_xlabel("initial mass [Msun]")
ax[0].set_ylabel("log(T / K)")
ax[0].semilogx(model_props[:,0], model_props[:,1], '-', c='#008b8b', lw=3)
ax[0].semilogx(masses, tr, '-', c='#dc143c', lw=3)
ax[1].set_xlabel("initial mass [Msun]")
ax[1].set_ylabel("log(L / Lsun)")
ax[1].semilogx(model_props[:,0], model_props[:,2], '-', c='#008b8b', lw=3)
ax[1].semilogx(masses, lr, '-', c='#dc143c', lw=3)
Explanation: Plotting these two relations, we can compare against the function used to generate the polytrope seed model.
End of explanation
tp1 = np.array([line for line in model_props if line[0] < 0.23])
tp2 = np.array([line for line in model_props if 0.23 <= line[0] < 1.5])
tpoly1 = np.polyfit(tp1[:,0], tp1[:,1], 2)
tpoly2 = np.polyfit(tp2[:,0], tp2[:,1], 3)
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.semilogx(tp1[:,0], tp1[:,1], '-', c='#008b8b', lw=3)
ax.semilogx(tp2[:,0], tp2[:,1], '-', c='#008b8b', lw=3)
ax.semilogx(tp1[:,0], tpoly1[0]*tp1[:,0]**2 + tpoly1[1]*tp1[:,0] + tpoly1[2], '--', c='black', lw=3)
ax.semilogx(tp2[:,0], tpoly2[0]*tp2[:,0]**3 + tpoly2[1]*tp2[:,0]**2 + tpoly2[2]*tp2[:,0] + tpoly2[3], '--', c='black', lw=3)
Explanation: There are clear discrepancies, particularly in the low-mass regime. However, we note there are significant differences in relaxed effective temperatures starting around 1.5 solar masses. Luminosities tend to trace the relaxed models quite well until approximately 0.4 Msun. Since these are logarithmic values, noticeable differences are quite sizeable when it comes to model adjustments during runtime. It's quite likely that corrections will exceed tolerances in the allowed parameter adjustments during a model's evolution.
Effective temperature
End of explanation
p1 = np.array([line for line in model_props if line[0] < 0.23])
p2 = np.array([line for line in model_props if 0.23 <= line[0] < 1.5])
poly1 = np.polyfit(p1[:,0], p1[:,2], 2)
poly2 = np.polyfit(p2[:,0], p2[:,2], 2)
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.semilogx(p1[:,0], p1[:,2], '-', c='#008b8b', lw=3)
ax.semilogx(p2[:,0], p2[:,2], '-', c='#008b8b', lw=3)
ax.semilogx(p1[:,0], poly1[0]*p1[:,0]**2 + poly1[1]*p1[:,0] + poly1[2], '--', c='black', lw=3)
ax.semilogx(p2[:,0], poly2[0]*p2[:,0]**2 + poly2[1]*p2[:,0] + poly2[2], '--', c='black', lw=3)
Explanation: Luminosity
Above 1.5 Msun, there appear to be very little deviations of the true model sequence from the initial seed model sequence. We can thus leave this parameteriztion alone. Below 1.5 Msun, we can alter the shape of the relationship down to 0.23 Msun. In addition, we can prescribe a new shape to the relationship for objects with masses below 0.23 Msun.
End of explanation
print("log(T) and log(L) Coefficients for the lowest mass objects: \n", tpoly1, poly1)
print("\n\nlog(T) and log(L) Coefficients for low mass objects: \n", tpoly2, poly2)
Explanation: Implementation
These new fits better represent the relaxed models, but will they work when implemented as seed values?
End of explanation |
3,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot sensor denoising using oversampled temporal projection
This demonstrates denoising using the OTP algorithm
Step1: Plot the phantom data, lowpassed to get rid of high-frequency artifacts.
We also crop to a single 10-second segment for speed.
Notice that there are two large flux jumps on channel 1522 that could
spread to other channels when performing subsequent spatial operations
(e.g., Maxwell filtering, SSP, or ICA).
Step2: Now we can clean the data with OTP, lowpass, and plot. The flux jumps have
been suppressed alongside the random sensor noise.
Step3: We can also look at the effect on single-trial phantom localization.
See the tut-brainstorm-elekta-phantom
for more information. Here we use a version that does single-trial
localization across the 17 trials that are in our 10-second window | Python Code:
# Author: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import mne
import numpy as np
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
Explanation: Plot sensor denoising using oversampled temporal projection
This demonstrates denoising using the OTP algorithm :footcite:LarsonTaulu2018
on data with with sensor artifacts (flux jumps) and random noise.
End of explanation
dipole_number = 1
data_path = bst_phantom_elekta.data_path()
raw = read_raw_fif(
op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))
raw.crop(40., 50.).load_data()
order = list(range(160, 170))
raw.copy().filter(0., 40.).plot(order=order, n_channels=10)
Explanation: Plot the phantom data, lowpassed to get rid of high-frequency artifacts.
We also crop to a single 10-second segment for speed.
Notice that there are two large flux jumps on channel 1522 that could
spread to other channels when performing subsequent spatial operations
(e.g., Maxwell filtering, SSP, or ICA).
End of explanation
raw_clean = mne.preprocessing.oversampled_temporal_projection(raw)
raw_clean.filter(0., 40.)
raw_clean.plot(order=order, n_channels=10)
Explanation: Now we can clean the data with OTP, lowpass, and plot. The flux jumps have
been suppressed alongside the random sensor noise.
End of explanation
def compute_bias(raw):
events = find_events(raw, 'STI201', verbose=False)
events = events[1:] # first one has an artifact
tmin, tmax = -0.2, 0.1
epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,
baseline=(None, -0.01), preload=True, verbose=False)
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,
verbose=False)
cov = mne.compute_covariance(epochs, tmax=0, method='oas',
rank=None, verbose=False)
idx = epochs.time_as_index(0.036)[0]
data = epochs.get_data()[:, :, idx].T
evoked = mne.EvokedArray(data, epochs.info, tmin=0.)
dip = fit_dipole(evoked, cov, sphere, n_jobs=1, verbose=False)[0]
actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]
misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)
return misses
bias = compute_bias(raw)
print('Raw bias: %0.1fmm (worst: %0.1fmm)'
% (np.mean(bias), np.max(bias)))
bias_clean = compute_bias(raw_clean)
print('OTP bias: %0.1fmm (worst: %0.1fmm)'
% (np.mean(bias_clean), np.max(bias_clean),))
Explanation: We can also look at the effect on single-trial phantom localization.
See the tut-brainstorm-elekta-phantom
for more information. Here we use a version that does single-trial
localization across the 17 trials that are in our 10-second window:
End of explanation |
3,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-spot Gamma Fitting
Step1: Load Data
Multispot
Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analyis - Leakage coefficient fit)
Step2: Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter)
Step3: Multispot PR for FRET population
Step4: usALEX
Corrected $E$ from μs-ALEX data
Step5: Multi-spot gamma fitting
Step6: Plot FRET vs distance | Python Code:
from fretbursts import fretmath
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from cycler import cycler
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import matplotlib as mpl
from cycler import cycler
bmap = sns.color_palette("Set1", 9)
colors = np.array(bmap)[(1,0,2,3,4,8,6,7), :]
mpl.rcParams['axes.prop_cycle'] = cycler('color', colors)
colors_labels = ['blue', 'red', 'green', 'violet', 'orange', 'gray', 'brown', 'pink', ]
for c, cl in zip(colors, colors_labels):
locals()[cl] = tuple(c) # assign variables with color names
sns.palplot(colors)
sns.set_style('whitegrid')
Explanation: Multi-spot Gamma Fitting
End of explanation
leakage_coeff_fname = 'results/Multi-spot - leakage coefficient KDE wmean DexDem.csv'
leakageM = float(np.loadtxt(leakage_coeff_fname, ndmin=1))
print('Multispot Leakage Coefficient:', leakageM)
Explanation: Load Data
Multispot
Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analyis - Leakage coefficient fit):
End of explanation
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_t beta.csv'
dir_ex_t = float(np.loadtxt(dir_ex_coeff_fname, ndmin=1))
print('Direct excitation coefficient (dir_ex_t):', dir_ex_t)
Explanation: Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter):
End of explanation
mspot_filename = 'results/Multi-spot - dsDNA - PR - all_samples all_ch.csv'
E_pr_fret = pd.read_csv(mspot_filename, index_col=0)
E_pr_fret
Explanation: Multispot PR for FRET population:
End of explanation
data_file = 'results/usALEX-5samples-E-corrected-all-ph.csv'
data_alex = pd.read_csv(data_file).set_index('sample')#[['E_pr_fret_kde']]
data_alex.round(6)
E_alex = data_alex.E_gauss_w
E_alex
Explanation: usALEX
Corrected $E$ from μs-ALEX data:
End of explanation
import lmfit
def residuals(params, E_raw, E_ref):
gamma = params['gamma'].value
# NOTE: leakageM and dir_ex_t are globals
return E_ref - fretmath.correct_E_gamma_leak_dir(E_raw, leakage=leakageM, gamma=gamma, dir_ex_t=dir_ex_t)
params = lmfit.Parameters()
params.add('gamma', value=0.5)
E_pr_fret_mean = E_pr_fret.mean(1)
E_pr_fret_mean
m = lmfit.minimize(residuals, params, args=(E_pr_fret_mean, E_alex))
lmfit.report_fit(m.params, show_correl=False)
E_alex['12d'], E_pr_fret_mean['12d']
m = lmfit.minimize(residuals, params, args=(np.array([E_pr_fret_mean['12d']]), np.array([E_alex['12d']])))
lmfit.report_fit(m.params, show_correl=False)
print('Fitted gamma(multispot):', m.params['gamma'].value)
multispot_gamma = m.params['gamma'].value
multispot_gamma
E_fret_mch = fretmath.correct_E_gamma_leak_dir(E_pr_fret, leakage=leakageM, dir_ex_t=dir_ex_t,
gamma=multispot_gamma)
E_fret_mch = E_fret_mch.round(6)
E_fret_mch
E_fret_mch.to_csv('results/Multi-spot - dsDNA - Corrected E - all_samples all_ch.csv')
'%.5f' % multispot_gamma
with open('results/Multi-spot - gamma factor.csv', 'wt') as f:
f.write('%.5f' % multispot_gamma)
norm = (E_fret_mch.T - E_fret_mch.mean(1))#/E_pr_fret.mean(1)
norm_rel = (E_fret_mch.T - E_fret_mch.mean(1))/E_fret_mch.mean(1)
norm.plot()
norm_rel.plot()
Explanation: Multi-spot gamma fitting
End of explanation
sns.set_style('whitegrid')
CH = np.arange(8)
CH_labels = ['CH%d' % i for i in CH]
dist_s_bp = [7, 12, 17, 22, 27]
fontsize = 16
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(dist_s_bp, E_fret_mch, '+', lw=2, mew=1.2, ms=10, zorder=4)
ax.plot(dist_s_bp, E_alex, '-', lw=3, mew=0, alpha=0.5, color='k', zorder=3)
plt.title('Multi-spot smFRET dsDNA, Gamma = %.2f' % multispot_gamma)
plt.xlabel('Distance in base-pairs', fontsize=fontsize);
plt.ylabel('E', fontsize=fontsize)
plt.ylim(0, 1); plt.xlim(0, 30)
plt.grid(True)
plt.legend(['CH1','CH2','CH3','CH4','CH5','CH6','CH7','CH8', u'μsALEX'],
fancybox=True, prop={'size':fontsize-1},
loc='best');
Explanation: Plot FRET vs distance
End of explanation |
3,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extracting the time series of activations in a label
We first apply a dSPM inverse operator to get signed activations in a label
(with positive and negative values) and we then compare different strategies
to average the time series in a label. We compare a simple average, with an
averaging using the dipoles normal (flip mode) and then a PCA,
also using a sign flip.
Step1: Compute inverse solution
Step2: View source activations
Step3: Using vector solutions
It's also possible to compute label time courses for a | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
label = 'Aud-lh'
label_fname = data_path + '/MEG/sample/labels/%s.label' % label
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src']
Explanation: Extracting the time series of activations in a label
We first apply a dSPM inverse operator to get signed activations in a label
(with positive and negative values) and we then compare different strategies
to average the time series in a label. We compare a simple average, with an
averaging using the dipoles normal (flip mode) and then a PCA,
also using a sign flip.
End of explanation
pick_ori = "normal" # Get signed values to see the effect of sign flip
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
label = mne.read_label(label_fname)
stc_label = stc.in_label(label)
modes = ('mean', 'mean_flip', 'pca_flip')
tcs = dict()
for mode in modes:
tcs[mode] = stc.extract_label_time_course(label, src, mode=mode)
print("Number of vertices : %d" % len(stc_label.data))
Explanation: Compute inverse solution
End of explanation
fig, ax = plt.subplots(1)
t = 1e3 * stc_label.times
ax.plot(t, stc_label.data.T, 'k', linewidth=0.5, alpha=0.5)
pe = [path_effects.Stroke(linewidth=5, foreground='w', alpha=0.5),
path_effects.Normal()]
for mode, tc in tcs.items():
ax.plot(t, tc[0], linewidth=3, label=str(mode), path_effects=pe)
xlim = t[[0, -1]]
ylim = [-27, 22]
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Activations in Label %r' % (label.name),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
Explanation: View source activations
End of explanation
pick_ori = 'vector'
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
data = stc_vec.extract_label_time_course(label, src)
fig, ax = plt.subplots(1)
stc_vec_label = stc_vec.in_label(label)
colors = ['#EE6677', '#228833', '#4477AA']
for ii, name in enumerate('XYZ'):
color = colors[ii]
ax.plot(t, stc_vec_label.data[:, ii].T, color=color, lw=0.5, alpha=0.5,
zorder=5 - ii)
ax.plot(t, data[0, ii], lw=3, color=color, label='+' + name, zorder=8 - ii,
path_effects=pe)
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Mean vector activations in Label %r' % (label.name,),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
Explanation: Using vector solutions
It's also possible to compute label time courses for a
:class:mne.VectorSourceEstimate, but only with mode='mean'.
End of explanation |
3,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Migrate LoggingTensorHook and StopAtStepHook to Keras callbacks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: TensorFlow 1
Step3: TensorFlow 2
Step4: Once they are ready, pass the new callbacks, StopAtStepCallback and LoggingTensorCallback, to the callbacks parameter of Model.fit. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import tensorflow.compat.v1 as tf1
features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [[0.3], [0.5], [0.7]]
# Define an input function.
def _input_fn():
return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)
Explanation: Migrate LoggingTensorHook and StopAtStepHook to Keras callbacks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/migrate/logging_stop_hook"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate/logging_stop_hook.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate/logging_stop_hook.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/migrate/logging_stop_hook.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
In TensorFlow 1, you use tf.estimator.LoggingTensorHook to monitor and log tensors, and tf.estimator.StopAtStepHook to stop training at a specified step, when training with tf.estimator.Estimator. This notebook demonstrates how to migrate from these APIs to their equivalents in TensorFlow 2 using custom Keras callbacks (tf.keras.callbacks.Callback) with Model.fit.
Keras callbacks are objects that are called at different points during training/evaluation/prediction in the Model.fit / Model.evaluate / Model.predict APIs. You can learn more about callbacks in the tf.keras.callbacks.Callback API docs, as well as in the Writing your own callbacks and Training and evaluation with the built-in methods (the Using callbacks section) guides. To migrate from SessionRunHook to Keras callbacks in TensorFlow 2, check out the Migrate training with assisted logic guide.
Setup
Start with imports and a simple dataset for demonstration purposes.
End of explanation
def _model_fn(features, labels, mode):
dense = tf1.layers.Dense(1)
logits = dense(features)
loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
optimizer = tf1.train.AdagradOptimizer(0.05)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
# Define the stop hook.
stop_hook = tf1.train.StopAtStepHook(num_steps=2)
# Access tensors to be logged by names.
kernel_name = tf.identity(dense.weights[0])
bias_name = tf.identity(dense.weights[1])
logging_weight_hook = tf1.train.LoggingTensorHook(
tensors=[kernel_name, bias_name],
every_n_iter=1)
# Log the training loss by the tensor object.
logging_loss_hook = tf1.train.LoggingTensorHook(
{'loss from LoggingTensorHook': loss},
every_n_secs=3)
# Pass all hooks to `EstimatorSpec`.
return tf1.estimator.EstimatorSpec(mode,
loss=loss,
train_op=train_op,
training_hooks=[stop_hook,
logging_weight_hook,
logging_loss_hook])
estimator = tf1.estimator.Estimator(model_fn=_model_fn)
# Begin training.
# The training will stop after 2 steps, and the weights/loss will also be logged.
estimator.train(_input_fn)
Explanation: TensorFlow 1: Log tensors and stop training with the tf.estimator API
In TensorFlow 1, you define various hooks to control the training behavior. Then, you pass these hooks to tf.estimator.EstimatorSpec.
In the example below:
To monitor/log tensors, for example, model weights or losses, you use tf.estimator.LoggingTensorHook (tf.train.LoggingTensorHook is its alias).
To stop training at a specific step, you use tf.estimator.StopAtStepHook (tf.train.StopAtStepHook is its alias).
End of explanation
class StopAtStepCallback(tf.keras.callbacks.Callback):
def __init__(self, stop_step=None):
super().__init__()
self._stop_step = stop_step
def on_batch_end(self, batch, logs=None):
if self.model.optimizer.iterations >= self._stop_step:
self.model.stop_training = True
print('\nstop training now')
class LoggingTensorCallback(tf.keras.callbacks.Callback):
def __init__(self, every_n_iter):
super().__init__()
self._every_n_iter = every_n_iter
self._log_count = every_n_iter
def on_batch_end(self, batch, logs=None):
if self._log_count > 0:
self._log_count -= 1
print("Logging Tensor Callback: dense/kernel:",
model.layers[0].weights[0])
print("Logging Tensor Callback: dense/bias:",
model.layers[0].weights[1])
print("Logging Tensor Callback loss:", logs["loss"])
else:
self._log_count -= self._every_n_iter
Explanation: TensorFlow 2: Log tensors and stop training with custom callbacks and Model.fit
In TensorFlow 2, when you use the built-in Keras Model.fit (or Model.evaluate) for training/evaluation, you can configure tensor monitoring and training stopping by defining custom tf.keras.callbacks.Callback classes. Then, you pass them to the callbacks parameter of Model.fit (or Model.evaluate). (Learn more in the Writing your own callbacks guide.)
In the example below:
To recreate the functionality of StopAtStepHook, define a custom callback (named StopAtStepCallback below) that overrides the on_batch_end method to stop training after a certain number of steps.
To recreate the LoggingTensorHook behavior, define a custom callback (LoggingTensorCallback) where you record and output the logged tensors manually, since accessing tensors by name is not supported. You can also implement the logging frequency inside the custom callback. The example below prints the weights every two steps. Other strategies, such as logging every N seconds, are also possible.
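For instance, a time-based variant could look roughly like the sketch below (not part of the original guide; it only assumes the standard tf.keras.callbacks.Callback API):
import time
import tensorflow as tf
class TimedLoggingCallback(tf.keras.callbacks.Callback):
    # Sketch: log the training loss every `every_n_secs` seconds instead of every N steps.
    def __init__(self, every_n_secs=3):
        super().__init__()
        self._every_n_secs = every_n_secs
        self._last_log = time.time()
    def on_batch_end(self, batch, logs=None):
        now = time.time()
        if now - self._last_log >= self._every_n_secs:
            self._last_log = now
            print("loss:", logs["loss"] if logs else None)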
End of explanation
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
model.compile(optimizer, "mse")
# Begin training.
# The training will stop after 2 steps, and the weights/loss will also be logged.
model.fit(dataset, callbacks=[StopAtStepCallback(stop_step=2),
LoggingTensorCallback(every_n_iter=2)])
Explanation: Once they are ready, pass the new callbacks, StopAtStepCallback and LoggingTensorCallback, to the callbacks parameter of Model.fit.
End of explanation |
3,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class Session 4 Exercise
Step1: Now, define a function that returns the index numbers of the neighbors of a vertex i, when the
graph is stored in adjacency matrix format. So your function will accept as an input a NxN numpy matrix.
Step2: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in adjacency list format (a list of lists)
Step3: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in edge-list format (a numpy array of length-two-lists). Use numpy.where and numpy.unique
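For reference, one possible set of implementations is sketched below (the exercise expects you to write your own; the _sketch names are ours):
import numpy as np
def enumerate_matrix_sketch(gmat, i):
    # Row i of the adjacency matrix; the nonzero columns are the neighbors.
    return np.nonzero(np.asarray(gmat[i, :]).ravel())[0]
def enumerate_adj_list_sketch(adj_list, i):
    # The adjacency list already stores the neighbors of vertex i directly.
    return adj_list[i]
def enumerate_edge_list_sketch(edge_list, i):
    # Keep every edge touching vertex i, then return the unique "other" endpoints.
    rows = np.where((edge_list[:, 0] == i) | (edge_list[:, 1] == i))[0]
    touching = edge_list[rows]
    return np.unique(touching[touching != i])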
Step4: This next function is the simulation function. "n" is the number of vertices.
It returns a length-three list containing the average running time for enumerating the neighbor vertices of a vertex in the graph.
Step5: A simulation with 1000 vertices clearly shows that adjacency list is fastest
Step6: We see the expected behavior, with the running time for the adjacency-matrix and edge-list formats going up when we increase "n", but there is hardly any change in the running time for the graph stored in adjacency list format | Python Code:
import numpy as np
import igraph
import timeit
import itertools
Explanation: Class Session 4 Exercise:
Comparing asymptotic running time for enumerating neighbors of all vertices in a graph
We will measure the running time for enumerating the neighbor vertices for three different data structures for representing an undirected graph:
adjacency matrix
adjacency list
edge list
Let's assume that each vertex is labeled with a unique integer number. So if there are N vertices, the vertices are labeled 0, 1, 2, 3, ..., N-1.
First, we will import all of the Python modules that we will need for this exercise:
note how we assign a short name, "np" to the numpy module. This will save typing.
End of explanation
def enumerate_matrix(gmat, i):
Explanation: Now, define a function that returns the index numbers of the neighbors of a vertex i, when the
graph is stored in adjacency matrix format. So your function will accept as an input a NxN numpy matrix.
End of explanation
def enumerate_adj_list(adj_list, i):
Explanation: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in adjacency list format (a list of lists):
End of explanation
def enumerate_edge_list(edge_list, i):
Explanation: Define a function that enumerates the neighbors of a vertex i, when the
graph is stored in edge-list format (a numpy array of length-two-lists). Use numpy.where and numpy.unique
End of explanation
def do_sim(n):
retlist = []
nrep = 10
nsubrep = 10
# this is (sort of) a Python way of doing the R function "replicate":
for _ in itertools.repeat(None, nrep):
# make a random undirected graph with fixed (average) vertex degree = 5
g = igraph.Graph.Barabasi(n, 5)
# get the graph in three different representations
g_matrix = np.matrix(g.get_adjacency().data)
g_adj_list = g.get_adjlist()
g_edge_list = np.array(g.get_edgelist())
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_matrix(g_matrix, i)
matrix_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_adj_list(g_adj_list, i)
adjlist_elapsed = timeit.default_timer() - start_time
start_time = timeit.default_timer()
for _ in itertools.repeat(None, nsubrep):
for i in range(0, n):
enumerate_edge_list(g_edge_list, i)
edgelist_elapsed = timeit.default_timer() - start_time
retlist.append([matrix_elapsed, adjlist_elapsed, edgelist_elapsed])
# average over replicates and then
# divide by n so that the running time results are on a per-vertex basis
return np.mean(np.array(retlist), axis=0)/n
Explanation: This next function is the simulation function. "n" is the number of vertices.
It returns a length-three list containing the average running time for enumerating the neighbor vertices of a vertex in the graph.
End of explanation
do_sim(1000)*1000
Explanation: A simulation with 1000 vertices clearly shows that adjacency list is fastest:
(I multiply by 1000 just so the results are in ms.)
End of explanation
do_sim(2000)*1000
Explanation: We see the expected behavior, with the running time for the adjacency-matrix and edge-list formats going up when we increase "n", but there is hardly any change in the running time for the graph stored in adjacency list format:
End of explanation |
3,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Workshop 3 - Practice Makes Perfect
There is a sign-in sheet, sign in or you won't get credit for attendance today!
Today
Step1: Breaking it down
Step2: Problem 1
We are going to plot a sawtooth wave. This is trickier, since there isn't a numpy function for it!
You will have to construct a list of numbers which contains the correct y values.
This list of values will have to be something like
Step3: Problem 2
Fun with lists!
Lists can do all sorts of things, for example, we can repeat entries of a list many times
Step4: Now redo problem 1 but using this, to get this to work you will have to build a list of the correct length.
range() isn't actually a list, but you can turn it into one!
list(range())
There are many ways to achieve the same goal when programming, some take less effort than others.
Step5: Problem 3
Now we will play with another aspect of lists, indexing.
Given a list of numbers x, set every 5th number to 2.
In order to do this, we need to have a way of accessing an element of the list.
x[0] is the first element of the list
x[1] is the second
x[2] is the third
...
x[n] is the n+1 th
this is called 0-indexing, because we started counting at 0 (if you were wondering why i always start counting from 0 in this class, this is why)
We can set a value of a list to something like so
Step6: Problem 4
Tell me the average value, and standard deviation, of a list of numbers I provide | Python Code:
import matplotlib.pyplot as plt
import numpy as np
# Base Python range() doesn't allow decimal numbers
# numpy improved and made their own:
t = np.arange(0.0, 1., 0.01)
y = t**3.
plt.plot(100 * t, y)
plt.xlabel('Time (% of semester)')
plt.ylabel('Enjoyment of Fridays')
plt.title('Happiness over Time')
plt.show()
Explanation: Workshop 3 - Practice Makes Perfect
There is a sign-in sheet, sign in or you wont get credit for attendance today!
Today:
Today is about practicing what we were introduced to last friday.
First I will review what we did, then talk a little about whitespace, and packages.
Github Repo for Workshops - Where you can download the notebooks used in this class - https://github.com/dahlend/Physics77Fall17
Goals for Today:
You can download todays notebook from github.com/dahlend/Physics77Fall17
- 0-Review
- 1-Whitespace
- 2-Packages!
- 3-Problems
- Problem 0
- Problem 1
- Problem 2
- Problem 3
- Problem 4
0 - Review
Reminder cheat sheet:
Types of Variables (not complete list)
|Type | Name | Example |
| ----- | -------------- | -------:|
|int() | Integer | 1337 |
|float() | decimal number | -2345.12|
|complex() | complex number | 7-1j |
|string() | text | "Hello World"|
|list() | list of things| ['c', 1, 3, 1j]|
|bool() | boolean (True or False)| True |
Comparing things
|Operator| Name|
|:------:| ----|
| > | greater than|
| < | less than|
| == | equal|
| >= | greater than or equal|
| <= | less than or equal|
| != | not equal |
If Statements
if (some condition):
if the condition is true, run code which is indented here
else:
run this code if the condition is not met
For Loops
some_list = [1, 2, 'text', -51.2]
for x in some_list:
print(x)
if type(x) == int:
print("x is an integer!")
1 - Whitespace
Whitespace is a name for the space character ' ' or a tab, which is written '\t'. Whitespace is very important in python (This is not always true with other languages), it is used to signify hierarchy.
What does that mean? Lets say we have an 'if' statement:
if 6 < 5:
print("Hi im filler text")
print("see any good movies lately?")
print("ok, 6 is less than 5. That's not true!")
print("If statement is over")
Whitespace is how you tell python what lines of code are associated with the if statement, so in this case we should see only the output:
"If statement is over"
Python treats all code which is at the same number of spaces as being in the same sort of 'block', so above, the 3 print lines 'inside' the if statement will be run when the statement is true.
Having to add space before lines like this happens any time you have a line of code that ends with a ':'
Examples of this are:
# If statements
if True:
print("Doing stuff")
else:
print("Doing other stuff")
# For loops
for x in some_list:
print('x is equal to ', x)
print('x + 5 = ', x + 5)
# Defining your own function
def my_function(x):
print("my_function was given the variable x=", x)
return x
When you combine multiples of these together, you have to keep indenting!
x = 6
for y in range(10):
print("y = ", y) # This will run for every y in [0,...,9]
if x > y:
print(x) # this only runs if x > y, for every y in [0,...,9]
else:
print("y is bigger than x!")
2 - Packages (AKA Libraries, Modules, etc.)
Python itself has a limited number of tools available to you. For example, lets say I want to know the result of some bessel function. The average python user doesn't care about that, so its not included in base python. But if you recall, on the first day I said, 'someone, somewhere, has done what I'm trying to do'. What we can do is use their solution.
Libraries, like their namesakes, are collections of information, in this case they generally contain functions and tools that other people have built and made available to the world.
The main packages we will use in this class are:
- Numpy (Numerical Python) - essential mathematical methods for number crunching
- MatPlotLib - the standard plotting toolset used in python
- Scipy (Scientific Python) - advanced mathematical tools built on numpy
Lets take a look at matplotlib's website
https://matplotlib.org/
All of these packages are EXTREMELY well documented, hopefully you will get comfortable looking at the documentation for these tools by the end of the semester.
Example:
End of explanation
# Your Code Here
Explanation: Breaking it down:
I want access to numpy's functions, so I need to 'import' the package.
But 'numpy' is annoying to type, so I'll name it np when I use it.
import numpy as np
This means all of numpy is available to me, but I have to tell python when I'm using it.
np.arange(0, 1, 0.01)
This says, there is a function called arange() inside numpy, so I'm telling python to use it.
Periods here are used to signify something is INSIDE something else, IE:
np.arange is saying look for a thing called 'arange' inside np (which is a shorthand for numpy)
Links to documentation for some functions used above:
np.arange
plt.plot
3 - Problems
Problem 0
Plot a sin wave using np.sin(), the example above is a good starting point!
Hint:
np.sin can accept a list of numbers and returns the sin for each of the numbers as another list.
End of explanation
n = 20
length = 100
# Your code here
Explanation: Problem 1
We are going to plot a sawtooth wave. This is trickier, since there isn't a numpy function for it!
You will have to construct a list of numbers which contains the correct y values.
This list of values will have to be something like:
y = [0, 1, 2, 3, 4, 5, 0, 1, 2 ...]
In this case, there we have the numbers from 0 to 5 being repeated over and over.
Goals for the sawtooth plot:
- go from 0 to n-1 - in the example above, n=6, where n is how many numbers are being repeated
- have a total of length numbers in the list y
Steps to pull this off:
1) Start with an empty list, this can be done with y = [ ]
We will then loop length times, adding the right value to the end of the list y
2) Make a for loop going from 0 to length, and call the iteration value i
3) Now we have to add the correct value to the end of the list y, for the first n steps of the loop this is easy, we just are adding i.
Thinking this through:
i = 0, we add 0 to the end of the list
i = 1, we add 1 to the end of the list
...
i = n, we add 0 to the end of the list
i = n + 1, we add 1 to the end of the list
i = n + 2, we add 2 to the end of the list
...
i = 2*n, we add 0 to the end of the list
i = 2*n+1, we add 1 to the end of the list
Hint
Remember the % operator from last week?
5 % 2 = 1 ( the remainder after division is 1)
(3*n) % n = 0 $\qquad$ $\frac{3n}{n}$ is 3 remainder 0
(3*n + 1) % n = 1 $\qquad$ $\frac{3n+1}{n}$ is 3 remainder 1
4) Once we know the correct value from (3), we can add it to the list y with
y.append(value_from_3)
5) Plot it!
Lists can have values "appended" to them, in other words you can add more things to the list you have already made.
End of explanation
list_of_single_thing = ['hello']
5 * list_of_single_thing
# Look familiar?
4 * [0, 1, 2, 3, 4, 5]
Explanation: Problem 2
Fun with lists!
Lists can do all sorts of things, for example, we can repeat entries of a list many times:
End of explanation
n = 20
length = 100
# Your code here
Explanation: Now redo problem 1 but using this, to get this to work you will have to build a list of the correct length.
range() isn't actually a list, but you can turn it into one!
list(range())
There are many ways to achieve the same goal when programming, some take less effort than others.
End of explanation
# Here is a list of 100 zeros
x = 100*[0]
Explanation: Problem 3
Now we will play with another aspect of lists, indexing.
Given a list of numbers x, set every 5th number to 2.
In order to do this, we need to have a way of accessing an element of the list.
x[0] is the first element of the list
x[1] is the second
x[2] is the third
...
x[n] is the n+1 th
this is called 0-indexing, because we started counting at 0 (if you were wondering why i always start counting from 0 in this class, this is why)
We can set a value of a list to something like so:
x[7] = 2
now the 8th element of the list x is 2.
So to solve this problem, I'm providing you a list x, containing 100 zeros.
Set every 5th to a 2, and plot the result.
Steps:
1) Make a for loop going from 0 to 100 in steps of 5
hint
range(0, 100, 5)
2) for each step in the for loop set the x[i] number to 2
3) plot the result
End of explanation
x = np.log(np.arange(1, 100) ** 3)
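# One possible answer (sketch): numpy has these built in.
print("average value:", np.mean(x))
print("standard deviation:", np.std(x))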
# Your code here
Explanation: Problem 4
Tell me the average value, and standard deviation, of a list of numbers I provide:
numpy has a function for this, google is your friend.
End of explanation |
3,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using a notebook
The purpose of this notebook is to introduce the Jupyter interface. This notebook is a guide to the Jupyter interface and writing code and text in Jupyter notebooks with the Python programming language and Markdown, the lightweight markup language.
This notebook was originally created for a Digital Mixer session at the 2016 STELLA Unconference
Cells
The basic structure of a Jupyter notebook consists of linear sequence of cells from the top to the bottom of the page. A cell's content can consist of either
Step1: 2. Markdown/html
This is a big header written in markdown
This is a medium header written in markdown
This is a small header written in markdown
This is a paragraph written in markdown
<div class="alert alert-warning" role="alert"><p>You can also write with <strong>html tags</strong></p></div>
3. raw text
Click on any text above to see what cell it belongs to. The active cell will be surrounded by a green or blue outline. A green outline indicates you are in edit mode for that cell and you can type in the cell. A blue outline indicates that you are in command mode and you cannot type in the active cell. To enter edit mode in a cell click on any code input area or double-click on any rendered Markdown text.
Notice that you can see the content type of the active cell in the multi-choice button in the notebook toolbar at the top of the page | Python Code:
# create a range of numbers
numbers = range(0, 5)
# print out each of the numbers in the range
for number in numbers:
print(number)
Explanation: Using a notebook
The purpose of this notebook is to introduce the Jupyter interface. This notebook is a guide to the Jupyter interface and writing code and text in Jupyter notebooks with the Python programming language and Markdown, the lightweight markup language.
This notebook was originally created for a Digital Mixer session at the 2016 STELLA Unconference
Cells
The basic structure of a Jupyter notebook consists of a linear sequence of cells from the top to the bottom of the page. A cell's content can consist of either:
1. code and code output
End of explanation
# this is Python code -> RUN IT
x = 2
# the output of the last line of code is shown below the cell
x * x
# this is Python code -> RUN IT
x = 2
# you can also use the 'print' statement to print information to the output below the cell
print(x)
# the output of the last line of code is still shown below the cell
x * x
# this is Python code -> RUN IT
x = 2
# if there is an error in your code an error message will display in the output below the cell
x + "two"
Explanation: 2. Markdown/html
This is a big header written in markdown
This is a medium header written in markdown
This is a small header written in markdown
This is a paragraph written in markdown
<div class="alert alert-warning" role="alert"><p>You can also write with <strong>html tags</strong></p></div>
3. raw text
Click on any text above to see what cell it belongs to. The active cell will be surrounded by a green or blue outline. A green outline indicates you are in edit mode for that cell and you can type in the cell. A blue outline indicates that you are in command mode and you cannot type in the active cell. To enter edit mode in a cell click on any code input area or double-click on any rendered Markdown text.
Notice that you can see the content type of the active cell in the multi-choice button in the notebook toolbar at the top of the page:
You can also use this button to change the cell type.
Adding, removing, and moving cells
You can manage cells using the notebook toolbar.
* Adding a cell: To add a new cell below the active cell, click <i class="fa-plus fa"></i>
* Cut/copy/paste a cell: Use <i class="fa-cut fa"></i> to cut or <i class="fa-copy fa"></i> to copy a cell and <i class="fa-paste fa"></i> to paste the cut/copied cell below the active cell
* Move a cell: To move the active cell up or down, click <i class="fa-arrow-up fa"></i> or <i class="fa-arrow-down fa"></i>
* Delete a cell: To delete the active cell, click Edit > Delete Cells
Try adding a new Markdown cell and a Code cell, moving them around, and deleting them. If you accidentally delete something you shouldn't have, you can undo it by going to: Edit > Undo Delete Cells
Running a cell
To run code in a cell or to render markdown as html in a cell you must run the cell.
To run the contents of a cell:
1. activate it
2. press shift+return or click <i class="fa-step-forward fa"></i> in the notebook toolbar at the top of the page.
Try running the three Python code cells below. You can edit and re-run a cell as many times as you want.
End of explanation |
3,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras for Text Classification
Learning Objectives
1. Learn how to create a text classification datasets using BigQuery
1. Learn how to tokenize and integerize a corpus of text for training in Keras
1. Learn how to do one-hot-encodings in Keras
1. Learn how to use embedding layers to represent words in Keras
1. Learn about the bag-of-word representation for sentences
1. Learn how to use DNN/CNN/RNN model to classify text in keras
Introduction
In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.
The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Lab Task 1a
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels
The dataset we pulled from BigQuery satisfies these requirements.
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, TechCrunch, or The New York Times).
Step15: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that
Step16: Let's now implement a function create_sequence that will
* take as input our titles as well as the maximum sentence length and
* returns a list of the integers corresponding to our tokens padded to the sentence maximum length
Keras has the helper functions pad_sequence for that on the top of the tokenizer methods.
Lab Task #2
Step17: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
Step18: Lab Task #3
Step19: Preparing the train/test splits
Let's split our data into train and test splits
Step20: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since this is the case, accuracy will be a good metric to use to measure
the performance of our models.
Step21: Using create_sequence and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded lists of integers and the labels will be one-hot-encoded 3D vectors.
Step22: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Lab Tasks #4, #5, and #6
Step23: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.
Step24: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Step25: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs)
Step26: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Complete the code below to create a CNN model for text classification. This model is similar to the previous models in that you should start with an embedding layer. However, the embedding next layers should pass through a 1-dimensional convolution and ultimately the final fully connected, dense layer. Use the arguments of the build_cnn_model function to set up the 1D convolution layer.
Step27: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
Step28: (Optional) Using the Keras Text Preprocessing Layer
Thanks to the new Keras preprocessing layer, we can also include the preprocessing of the text (i.e., the tokenization followed by the integer representation of the tokens) within the model itself as a standard Keras layer. Let us first import this text preprocessing layer
Step29: At instantiation, we can specify the maximum length of the sequence output as well as the maximum number of tokens to be considered
Step30: Before using this layer in our model, we need to adapt it to our data so that it generates a token-to-integer mapping. Remeber our dataset looks like the following
Step31: We can directly use the Pandas Series corresponding to the titles in our dataset to adapt the data using the adapt method
Step32: At this point, the preprocessing layer can create the integer representation of our input text if we simply apply the layer to it
Step33: Exercise
Step34: Our model is now able to consume text directly as input! Again, consider the following text sample
Step35: Then we can have our model directly predict on this input
Step36: Of course the model above has not yet been trained, so its predictions are meaningless so far. Let us train it now on our dataset as before | Python Code:
import os
import pandas as pd
from google.cloud import bigquery
%load_ext google.cloud.bigquery
Explanation: Keras for Text Classification
Learning Objectives
1. Learn how to create a text classification datasets using BigQuery
1. Learn how to tokenize and integerize a corpus of text for training in Keras
1. Learn how to do one-hot-encodings in Keras
1. Learn how to use embedding layers to represent words in Keras
1. Learn about the bag-of-word representation for sentences
1. Learn how to use DNN/CNN/RNN model to classify text in keras
Introduction
In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors.
The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT = {PROJECT}
%env BUCKET = {PROJECT}
%env REGION = "us-central1"
SEED = 0
Explanation: Replace the variable values in the cell below:
End of explanation
%%bigquery --project $PROJECT
SELECT
# TODO: Your code goes here.
FROM
# TODO: Your code goes here.
WHERE
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
LIMIT 10
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Lab Task 1a:
Complete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with
* title length greater than 10 characters
* score greater than 10
* url length greater than 0 characters
End of explanation
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
# TODO: Your code goes here.
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
# TODO: Your code goes here.
GROUP BY
# TODO: Your code goes here.
ORDER BY num_articles DESC
LIMIT 100
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
Lab task 1b:
Complete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use a regex command on the url of the article. To count the number of articles you'll use a GROUP BY in SQL, and we'll also restrict our attention to only those articles whose title is longer than 10 characters.
End of explanation
regex = ".*://(.[^/]+)/"
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(
regex
)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(
sub_query=sub_query
)
print(query)
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
print(f"The full dataset contains {len(title_dataset)} titles")
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
title_dataset.source.value_counts()
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
DATADIR = "./data/"
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = "titles_full.csv"
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
sample_title_dataset = # TODO: Your code goes here.
# TODO: Your code goes here.
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c:
Use .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories?
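One possible way to build the sample (a sketch; random_state=SEED just makes the sample reproducible):
sample_title_dataset = title_dataset.sample(n=1000, random_state=SEED)
sample_title_dataset.source.value_counts()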
End of explanation
SAMPLE_DATASET_NAME = "titles_sample.csv"
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset.head()
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import (
GRU,
Conv1D,
Dense,
Embedding,
Flatten,
Lambda,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
Explanation: Let's write the sample dataset to disk.
End of explanation
LOGDIR = "./text_models"
DATA_DIR = "./data"
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ["title", "source"]
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
Explanation: Loading the dataset
Our dataset consists of titles of articles along with the label indicating which source these articles have been taken from (GitHub, TechCrunch, or The New York Times).
End of explanation
tokenizer = Tokenizer()
tokenizer.fit_on_texts(titles_df.title)
integerized_titles = tokenizer.texts_to_sequences(titles_df.title)
integerized_titles[:3]
VOCAB_SIZE = len(tokenizer.index_word)
VOCAB_SIZE
DATASET_SIZE = tokenizer.document_count
DATASET_SIZE
MAX_LEN = max(len(sequence) for sequence in integerized_titles)
MAX_LEN
Explanation: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that:
End of explanation
# TODO 1
def create_sequences(texts, max_len=MAX_LEN):
sequences = # TODO: Your code goes here.
padded_sequences = # TODO: Your code goes here.
return padded_sequences
sequences = create_sequences(titles_df.title[:3])
sequences
titles_df.source[:4]
Explanation: Let's now implement a function create_sequence that will
* take as input our titles as well as the maximum sentence length and
* returns a list of the integers corresponding to our tokens padded to the sentence maximum length
Keras has the helper function pad_sequences for that on top of the tokenizer methods.
Lab Task #2:
Complete the code in the create_sequences function below to
* create text sequences from texts using the tokenizer we created above
* pad the end of those text sequences to have length max_len
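For reference, a possible implementation sketch (not necessarily identical to the official solution):
def create_sequences(texts, max_len=MAX_LEN):
    # integerize the texts, then pad every sequence to the same length
    sequences = tokenizer.texts_to_sequences(texts)
    padded_sequences = pad_sequences(sequences, max_len)
    return padded_sequences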
End of explanation
CLASSES = {"github": 0, "nytimes": 1, "techcrunch": 2}
N_CLASSES = len(CLASSES)
Explanation: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
End of explanation
# TODO 2
def encode_labels(sources):
classes = # TODO: Your code goes here.
one_hots = # TODO: Your code goes here.
return one_hots
encode_labels(titles_df.source[:4])
Explanation: Lab Task #3:
Complete the code in the encode_labels function below to
* create a list that maps each source in sources to its corresponding numeric value using the dictionary CLASSES above
* use the Keras function to one-hot encode the variable classes
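A possible implementation sketch:
def encode_labels(sources):
    # map each source name to its class index, then one-hot encode
    classes = [CLASSES[source] for source in sources]
    one_hots = to_categorical(classes, num_classes=N_CLASSES)
    return one_hots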
End of explanation
N_TRAIN = int(DATASET_SIZE * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN],
titles_df.source[:N_TRAIN],
)
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:],
titles_df.source[N_TRAIN:],
)
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
sources_train.value_counts()
sources_valid.value_counts()
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since this is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)
X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
Explanation: Using create_sequence and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded lists of integers and the labels will be one-hot-encoded 3D vectors.
End of explanation
# TODOs 4-6
def build_dnn_model(embed_dim):
model = Sequential(
[
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
]
)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
Explanation: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Lab Tasks #4, #5, and #6:
Create a Keras Sequential model with three layers:
* The first layer should be an embedding layer with output dimension equal to embed_dim.
* The second layer should use a Lambda layer to create a bag-of-words representation of the sentences by computing the mean.
* The last layer should use a Dense layer to predict which class the example belongs to.
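For reference, one way these three layers could look (a sketch, not necessarily the official solution):
def build_dnn_model(embed_dim):
    model = Sequential(
        [
            Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]),
            Lambda(lambda x: tf.reduce_mean(x, axis=1)),  # bag-of-words average
            Dense(N_CLASSES, activation="softmax"),
        ]
    )
    model.compile(
        optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
    )
    return model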
End of explanation
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "dnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
BATCH_SIZE = 300
EPOCHS = 100
EMBED_DIM = 10
PATIENCE = 5
dnn_model = build_dnn_model(embed_dim=EMBED_DIM)
dnn_history = dnn_model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(dnn_history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(dnn_history.history)[["accuracy", "val_accuracy"]].plot()
dnn_model.summary()
Explanation: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard.
End of explanation
def build_rnn_model(embed_dim, units):
model = Sequential(
[
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation="softmax")
]
)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
Explanation: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6:
Complete the code below to build an RNN model which predicts the article class. The code below is similar to the DNN you created above; however, here we do not need to use a bag-of-words representation of the sentence. Instead, you can pass the embedding layer directly to an RNN/LSTM/GRU layer.
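A possible sketch of the missing layers (mask_zero=True so the recurrent layer ignores the padded positions):
def build_rnn_model(embed_dim, units):
    model = Sequential(
        [
            Embedding(
                VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True
            ),
            GRU(units),
            Dense(N_CLASSES, activation="softmax"),
        ]
    )
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model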
End of explanation
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "rnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 2
rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)
history = rnn_model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
rnn_model.summary()
Explanation: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs):
End of explanation
def build_cnn_model(embed_dim, filters, ksize, strides):
model = Sequential(
[
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation="softmax")
]
)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
Explanation: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Complete the code below to create a CNN model for text classification. This model is similar to the previous models in that you should start with an embedding layer. However, the embedding next layers should pass through a 1-dimensional convolution and ultimately the final fully connected, dense layer. Use the arguments of the build_cnn_model function to set up the 1D convolution layer.
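A possible sketch (the Conv1D hyperparameters come straight from the function arguments, and mask_zero follows the note above):
def build_cnn_model(embed_dim, filters, ksize, strides):
    model = Sequential(
        [
            Embedding(
                VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True
            ),
            Conv1D(
                filters=filters,
                kernel_size=ksize,
                strides=strides,
                activation="relu",
            ),
            Flatten(),
            Dense(N_CLASSES, activation="softmax"),
        ]
    )
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model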
End of explanation
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "cnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 5
FILTERS = 200
STRIDES = 2
KSIZE = 3
PATIENCE = 2
cnn_model = build_cnn_model(
embed_dim=EMBED_DIM,
filters=FILTERS,
strides=STRIDES,
ksize=KSIZE,
)
cnn_history = cnn_model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(cnn_history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(cnn_history.history)[["accuracy", "val_accuracy"]].plot()
cnn_model.summary()
Explanation: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
End of explanation
from keras.layers import TextVectorization
Explanation: (Optional) Using the Keras Text Preprocessing Layer
Thanks to the new Keras preprocessing layer, we can also include the preprocessing of the text (i.e., the tokenization followed by the integer representation of the tokens) within the model itself as a standard Keras layer. Let us first import this text preprocessing layer:
End of explanation
MAX_LEN = 26
MAX_TOKENS = 20000
preprocessing_layer = TextVectorization(
output_sequence_length=MAX_LEN, max_tokens=MAX_TOKENS
)
Explanation: At instantiation, we can specify the maximum length of the sequence output as well as the maximum number of tokens to be considered:
End of explanation
titles_df.head()
Explanation: Before using this layer in our model, we need to adapt it to our data so that it generates a token-to-integer mapping. Remember our dataset looks like the following:
End of explanation
preprocessing_layer.adapt(titles_df.title)
Explanation: We can directly use the Pandas Series corresponding to the titles in our dataset to adapt the data using the adapt method:
End of explanation
X_train, X_valid = titles_train, titles_valid
X_train[:5]
integers = preprocessing_layer(X_train[:5])
integers
Explanation: At this point, the preprocessing layer can create the integer representation of our input text if we simply apply the layer to it:
End of explanation
def build_model_with_text_preprocessing(embed_dim, units):
# TODO
return model
Explanation: Exercise: In the following cell, implement a function
build_model_with_text_preprocessing(embed_dim, units) that returns a text model with the following sequential structure:
the preprocessing_layer we defined above followed by
an embedding layer with embed_dim dimension for the output vectors followed by
a GRU layer with units number of neurons followed by
a final dense layer for classification
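A possible sketch, assuming the adapted preprocessing_layer from above (MAX_TOKENS bounds the vocabulary the embedding has to cover):
def build_model_with_text_preprocessing(embed_dim, units):
    model = Sequential(
        [
            preprocessing_layer,
            Embedding(MAX_TOKENS + 1, embed_dim, mask_zero=True),
            GRU(units),
            Dense(N_CLASSES, activation="softmax"),
        ]
    )
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model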
End of explanation
X_train[:5]
Explanation: Our model is now able to consume text directly as input! Again, consider the following text sample:
End of explanation
model = build_model_with_text_preprocessing(embed_dim=EMBED_DIM, units=UNITS)
model.predict(X_train[:5])
Explanation: Then we can have our model directly predict on this input:
End of explanation
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, "rnn")
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 2
model = build_model_with_text_preprocessing(embed_dim=EMBED_DIM, units=UNITS)
history = model.fit(
X_train,
Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
model.summary()
Explanation: Of course the model above has not yet been trained, so its predictions are meaningless so far. Let us train it now on our dataset as before:
End of explanation |
3,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here we are using the California Housing dataset to learn more about Machine Learning.
Step1: In the meanwhile we are trying to have more information about pandas. In the following sections we are using the value_counts method to have more information about each feature values. This method specify number of different values for given feature.
Step2: See the difference between loc and iloc methods in a simple pandas DataFrame.
Step3: Here we want to see the apply function of pandas for an specific feature.
Step4: The following function helps to split the given dataset into test and train sets. | Python Code:
import pandas as pd
housing = pd.read_csv('housing.csv')
housing.head()
housing.info()
housing.describe()
Explanation: Here we are using the California Housing dataset to learn more about Machine Learning.
End of explanation
housing['total_rooms'].value_counts()
housing['ocean_proximity'].value_counts()
Explanation: Meanwhile, we are also getting familiar with pandas. In the following sections we use the value_counts method to get more information about each feature's values. This method counts how many times each distinct value of a given feature occurs.
End of explanation
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).iloc[1]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1, ['b']]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[[True, True, False]]
Explanation: See the difference between loc and iloc methods in a simple pandas DataFrame.
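To make the label-vs-position distinction easier to see, here is a small illustrative frame (not part of the housing data) with a non-default index:
df = pd.DataFrame({'a': [1, 2, 3]}, index=[10, 20, 30])
df.loc[10]   # selects by index label 10 (the first row)
df.iloc[1]   # selects by integer position 1 (the second row)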
End of explanation
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}])['a'].apply(lambda a: a > 10)
Explanation: Here we want to see the apply function of pandas for a specific feature.
End of explanation
from zlib import crc32
import numpy as np
def test_set_check(identifier, test_ratio):
return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32
def split_train_test_by_id(data, test_ratio, id_column):
ids = data[id_column]
in_test_set = ids.apply(lambda _id: test_set_check(_id, test_ratio))
return data.loc[~in_test_set], data.loc[in_test_set]
housing_with_id = housing.reset_index() # adds an "index" column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, 'index')
housing = train_set.copy()
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
import matplotlib.pyplot as plt
housing.plot(kind='scatter', x='longitude', y='latitude',
alpha=0.4, s=housing['population']/100, label='population',
c='median_house_value', cmap=plt.get_cmap('jet'), colorbar=True,
)
Explanation: The following function helps to split the given dataset into test and train sets.
End of explanation |
3,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
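For example, undoing the scaling for the target later looks roughly like this (using the saved factors):
mean, std = scaled_features['cnt']
original_cnt = data['cnt'] * std + mean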
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
#self.activation_function = lambda x : sigmoid(x) # Replace 0 with your sigmoid calculation.
self.activation_function = lambda x: 1/(1+np.exp(-x))
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
#def sigmoid(x):
# return 1/(1+np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs =np.dot(X,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
#delta=output_error_term*hidden_outputs
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.matmul(hidden_outputs,self.weights_hidden_to_output)
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y-final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = error*(self.weights_hidden_to_output.T)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error*hidden_outputs*(1-hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term*X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term*hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr*delta_weights_h_o/ n_records# update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.5
hidden_nodes = 30
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
Up to a point, the more hidden nodes you have, the more accurate the model's predictions will be. Try a few different numbers and see how they affect the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
End of explanation
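If you want to compare settings more systematically, a small sweep can help. The sketch below is illustrative only; it reuses NeuralNetwork, MSE, and the data splits defined earlier, and the candidate hidden-node counts and the shortened run length are arbitrary choices, not values from the project.
# Illustrative sketch: short training runs to compare a few hidden-node counts by validation loss
for hidden in (10, 20, 30):
    net = NeuralNetwork(N_i, hidden, output_nodes, learning_rate)
    for _ in range(1000):   # deliberately short; just enough to compare settings
        batch = np.random.choice(train_features.index, size=128)
        net.train(train_features.loc[batch].values, train_targets.loc[batch]['cnt'])
    val_loss = MSE(net.run(val_features).T, val_targets['cnt'].values)
    print('hidden nodes:', hidden, 'validation loss:', val_loss)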
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
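For a single summary number to go with the plot, you can also compute the loss on the test set in the same way as the training and validation losses above (a quick illustrative check, assuming MSE and the test split defined earlier in the notebook):
# Test-set loss, mirroring the train/validation loss calculation
MSE(network.run(test_features).T, test_targets['cnt'].values)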
End of explanation |
3,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: <div style="text-align
Step4: In this notebook we use this code to show how to solve some particularly perplexing paradoxical probability problems.
Child Paradoxes
In 1959, Martin Gardner posed these two problems
Step5: Let's define predicates for the conditions of having two boys, and of the older child being a boy
Step6: Now we can answer Problem 1
Step7: You're probably thinking that was a lot of mechanism just to get the obvious answer. But in the next problems, what is obvious becomes less obvious.
Child Problem 2
Step8: Understanding the answer is tougher. Some people think the answer should be 1/2. Can we justify the answer 1/3? We can see there are three equiprobable outcomes in which there is at least one boy
Step9: Of those three outcomes, only one has two boys, so the answer of 1/3 is indeed justified.
But some people still think the answer should be 1/2.
Their reasoning is "If one child is a boy, then there are two equiprobable outcomes for the other child, so the probability that the other child is a boy, and thus that there are two boys, is 1/2."
When two methods of reasoning give two different answers, we have a paradox. Here are three responses to a paradox
Step10: Now we can figure out the subset of this sample space in which we observe Mr. Smith with a boy
Step11: And finally we can determine the probability that he has two boys, given that we observed him with a boy
Step12: The paradox is resolved. Two reasonable people can have different interpretations of the problem, and can each reason flawlessly to reach different conclusions, 1/3 or 1/2.
Which interpretation of the problem is "better?" We could debate that, or we could just agree to use unambiguous wording (that is, use the language of Experiment 2a or Experiment 2b, not the ambiguous language of Problem 2).
The Reasonable Person Principle
It is an unfortunate fact of human nature that we often assume the other person is an idiot. As George Carlin puts it "Have you ever noticed when you're driving that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?"
<img src="https
Step13: That's too many to print, but we can sample them
Step14: We determine below that the probability of having at least one boy is 3/4, both in S3 (where we keep track of the birth day of week) and in S (where we don't)
Step15: The probability of two boys is 1/4 in either sample space
Step16: And the probability of two boys given at least one boy is 1/3 in either sample space
Step17: We will define a predicate for the event of at least one boy born on Tuesday
Step18: We are now ready to answer Problem 3
Step19: 13/27?
How many saw that coming? 13/27 is quite different from 1/3, but rather close to 1/2. So "at least one boy born on Tuesday" is quite different from "at least one boy." Are you surprised? Do you accept the answer, or do you think we did something wrong? Are there other interpretations of the experiment that lead to other answers?
Here is one alternative interpretation
Step20: Now we can answer this version of Child Problem 3
Step22: So with the wording of Child Experiment 3b, the answer is the same as 2b.
Still confused? Let's build a visualization tool to make things more concrete.
Visualization
We'll display the results as a two dimensional grid of outcomes. An outcome will be colored white if it does not satisfy the condition stated in the problem; green if the outcome contains two boys; and yellow if it does satisfy the condition, but does not have two boys. Every cell in a row has the same older child, and every cell in a column has the same younger child. Here's the code to display a table
Step23: We can use this visualization tool to see that in Child Problem 1, there is one outcome with two boys (green) out of a total of two outcomes where the older is a boy (green and yellow) so the probability of two boys given that the older is a boy is 1/2.
Step24: For Child Problem 2, we see the probability of two boys (green) given at least one boy (green and yellow) is 1/3.
Step25: The answer is still 1/3 when we consider the day of the week of each birth.
Step26: Now for the paradox of Child Problem 3
Step27: We see there are 27 relevant outcomes, of which 13 are green. So 13/27 really does seem to be the right answer. This picture also gives us a way to think about why the answer is not 1/3. Think of the yellow-plus-green area as a horizontal stripe and a vertical stripe, with an overlap. Each stripe is half yellow and half green, so if there were no overlap at all, the probability of green would be 1/2. When each stripe takes up half the sample space and the overlap is maximal, the probability is 1/3. And in the Problem 3 table, where the overlap is small, the probability is close to 1/2 (but slightly smaller).
One way to look at it is that if I tell you very specific information (such as a boy born on Tuesday), it is unlikely that this applies to both children, so we have smaller overlap and a probability closer to 1/2, but if I give you broad information (a boy), this is more likely to apply to either child, resulting in a larger overlap, and a probability closer to 1/3.
You can read some more discussions of the problem by (in alphabetical order)
Alex Bellos,
Alexander Bogomolny,
Andrew Gelman,
David Bigelow,
Julie Rehmeyer,
Keith Devlin,
Peter Lynch,
Tanya Khovanova,
and
Wendy Taylor & Kaye Stacey.
The Sleeping Beauty Paradox
The Sleeping Beauty Paradox is another tricky one
Step28: At this point, you're probably expecting me to define predicates, like this
Step29: Now we can get the answer
Step30: Note
Step31: But that seems like the wrong question; we want the probability of heads given that Sleeping Beauty was interviewed, not the unconditional probability of heads.
The "halfers" argue that before Sleeping Beauty goes to sleep, her unconditional probability for heads should be 1/2. When she is interviewed, she doesn't know anything more than before she went to sleep, so nothing has changed, so the probability of heads should still be 1/2. I find two flaws with this argument. First, if you want to convince me, show me a sample space; don't just make philosophical arguments. (Although a philosophical argument can be employed to help you define the right sample space.) Second, while I agree that before she goes to sleep, Beauty's unconditional probability for heads should be 1/2, I would say that both before she goes to sleep and when she is awakened, her conditional probability of heads given that she is being interviewed should be 1/3, as shown by the sample space.
The Monty Hall Paradox
This is one of the most famous probability paradoxes. It can be stated as follows
Step32: Now, assuming the contestant picks door 1 and the host opens door 3, we can ask
Step33: We see that the strategy of switching from door 1 to door 2 will win the car 2/3 of the time, whereas the strategy of sticking with the original pick wins the car only 1/3 of the time. So if you like cars more than goats, you should switch. But don't feel bad if you got this one wrong; it turns out that Monty Hall himself, who opened many doors while hosting Let's Make a Deal for 13 years, didn't know the answer either, as revealed in this letter from Monty to Prof. Lawrence Denenberg, when Denenberg asked for permission to use the problem in his textbook
Step34: And we can calculate the probability of the car being behind each door, given that the contestant picks door 1 and the host opens door 3 to reveal a goat
Step36: So we see that under this interpretation it doesn't matter if you switch or not.
Is this a valid interpretation? I agree that the wording of the problem can be seen as being ambiguous. However, this interpretation has a serious problem
Step37: We can confirm that the contestant wins about 2/3 of the time with the switch strategy, and only wins about 1/3 of the time with the stick strategy
Step38: Reasoning with Probability Distributions
So far, we have made the assumption that every outcome in a sample space is equally likely. In real life, the probability of a child being a girl is not exactly 1/2. As mentioned in the previous notebook, an article gives the following counts for two-child families in Denmark
Step39: Now let's try the first two Child Problems with the probability distribution DK. Since boys are slightly more probable than girls, we expect a little over 1/2 for Problem 1, and a little over 1/3 for problem 2
Step40: It all looks good. Now let's leave Denmark behind and try a new problem
Step41: Let's check out these last two probability distributions
Step42: Now we can solve the problem. Since "boy born on a leap day" applies to so few children, we expect the probability of two boys to be just ever so slightly below the baseline rate for boys, 51.5%.
Step43: The St. Petersburg Paradox
The St. Petersburg paradox from 1713, named for the home town of the Bernoullis, and introduced by Daniel Bernoulli, the nephew of Jacob Bernoulli (the urn guy)
Step44: Let's try with the casino limited to 100 million dollars
Step45: Now we define the function EV to compute the expected value of a probability distribution
Step46: This says that for a casino with a bankroll of 100 million dollars, if you want to maximize your expected value, you should be willing to pay up to \$27.49 to play the game. Would you pay that much? I wouldn't, and neither would Daniel Bernoulli.
Response 2
Step47: A table and a plot will give a feel for the util function. Notice the characteristic concave-down shape of the plot.
Step48: Now I will define the function EU, which computes the expected utility of the game
Step49: That says we should pay up to \$13.10 to play the game, which sounds more reasonable than \$27.49.
Understanding St. Petersburg through Simulation
Before I plunk down my \$13, I'd like to understand the game better. I'll write a simulation of the game
Step50: I will run the simulation 100,000 times (with a random seed specified for reproducibility) and make the results into a probability distribution
Step51: The results are about what you would expect
Step52: These are not too far off from the theoretical values.
To see better how things unfold, I will define a function to plot the running average of repeated rounds
Step53: Let's do ten repetitions of plotting the running averages of 100,000 rounds
Step54: What can we see from this? Nine of the 10 repetitions have a final expected value payoff (after 100,000 rounds) between 10 and 35. So a price around \$13 still seems reasonable. One outlier has an average payoff just over 100, so if you are feeling lucky you might be willing to pay more than \$13.
The Ellsberg Paradox
The Ellsberg Paradox has it all
Step55: We see that for any number of black balls up to 33, the solid red line is above the solid black line, which means R is better than B. The two gambles are equal with 33 black balls, and from there on, B is better than R.
Similarly, up to 33 black balls, the dashed red line is above the dashed black line, so RY is better than BY. They are equal at 33, and after that, BY is better than RY. So in summary, R > B if and only if RY > BY.
It is pretty clear that this holds for every possible mix of black and yellow balls, taken one at a time. But what if you believe that the mix might be one of several possibilities? We'll define avgscore to give the score for a gamble (as specified by the colors in it), averaged over a collection of possible urns, each with a different black/yellow mix. Then we'll define compare to compare the four gambles on the collection of possible urns
Step56: The above says that if you think any number of black balls is possible and they are all equally likely, then you should slightly prefer B > R and BY > RY.
Now imagine that for some reason you believe that any mix is possible, but that a majority of black balls is more likely (i.e. the urns in the second half of the list of urns are twice as likely as those in the first half). Then we will see that the same preferences hold, but more strongly
Step57: If we believe the first half of the list (with fewer black balls) is twice as likely, we get this
Step58: This time the preferences are reversed for both gambles, R > B and RY > BY.
Now let's try another approach. Imagine there are two urns, each as described before, and the ball will be drawn from one or the other. We will plot the expected value of each of the four gambles, over all possible pairs of two different urns (sorted by the number of black balls in the pair)
Step59: The curves are different, but the results are the same
Step60: Let's compare probabilities of success
Step61: We see that for small stones, A is better, 93% to 87%, and for large stones, A is also better, 75% to 69%. So A is better no matter what, right?
Not so fast.
We can add up Counters to get the overall success rate for A and B, over all cases
Step62: Overall, B is more successful, 83% to 78%, even though A is better in both cases. So if you had kidney stones, and you want the highest chance of success, which treatment would you prefer? If you knew you had small stones (or large stones), the evidence supports A. But if the size was unknown, does that mean you should prefer B? Analysts agree that the answer is no, you should stick with A. The only reason why B has a higher overall success rate is that doctors choose to do B more often on the easier, small stone cases, and reserve A for the harder, large stone cases. A is better, but it has a lower overall percentage because it is given the difficult patients.
Here's another example, showing the batting averages for two baseball players, Derek Jeter and David Justice, for the years 1995 and 1996
Step63: So Justice had a higher batting average than Jeter for both 1995 and 1996. Let's check overall | Python Code:
from fractions import Fraction
class ProbDist(dict):
"A Probability Distribution; an {outcome: probability} mapping."
def __init__(self, mapping=(), **kwargs):
self.update(mapping, **kwargs)
# Make probabilities sum to 1.0; assert no negative probabilities
total = sum(self.values())
for outcome in self:
self[outcome] = self[outcome] / total
assert self[outcome] >= 0
def P(event, space):
"""The probability of an event, given a sample space of equiprobable outcomes.
event: a collection of outcomes, or a predicate that is true of outcomes in the event.
space: a set of outcomes or a probability distribution of {outcome: frequency}."""
if is_predicate(event):
event = such_that(event, space)
if isinstance(space, ProbDist):
return sum(space[o] for o in space if o in event)
else:
return Fraction(len(event & space), len(space))
def such_that(predicate, space):
"""The outcomes in the sample space for which the predicate is true.
If space is a set, return a subset {outcome,...};
if space is a ProbDist, return a ProbDist {outcome: frequency,...};
in both cases only with outcomes where predicate(element) is true."""
if isinstance(space, ProbDist):
return ProbDist({o:space[o] for o in space if predicate(o)})
else:
return {o for o in space if predicate(o)}
is_predicate = callable
def cross(A, B):
"The set of ways of concatenating one item from collection A with one from B."
return {a + b
for a in A for b in B}
def joint(A, B, sep=''):
"""The joint distribution of two independent probability distributions.
Result is all entries of the form {a+sep+b: P(a)*P(b)}"""
return ProbDist({a + sep + b: A[a] * B[b]
for a in A
for b in B})
Explanation: <div style="text-align: right">Peter Norvig, 3 Oct 2015, revised Oct-Feb 2016</div>
Probability, Paradox, and the Reasonable Person Principle
In another notebook, I introduced the basics of probability theory. I'll duplicate the code we developed there:
End of explanation
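As a quick smoke test of these helpers before the paradoxes (an illustrative example added here, not from the original notebook), a two-coin-flip space behaves as expected:
# Smoke test: cross builds the space, P handles both set events and predicate events
coins = cross('HT', 'HT')                  # {'HH', 'HT', 'TH', 'TT'}
P({'HH'}, coins)                           # Fraction(1, 4)
P(lambda outcome: 'H' in outcome, coins)   # Fraction(3, 4)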
S = {'BG', 'BB', 'GB', 'GG'}
Explanation: In this notebook we use this code to show how to solve some particularly perplexing paradoxical probability problems.
Child Paradoxes
In 1959, Martin Gardner posed these two problems:
Child Problem 1. Mr. Jones has two children. The older child is a boy. What is the
probability that both children are boys?
Child Problem 2. Mr. Smith has two children. At least one of them is a boy. What is
the probability that both children are boys?
Then in 2006, Mike & Tom Starbird came up with a variant, which Gary Foshee introduced to Gardner fans in 2010:
Child Problem 3. I have two children. At least one of them is a boy born on Tuesday. What is
the probability that both children are boys?
Problems 2 and 3 are considered paradoxes because they have surprising answers that people
argue about.
(Assume the probability of a boy is exactly 1/2, and is independent of any siblings.)
<center>Martin Gardner</center>
Child Problem 1: Older child is a boy. What is the probability both are boys?
We use 'BG' to denote the outcome in which the older child is a boy and the younger a girl. The sample space, S, of equi-probable outcomes is:
End of explanation
def two_boys(outcome): return outcome.count('B') == 2
def older_is_a_boy(outcome): return outcome.startswith('B')
Explanation: Let's define predicates for the conditions of having two boys, and of the older child being a boy:
End of explanation
P(two_boys, such_that(older_is_a_boy, S))
Explanation: Now we can answer Problem 1:
End of explanation
def at_least_one_boy(outcome): return 'B' in outcome
P(two_boys, such_that(at_least_one_boy, S))
Explanation: You're probably thinking that was a lot of mechanism just to get the obvious answer. But in the next problems, what is obvious becomes less obvious.
Child Problem 2: At least one is a boy. What is the probability both are boys?
Implementing this problem and finding the answer is easy:
End of explanation
such_that(at_least_one_boy, S)
Explanation: Understanding the answer is tougher. Some people think the answer should be 1/2. Can we justify the answer 1/3? We can see there are three equiprobable outcomes in which there is at least one boy:
End of explanation
S2b = {'BB/b?', 'BB/?b',
'BG/b?', 'BG/?g',
'GB/g?', 'GB/?b',
'GG/g?', 'GG/?g'}
Explanation: Of those three outcomes, only one has two boys, so the answer of 1/3 is indeed justified.
But some people still think the answer should be 1/2.
Their reasoning is "If one child is a boy, then there are two equiprobable outcomes for the other child, so the probability that the other child is a boy, and thus that there are two boys, is 1/2."
When two methods of reasoning give two different answers, we have a paradox. Here are three responses to a paradox:
The very fundamentals of mathematics must be incomplete, and this problem reveals it!!!
I'm right, and anyone who disagrees with me is an idiot!!!
I have the right answer for one interpretation of the problem, and you have the right answer
for a different interpretation of the problem.
If you're Bertrand Russell or Georg Cantor, you might very well uncover a fundamental flaw in mathematics; for the rest of us, I recommend Response 3. When I believe the answer is 1/3, and I hear someone say the answer is 1/2, my response is not "You're wrong!", rather it is "How interesting! You must have a different interpretation of the problem; I should try to discover what your interpretation is, and why your answer is correct for your interpretation." The first step is to be more precise in my wording of the experiment:
Child Experiment 2a. Mr. Smith is chosen at random from families with two children. He is asked if at least one of his children is a boy. He replies "yes."
The next step is to envision another possible interpretation of the experiment:
Child Experiment 2b. Mr. Smith is chosen at random from families with two children. He is observed at a time when he is accompanied by one of his children, chosen at random. The child is observed to be a boy.
Experiment 2b needs a different sample space, which we will call S2b. It consists of 8 outcomes, not just 4; for each of the 4 outcomes in S, we have a choice of observing either the older child or the younger child. We will use the notation 'GB/g?' to mean that the older child is a girl, the younger a boy, the older child was observed to be a girl, and the younger was not observed. The sample space is therefore:
End of explanation
def observed_boy(outcome): return 'b' in outcome
such_that(observed_boy, S2b)
Explanation: Now we can figure out the subset of this sample space in which we observe Mr. Smith with a boy:
End of explanation
P(two_boys, such_that(observed_boy, S2b))
Explanation: And finally we can determine the probability that he has two boys, given that we observed him with a boy:
End of explanation
sexesdays = cross('BG', '1234567')
S3 = cross(sexesdays, sexesdays)
len(S3)
Explanation: The paradox is resolved. Two reasonable people can have different interpretations of the problem, and can each reason flawlessly to reach different conclusions, 1/3 or 1/2.
Which interpretation of the problem is "better?" We could debate that, or we could just agree to use unambiguous wording (that is, use the language of Experiment 2a or Experiment 2b, not the ambiguous language of Problem 2).
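A quick simulation makes the two interpretations concrete (a sketch added for illustration; it uses only the standard library):
import random
families = [random.choice(('BB', 'BG', 'GB', 'GG')) for _ in range(100000)]
# Experiment 2a: ask the parent "is at least one a boy?" and keep the families that answer yes
answered_yes = [f for f in families if 'B' in f]
print(sum(f == 'BB' for f in answered_yes) / len(answered_yes))   # close to 1/3
# Experiment 2b: observe one child at random and keep the cases where that child is a boy
boy_observed = [f for f in families if random.choice(f) == 'B']
print(sum(f == 'BB' for f in boy_observed) / len(boy_observed))   # close to 1/2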
The Reasonable Person Principle
It is an unfortunate fact of human nature that we often assume the other person is an idiot. As George Carlin puts it "Have you ever noticed when you're driving that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?"
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/2e/Jesus_is_coming.._Look_Busy_%28George_Carlin%29.jpg/192px-Jesus_is_coming.._Look_Busy_%28George_Carlin%29.jpg">
<center>George Carlin</center>
The opposite assumption, that other people are more likely to be reasonable than idiots, is known as the reasonable person principle. It is a guiding principle at Carnegie Mellon University's School of Computer Science, and is a principle I try to live by as well.
Now let's return to an even more paradoxical problem.
Child Problem 3. One is a boy born on Tuesday. What's the probability both are boys?
Most people can not imagine how the boy's birth-day-of-week could be relevant, and feel the answer should be the same as Problem 2. But to be sure, we need to clearly describe the experiment, define the sample space, and calculate. First:
Child Experiment 3a. A parent is chosen at random from families with two children. She is asked if at least one of her children is a boy born on Tuesday. She replies "yes."
Next we'll define a sample space. We'll use the notation "G1B3" to mean the older child is a girl born on the first day of the week (Sunday) and the younger a boy born on the third day of the week (Tuesday). We'll call the resulting sample space S3.
End of explanation
import random
random.sample(S3, 8)
Explanation: That's too many to print, but we can sample them:
End of explanation
P(at_least_one_boy, S3)
P(at_least_one_boy, S)
Explanation: We determine below that the probability of having at least one boy is 3/4, both in S3 (where we keep track of the birth day of week) and in S (where we don't):
End of explanation
P(two_boys, S3)
P(two_boys, S)
Explanation: The probability of two boys is 1/4 in either sample space:
End of explanation
P(two_boys, such_that(at_least_one_boy, S3))
P(two_boys, such_that(at_least_one_boy, S))
Explanation: And the probability of two boys given at least one boy is 1/3 in either sample space:
End of explanation
def at_least_one_boy_tues(outcome): return 'B3' in outcome
Explanation: We will define a predicate for the event of at least one boy born on Tuesday:
End of explanation
P(two_boys, such_that(at_least_one_boy_tues, S3))
Explanation: We are now ready to answer Problem 3:
End of explanation
def observed_boy_tues(outcome): return 'b3' in outcome
S3b = {children + '/' + observation
for children in S3
for observation in (children[:2].lower()+'??', '??'+children[-2:].lower())}
random.sample(S3b, 5)
Explanation: 13/27?
How many saw that coming? 13/27 is quite different from 1/3, but rather close to 1/2. So "at least one boy born on Tuesday" is quite different from "at least one boy." Are you surprised? Do you accept the answer, or do you think we did something wrong? Are there other interpretations of the experiment that lead to other answers?
Here is one alternative interpretation:
Child Experiment 3b. A parent is chosen at random from families with two children. She is observed at a time when she is accompanied by one of her children, chosen at random. The child is observed to be a boy who reports that his birth day is Tuesday.
We can represent outcomes in this sample space with the notation G1B3/??b3, meaning the older child is a girl born on Sunday, the younger a boy born on Tuesday, the older was not observed, and the younger was.
End of explanation
P(two_boys, such_that(observed_boy_tues, S3b))
Explanation: Now we can answer this version of Child Problem 3:
End of explanation
from IPython.display import HTML
def Pgrid(space, n, event, condition):
"""Display sample space in a grid, color-coded: green if event and condition is true;
yellow if only condition is true; white otherwise."""
# n is the number of characters that make up the older child.
olders = sorted(set(outcome[:n] for outcome in space))
return HTML('<table>' +
cat(row(older, space, event, condition) for older in olders) +
'</table>' +
'<tt>P({} | {}) = {}</tt>'.format(
event.__name__, condition.__name__,
P(event, such_that(condition, space))))
def row(older, space, event, condition):
"Display a row where an older child is paired with each of the possible younger children."
thisrow = sorted(outcome for outcome in space if outcome.startswith(older))
return '<tr>' + cat(cell(outcome, event, condition) for outcome in thisrow) + '</tr>'
def cell(outcome, event, condition):
"Display outcome in appropriate color."
color = ('lightgreen' if event(outcome) and condition(outcome) else
'yellow' if condition(outcome) else
'white')
return '<td style="background-color: {}">{}</td>'.format(color, outcome)
cat = ''.join
Explanation: So with the wording of Child Experiment 3b, the answer is the same as 2b.
Still confused? Let's build a visualization tool to make things more concrete.
Visualization
We'll display the results as a two dimensional grid of outcomes. An outcome will be colored white if it does not satisfy the condition stated in the problem; green if the outcome contains two boys; and yellow if it does satisfy the condition, but does not have two boys. Every cell in a row has the same older child, and every cell in a column has the same younger child. Here's the code to display a table:
End of explanation
# Child Problem 1
Pgrid(S, 1, two_boys, older_is_a_boy)
Explanation: We can use this visualization tool to see that in Child Problem 1, there is one outcome with two boys (green) out of a total of two outcomes where the older is a boy (green and yellow) so the probability of two boys given that the older is a boy is 1/2.
End of explanation
# Child Problem 2
Pgrid(S, 1, two_boys, at_least_one_boy)
Explanation: For Child Problem 2, we see the probability of two boys (green) given at least one boy (green and yellow) is 1/3.
End of explanation
# Child Problem 2, with days of week enumerated
Pgrid(S3, 2, two_boys, at_least_one_boy)
Explanation: The answer is still 1/3 when we consider the day of the week of each birth.
End of explanation
# Child Problem 3
Pgrid(S3, 2, two_boys, at_least_one_boy_tues)
Explanation: Now for the paradox of Child Problem 3:
End of explanation
B = {'heads/Monday/interviewed', 'heads/Tuesday/sleep',
'tails/Monday/interviewed', 'tails/Tuesday/interviewed'}
Explanation: We see there are 27 relevant outcomes, of which 13 are green. So 13/27 really does seem to be the right answer. This picture also gives us a way to think about why the answer is not 1/3. Think of the yellow-plus-green area as a horizontal stripe and a vertical stripe, with an overlap. Each stripe is half yellow and half green, so if there were no overlap at all, the probability of green would be 1/2. When each stripe takes up half the sample space and the overlap is maximal, the probability is 1/3. And in the Problem 3 table, where the overlap is small, the probability is close to 1/2 (but slightly smaller).
One way to look at it is that if I tell you very specific information (such as a boy born on Tuesday), it is unlikely that this applies to both children, so we have smaller overlap and a probability closer to 1/2, but if I give you broad information (a boy), this is more likely to apply to either child, resulting in a larger overlap, and a probability closer to 1/3.
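If the counting argument still feels suspicious, a quick Monte Carlo check (an added sketch, standard library only) lands on the same number:
import random
def random_child(): return random.choice('BG') + random.choice('1234567')
pairs = [random_child() + random_child() for _ in range(200000)]
tuesday_boy_families = [p for p in pairs if 'B3' in p]
print(sum(p.count('B') == 2 for p in tuesday_boy_families) / len(tuesday_boy_families))  # about 13/27 = 0.481...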
You can read some more discussions of the problem by (in alphabetical order)
Alex Bellos,
Alexander Bogomolny,
Andrew Gelman,
David Bigelow,
Julie Rehmeyer,
Keith Devlin,
Peter Lynch,
Tanya Khovanova,
and
Wendy Taylor & Kaye Stacey.
The Sleeping Beauty Paradox
The Sleeping Beauty Paradox is another tricky one:
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Then a fair coin will be tossed,
to determine which experimental procedure to undertake:
- Heads: Beauty will be awakened and interviewed on Monday only.
- Tails: Beauty will be awakened and interviewed on Monday and Tuesday only.
In all cases she is put back to sleep with an amnesia-inducing drug that makes her forget that awakening and sleep until the next one. In any case, she will be awakened on Wednesday without interview and the experiment ends. Any time Beauty is awakened and interviewed, she is asked, "What is your belief now for the proposition that the coin landed heads?"
What should Sleeping Beauty say when she is interviewed? First, she should define the sample space. She could use the notation 'heads/Monday/interviewed' to mean the outcome where the coin flip was heads, it is Monday, and she is interviewed. So there are 4 equiprobable outcomes:
End of explanation
def T(property):
"Return a predicate that is true of all outcomes that have 'property' as a substring."
return lambda outcome: property in outcome
Explanation: At this point, you're probably expecting me to define predicates, like this:
def heads(outcome): return 'heads' in outcome
def interviewed(outcome): return 'interviewed' in outcome
We've seen a lot of predicates like this. I think it is time to heed the "don't repeat yourself" principle, so I will define a predicate-defining function, T. Think of "T" for "it is true that":
End of explanation
heads = T("heads")
interviewed = T("interviewed")
P(heads, such_that(interviewed, B))
Explanation: Now we can get the answer:
End of explanation
P(heads, B)
Explanation: Note: I could have done that in one line: P(T("heads"), such_that(T("interviewed"), B))
This problem is considered a paradox because there are people who argue that the answer should be 1/2, not 1/3. I admit I'm having difficulty coming up with a sample space that supports the "halfer" position.
I do know of a question that has the answer 1/2:
End of explanation
M = {'Car1/Lo/Pick1/Open2', 'Car1/Hi/Pick1/Open3',
'Car2/Lo/Pick1/Open3', 'Car2/Hi/Pick1/Open3',
'Car3/Lo/Pick1/Open2', 'Car3/Hi/Pick1/Open2'}
Explanation: But that seems like the wrong question; we want the probability of heads given that Sleeping Beauty was interviewed, not the unconditional probability of heads.
The "halfers" argue that before Sleeping Beauty goes to sleep, her unconditional probability for heads should be 1/2. When she is interviewed, she doesn't know anything more than before she went to sleep, so nothing has changed, so the probability of heads should still be 1/2. I find two flaws with this argument. First, if you want to convince me, show me a sample space; don't just make philosophical arguments. (Although a philosophical argument can be employed to help you define the right sample space.) Second, while I agree that before she goes to sleep, Beauty's unconditional probability for heads should be 1/2, I would say that both before she goes to sleep and when she is awakened, her conditional probability of heads given that she is being interviewed should be 1/3, as shown by the sample space.
The Monty Hall Paradox
This is one of the most famous probability paradoxes. It can be stated as follows:
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to switch your choice to door No. 2?" Is it to your advantage to switch your choice?
<img src="http://retrothing.typepad.com/.a/6a00d83452989a69e20120a4cb10a2970b-800wi">
<center>Monty Hall</center>
Much has been written about this problem, but to solve it all we have to do is be careful about how we understand the problem, and about defining our sample space. I will define outcomes of the form 'Car1/Lo/Pick1/Open2', which means:
* Car1: The producers of the show randomly placed the car behind door 1.
* Lo: The host randomly commits to the strategy of opening the lowest-numbered allowable door. A door is allowable if it does not contain the car and was not picked by the contestant. Alternatively, the host could have chosen to open the highest-numbered allowable door (Hi).
* Pick1: The contestant picks door 1. Our sample space will only consider cases where the contestant picks door 1, but by symmetry, the same arguments could be used if the contestant picked door 2 or 3.
* Open2: After hearing the contestant's choice, and following the strategy, the host opens a door; in this case door 2.
We can see that the sample space has 6 equiprobable outcomes involving Pick1:
End of explanation
such_that(T("Open3"), M)
P(T("Car1"), such_that(T("Open3"), M))
P(T("Car2"), such_that(T("Open3"), M))
Explanation: Now, assuming the contestant picks door 1 and the host opens door 3, we can ask:
- What are the possible outcomes remaining?
- What is the probability that the car is behind door 1?
- Or door 2?
End of explanation
M2 = {'Car1/Pick1/Open2/Goat', 'Car1/Pick1/Open3/Goat',
'Car2/Pick1/Open2/Car', 'Car2/Pick1/Open3/Goat',
'Car3/Pick1/Open2/Goat', 'Car3/Pick1/Open3/Car'}
Explanation: We see that the strategy of switching from door 1 to door 2 will win the car 2/3 of the time, whereas the strategy of sticking with the original pick wins the car only 1/3 of the time. So if you like cars more than goats, you should switch. But don't feel bad if you got this one wrong; it turns out that Monty Hall himself, who opened many doors while hosting Let's Make a Deal for 13 years, didn't know the answer either, as revealed in this letter from Monty to Prof. Lawrence Denenberg, when Denenberg asked for permission to use the problem in his textbook:
<img src="http://norvig.com/monty-hall-letter.jpg">
If you were Denenberg, how would you answer Monty, in non-mathematical terms? I would try something like this:
When the contestant makes her initial pick, she has 1/3 chance of picking the car, and there is a 2/3 chance the car is behind one of the other doors. That's still true after you open a door, but now the 2/3 chance for either other door becomes concentrated as 2/3 behind one other door, so the contestant should switch.
But that type of argument was not persuasive to everyone. Marilyn vos Savant reports that many of her readers (including, she is pleased to point out, many Ph.D.s) still insist the answer is that it doesn't matter if the contestant switches; the odds are 1/2 either way. Let's try to discover what problem and what sample space those people are dealing with. Perhaps they are reasoning like this:
They define outcomes of the form 'Car1/Pick1/Open2/Goat', which means:
* Car1: First the car is randomly placed behind door 1.
* Pick1: The contestant picks door 1.
* Open2: The host opens one of the two other doors at random (so the host might open the door with the car).
* Goat: We observe there is a goat behind the opened door.
Under this interpretation, the sample space of all outcomes involving Pick1 is:
End of explanation
P(T("Car1"), such_that(T("Open3/Goat"), M2))
P(T("Car2"), such_that(T("Open3/Goat"), M2))
P(T("Car3"), such_that(T("Open3/Goat"), M2))
Explanation: And we can calculate the probability of the car being behind each door, given that the contestant picks door 1 and the host opens door 3 to reveal a goat:
End of explanation
import random
def monty(strategy):
"""Simulate this sequence of events:
1. The host randomly chooses a door for the 'car'
2. The contestant randomly makes a 'pick' of one of the doors
3. The host randomly selects a non-car, non-pick door to be 'opened.'
4. If strategy == 'switch', contestant changes 'pick' to the other unopened door
5. Return true if the pick is the door with the car."""
doors = (1, 2, 3)
car = random.choice(doors)
pick = random.choice(doors)
opened = random.choice([d for d in doors if d != car and d != pick])
if strategy == 'switch':
pick = next(d for d in doors if d != pick and d != opened)
return (pick == car)
Explanation: So we see that under this interpretation it doesn't matter if you switch or not.
Is this a valid interpretation? I agree that the wording of the problem can be seen as being ambiguous. However, this interpretation has a serious problem: in all the history of Let's Make a Deal, it was never the case that the host opened up a door with the grand prize. This strongly suggests (but does not prove) that M is the correct sample space, not M2.
Simulating the Monty Hall Problem
Some people might be more convinced by a simulation than by a probability argument. Here is code for a simulation:
End of explanation
from collections import Counter
Counter(monty('switch') for _ in range(10 ** 5))
Counter(monty('stick') for _ in range(10 ** 5))
Explanation: We can confirm that the contestant wins about 2/3 of the time with the switch strategy, and only wins about 1/3 of the time with the stick strategy:
End of explanation
DK = ProbDist(GG=121801, GB=126840,
BG=127123, BB=135138)
DK
Explanation: Reasoning with Probability Distributions
So far, we have made the assumption that every outcome in a sample space is equally likely. In real life, the probability of a child being a girl is not exactly 1/2. As mentioned in the previous notebook, an article gives the following counts for two-child families in Denmark:
GG: 121801 GB: 126840
BG: 127123 BB: 135138
Let's implement that:
End of explanation
# Child Problem 1 in DK
P(two_boys, such_that(older_is_a_boy, DK))
# Child Problem 2 in DK
P(two_boys, such_that(at_least_one_boy, DK))
Explanation: Now let's try the first two Child Problems with the probability distribution DK. Since boys are slightly more probable than girls, we expect a little over 1/2 for Problem 1, and a little over 1/3 for problem 2:
End of explanation
sexes = ProbDist(B=51.5, G=48.5) # Probability distribution over sexes
days = ProbDist(L=1, N=4*365) # Probability distribution over Leap days and Non-leap days
child = joint(sexes, days) # Probability distribution for one child family
S4 = joint(child, child) # Probability distribution for two-child family
Explanation: It all looks good. Now let's leave Denmark behind and try a new problem:
Child Problem 4. One is a boy born on Feb. 29. What is the probability both are boys?
Child Problem 4. I have two children. At least one of them is a boy born on leap day, February 29. What is the probability that both children are boys? Assume that 51.5% of births are boys and that birth days are distributed evenly across the 4×365 + 1 days in a 4-year cycle.
We will use the notation GLBN to mean an older girl born on leap day (L) and a younger boy born on a non-leap day (N).
End of explanation
child
S4
Explanation: Let's check out these last two probability distributions:
End of explanation
# Child Problem 4
boy_born_on_leap_day = T("BL")
P(two_boys, such_that(boy_born_on_leap_day, S4))
Explanation: Now we can solve the problem. Since "boy born on a leap day" applies to so few children, we expect the probability of two boys to be just ever so slightly below the baseline rate for boys, 51.5%.
End of explanation
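The same machinery shows how the answer slides from about 1/3 toward the baseline boy rate as the condition becomes more specific (a quick added comparison using the S4 distribution above):
# Broad condition vs. a very specific one, on the same two-child distribution
print(float(P(two_boys, such_that(T('B'), S4))))    # at least one boy: about 0.347
print(float(P(two_boys, such_that(T('BL'), S4))))   # at least one boy born on leap day: just under 0.515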
def st_pete(limit):
"Return the probability distribution for the St. Petersburg Paradox with a limited bank."
P = {} # The probability distribution
pot = 2 # Amount of money in the pot
pr = 1/2. # Probability that you end up with the amount in pot
while pot < limit:
P[pot] = pr
pot = pot * 2
pr = pr / 2
P[limit] = pr * 2 # pr * 2 because you get limit for heads or tails
return ProbDist(P)
Explanation: The St. Petersburg Paradox
The St. Petersburg paradox from 1713, named for the home town of the Bernoullis, and introduced by Daniel Bernoulli, the nephew of Jacob Bernoulli (the urn guy):
A casino offers a game of chance for a single player in which a fair coin is tossed at each stage. The pot starts at 2 dollars and is doubled every time a head appears. The first time a tail appears, the game ends and the player wins whatever is in the pot. Thus the player wins 2 dollars if a tail appears on the first toss, 4 dollars if a head appears on the first toss and a tail on the second, etc. What is the expected value of this game to the player?
To calculate the expected value, we see there is a 1/2 chance of a tail on the first toss (yielding a pot of \$2) and if not that, a 1/2 × 1/2 = 1/4 chance of a tail on the second toss (yielding a pot of \$4), and so on. So in total, the expected value is:
$$\frac{1}{2}\cdot 2 + \frac{1}{4}\cdot 4 + \frac{1}{8}\cdot 8 + \frac{1}{16} \cdot 16 + \cdots = 1 + 1 + 1 + 1 + \cdots = \infty$$
The expected value is infinite! But anyone playing the game would not expect to win an infinite amount; thus the paradox.
Response 1: Limited Resources
The first major response to the paradox is that the casino's resources are limited. Once you break their bank, they can't pay out any more, and thus the expected return is finite. Let's consider the case where the bank has a limit to their resources, and create a probability distribution for the problem. We keep doubling the pot and halving the probability of winning the amount in the pot (half because you get the pot on a tail but not a head), until we reach the limit.
End of explanation
StP = st_pete(limit=10**8)
StP
Explanation: Let's try with the casino limited to 100 million dollars:
End of explanation
def EV(P):
"The expected value of a probability distribution."
return sum(P[v] * v
for v in P)
EV(StP)
Explanation: Now we define the function EV to compute the expected value of a probability distribution:
End of explanation
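The finite answer grows only logarithmically with the casino's bankroll; a quick added check with the same two functions:
for limit in (10**4, 10**6, 10**8, 10**10):
    print(limit, float(EV(st_pete(limit))))
# Each factor of 100 in the bankroll adds only about log2(100) = 6.6 to the expected value.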
def util(dollars, enough=1000):
"The value of money: only half as valuable after you already have enough."
if dollars < enough:
return dollars
else:
additional = dollars-enough
return enough + util(additional / 2, enough * 2)
Explanation: This says that for a casino with a bankroll of 100 million dollars, if you want to maximize your expected value, you should be willing to pay up to \$27.49 to play the game. Would you pay that much? I wouldn't, and neither would Daniel Bernoulli.
Response 2: Value of Money
Daniel Bernoulli came up with a second response to the paradox based on the idea that if you have a lot of money, then additional money becomes less valuable to you. If I had nothing, and I won \$1000, I would be very happy. But if I already had a million dollars and I won \$1000, it would be less valuable. How much less valuable? Bernoulli proposed, and experiments confirm, that the value of money is roughly logarithmic. That is, rational bettors don't try to maximize their expected monetary value, they try to maximize their expected utility: the amount of "happiness" that the money is worth.
I'll write the function util to describe what a dollar amount is worth to a hypothetical gambler. util says that a dollar is worth a dollar, until the amount is "enough" money. After that point, each additional dollar is worth half as much (only brings half as much happiness). Value keeps accumulating at this rate until we reach the next threshold of "enough," when the utility of additional dollars is halved again. The exact details of util are not critical; what matters is that overall money becomes less valuable after we have won a lot of it.
End of explanation
for d in range(2, 10):
m = 10 ** d
print('{:15,d} $ = {:10,d} util'.format(m, int(util(m))))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([util(x) for x in range(1000, 10000000, 1000)])
print('Y axis is util(x); x axis is in thousands of dollars.')
Explanation: A table and a plot will give a feel for the util function. Notice the characteristic concave-down shape of the plot.
End of explanation
def EU(P, U):
"The expected utility of a probability distribution, given a utility function."
return sum(P[e] * U(e)
for e in P)
EU(StP, util)
Explanation: Now I will define the function EU, which computes the expected utility of the game:
End of explanation
def flip(): return random.choice(('head', 'tail'))
def simulate_st_pete(limit=10**9):
"Simulate one round of the St. Petersburg game, and return the payoff."
pot = 2
while flip() == 'head':
pot = pot * 2
if pot > limit:
return limit
return pot
Explanation: That says we should pay up to \$13.10 to play the game, which sounds more reasonable than \$27.49.
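Bernoulli's own resolution used a logarithmic utility. As a rough added check with the same EU function (and ignoring the gambler's existing wealth, which a fuller treatment would include):
import math
def log2_util(dollars): return math.log2(dollars)
EU(StP, log2_util)        # about 2 "log2-dollars"
2 ** EU(StP, log2_util)   # a certainty equivalent of roughly 4 dollars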
Understanding St. Petersburg through Simulation
Before I plunk down my \$13, I'd like to understand the game better. I'll write a simulation of the game:
End of explanation
random.seed(123456)
results = ProbDist(Counter(simulate_st_pete() for _ in range(100000)))
results
Explanation: I will run the simulation 100,000 times (with a random seed specified for reproducibility) and make the results into a probability distribution:
End of explanation
EU(results, util), EV(results)
Explanation: The results are about what you would expect: about half the pots are 2, a quarter are 4, an eighth are 8, and higher pots are more and more unlikely. Let's check expected utility and expected value:
End of explanation
def running_averages(iterable):
"For each element in the iterable, yield the mean of all elements seen so far."
total, n = 0, 0
for x in iterable:
total, n = total + x, n + 1
yield total / n
def plot_running_averages(fn, n):
"Plot the running average of calling the function n times."
plt.plot(list(running_averages(fn() for _ in range(n))))
Explanation: These are not too far off from the theoretical values.
To see better how things unfold, I will define a function to plot the running average of repeated rounds:
End of explanation
random.seed('running')
for i in range(10):
plot_running_averages(simulate_st_pete, 100000);
Explanation: Let's do ten repetitions of plotting the running averages of 100,000 rounds:
End of explanation
def ellsburg():
show('R', 'r')
show('B', 'k')
show('RY', 'r--')
show('BY', 'k--')
plt.xlabel('Number of black balls')
plt.ylabel('Expected value of each gamble')
blacks = list(range(68))
urns = [Counter(R=33, B=b, Y=67-b) for b in blacks]
def show(colors, line):
scores = [score(colors, urn) for urn in urns]
plt.plot(blacks, scores, line)
def score(colors, urn): return sum(urn[c] for c in colors)
ellsburg()
Explanation: What can we see from this? Nine of the 10 repetitions have a final expected value payoff (after 100,000 rounds) between 10 and 35. So a price around \$13 still seems reasonable. One outlier has an average payoff just over 100, so if you are feeling lucky you might be willing to pay more than \$13.
The Ellsberg Paradox
The Ellsberg Paradox has it all: an urn problem; a paradox; a conclusion that can only be resolved through psychology, not mathematics alone; and a colorful history with an inventor, Daniel Ellsberg, who went on to become the releaser of the Pentagon Papers. The paradox is as follows:
An urn contains 33 red balls and 66 other balls that are either black or yellow. You don't know the mix of black and yellow, just that they total 66. A single ball is drawn at random. You are given a choice between these two gambles:
- R: Win \$100 for a red ball.
- B: Win \$100 for a black ball.
You are also given a choice between these two gambles:
- RY: Win \$100 for a red or yellow ball.
- BY: Win \$100 for a black or yellow ball.
Many people reason as follows:
- R: I win 1/3 of the time
- B: I win somewhere between 0 and 2/3 of the time, but I'm not sure of the probability.
- RY: I win at least 1/3 of the time and maybe up to 100% of the time; I'm not sure.
- BY: I win 2/3 of the time.
- Overall, I prefer the relative certainty of R over B and of BY over RY.
The paradox is that, from an expected utility point of view, that reasoning is inconsistent, no matter what the mix of black and yellow balls is (or no matter what you believe the mix might be). RY and BY are just the same gambles as R and B, but with an additional \$100 for a yellow ball. So if you prefer R over B, you should prefer RY over BY (and if you prefer B over R you should prefer BY over RY), for any possible mix of black and yellow balls.
Let's demonstrate. For each possible number of black balls (on the x axis), we'll plot the expected value of each of the four gambles; R as a solid red line, B as a solid black line, RY as a dashed red line, and BY as a dashed black line:
End of explanation
def avgscore(colors, urns):
return sum(score(colors, urn) for urn in urns) / len(urns)
def compare(urns):
for colors in ('R', 'B', 'RY', 'BY'):
print(colors.ljust(2), avgscore(colors, urns))
compare(urns)
Explanation: We see that for any number of black balls up to 33, the solid red line is above the solid black line, which means R is better than B. The two gambles are equal with 33 black balls, and from there on, B is better than R.
Similarly, up to 33 black balls, the dashed red line is above the dashed black line, so RY is better than BY. They are equal at 33, and after that, BY is better than RY. So in summary, R > B if and only if RY > BY.
It is pretty clear that this holds for every possible mix of black and yellow balls, taken one at a time. But what if you believe that the mix might be one of several possibilities? We'll define avgscore to give the score for a gamble (as specified by the colors in it), averaged over a collection of possible urns, each with a different black/yellow mix. Then we'll define compare to compare the four gambles on the collection of possible urns:
End of explanation
compare(urns[:33] + 2 * urns[33:])
Explanation: The above says that if you think any number of black balls is possible and they are all equally likely, then you should slightly prefer B > R and BY > RY.
Now imagine that for some reason you believe that any mix is possible, but that a majority of black balls is more likely (i.e. the urns in the second half of the list of urns are twice as likely as those in the first half). Then we will see that the same preferences hold, but more strongly:
End of explanation
compare(2 * urns[:33] + urns[33:])
Explanation: If we believe the first half of the list (with fewer black balls) is twice as likely, we get this:
End of explanation
def ellsburg2():
show2('R', 'r')
show2('B', 'k')
show2('RY', 'r--')
show2('BY', 'k--')
plt.xlabel('Different combinations of two urns')
plt.ylabel('Expected value of each gamble')
def show2(colors, line):
urnpairs = [(u1, u2) for u1 in urns for u2 in urns]
urnpairs.sort(key=lambda urns: avgscore('B', urns))
X = list(range(len(urnpairs)))
plt.plot(X, [avgscore(colors, urns) for urns in urnpairs], line)
ellsburg2()
Explanation: This time the preferences are reversed for both gambles, R > B and RY > BY.
Now let's try another approach. Imagine there are two urns, each as described before, and the ball will be drawn from one or the other. We will plot the expected value of each of the four gambles, over all possible pairs of two different urns (sorted by the number of black balls in the pair):
End of explanation
# Good and bad outcomes for kidney stone treatments A and B,
# each in two cases: [small_stones, large_stones]
A = [Counter(good=81, bad=6), Counter(good=192, bad=71)]
B = [Counter(good=234, bad=36), Counter(good=55, bad=25)]
def success(case): return ProbDist(case)['good']
Explanation: The curves are different, but the results are the same: R > B if and only if RY > BY.
So why do many people prefer R > B and BY > RY? One explanation is risk aversion; it feels safer to take a definite 1/3 chance of winning, rather than a gamble that might be as good as 2/3, but might be as bad as 0. This is irrational thinking (in the sense that those who follow this strategy will win less), but people are sometimes irrational.
Simpson's Paradox
This has nothing to do with the TV show. D'oh! In 1951, statistician Edward Simpson (who worked with Alan Turing at Bletchley Park during World War II), noted that it is possible to take a sample space in which A is better than B, and split it into two groups, such that B is better than A in both groups.
For example, here is data from trials of two treatments for kidney stones, A and B, separated into two subgroups or cases: first, for small kidney stones, and second for large ones. In all cases we record the number of good and bad outcomes of the treatment:
End of explanation
[success(case) for case in A]
[success(case) for case in B]
Explanation: Let's compare probabilities of success:
End of explanation
success(A[0] + A[1])
success(B[0] + B[1])
Explanation: We see that for small stones, A is better, 93% to 87%, and for large stones, A is also better, 75% to 69%. So A is better no matter what, right?
Not so fast.
We can add up Counters to get the overall success rate for A and B, over all cases:
End of explanation
Jeter = [Counter(hit=12, out=36), Counter(hit=183, out=399)]
Justice = [Counter(hit=104, out=307), Counter(hit=45, out=95)]
def BA(case): "Batting average"; return ProbDist(case)['hit']
[BA(year) for year in Jeter]
[BA(year) for year in Justice]
Explanation: Overall, B is more successful, 83% to 78%, even though A is better in both cases. So if you had kidney stones, and you want the highest chance of success, which treatment would you prefer? If you knew you had small stones (or large stones), the evidence supports A. But if the size was unknown, does that mean you should prefer B? Analysts agree that the answer is no, you should stick with A. The only reason why B has a higher overall success rate is that doctors choose to do B more often on the easier, small stone cases, and reserve A for the harder, large stone cases. A is better, but it has a lower overall percentage because it is given the difficult patients.
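The case mix makes this concrete. Counting how many small-stone and large-stone patients each treatment received (reusing the A and B counters above):
print('A cases [small, large]:', [sum(case.values()) for case in A])   # [87, 263] -- mostly the hard cases
print('B cases [small, large]:', [sum(case.values()) for case in B])   # [270, 80] -- mostly the easy cases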
Here's another example, showing the batting averages for two baseball players, Derek Jeter and David Justice, for the years 1995 and 1996:
End of explanation
BA(Jeter[0] + Jeter[1])
BA(Justice[0] + Justice[1])
Explanation: So Justice had a higher batting average than Jeter for both 1995 and 1996. Let's check overall:
End of explanation |
3,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Climate Projections
https
Step1: Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!
Step2: Getting available locations and making dropdown list to choose desired one
Step3: Dropdown where you can choose a model you want to use. Also, in addition to models, you can choose between quantiles 5-75.
Step4: Getting available scenarios of the model and reading them into pandas dataframes.
Step5: Making a plot of Monthly Temperatures from chosen location and model.
Step6: Plot precipitation data
Step7: Make graph about quantiles.
Commented out right now as quantiles are work on progress with subarea data. | Python Code:
import json
import pandas as pd
from urllib.request import urlopen
from urllib.parse import quote
import plotly.graph_objects as go
import plotly.express as px
import ipywidgets as widgets
from IPython.display import display
import numpy as np
Explanation: Climate Projections
https://data.planetos.com/datasets/climate_risk_monthly_version1
This notebook shows how to fetch Climate Risk data from the Planet OS Datahub and make simple plots from the raw data.
Note that this dataset is the first version of climate risk data. Right now, data is at the country level, but we plan to add county-level data for some countries as well. Please let us know your requirements and what you would like to see in the future.
End of explanation
apikey = open('APIKEY').read()
server = 'https://api.planetos.com/v1'
dataset = 'climate_risk_monthly_v1'
def get_data(server,dataset,model,scenarios,location,apikey):
if " " in location:
location = location.replace(' ', '%20')
if type(scenarios) == list:
scenarios = ','.join(scenarios)
url = f'{server}/data/dataset_physical_values/{dataset}?node:station={quote(location)}&classifier:scenario={scenarios}&classifier:model={model}&count=100000&apikey={apikey}'
data = json.loads(urlopen(url).read())
return data
def data_to_pd(data,scenarios):
variables = data['entries'][0]['data'].keys()
data_out = {}
for scenario in scenarios:
#print (scenario)
scenario_data = {}
for var in variables:
vardata = [t['data'][var] for t in data['entries'] if t['classifiers']['classifier:scenario'] == scenario]
time = [pd.to_datetime(t['axes']['time']) for t in data['entries'] if t['classifiers']['classifier:scenario'] == scenario]
if not 'time' in scenario_data:
scenario_data['time'] = time
scenario_data[var] = vardata
data_pd = pd.DataFrame(scenario_data)
data_out[scenario] = data_pd
return data_out
def get_available_model_scenarios(server,dataset,model,location):
if " " in location:
location = location.replace(' ', '%20')
url = f'{server}/data/dataset_physical_values/{dataset}?node:station={quote(location)}&classifier:model={model}&count=1&apikey={apikey}'
#print (url)
data = json.loads(urlopen(url).read())
scenarios = [s['classifiers']['classifier:scenario'] for s in data['entries']]
return scenarios
def get_available_locations(server,dataset):
url = f'{server}/datasets/{dataset}/stations?apikey={apikey}'
data = json.loads(urlopen(url).read())
return list(data['station'].keys())
def get_available_models(server, dataset, location):
if " " in location:
location = location.replace(' ', '%20')
url = f'{server}/data/dataset_physical_values/{dataset}?node:station={quote(location)}&count=1&apikey={apikey}'
data = json.loads(urlopen(url).read())
models = np.unique([m['classifiers']['classifier:model'] for m in data['entries']])
return models
Explanation: Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!
End of explanation
locations = get_available_locations(server,dataset)
drop_down = widgets.Dropdown(
options=sorted(locations),
description='Available locations:',
disabled=False
)
display(drop_down)
Explanation: Getting available locations and making a dropdown list to choose the desired one
End of explanation
location = drop_down.value
print ('Getting models for location: ', location)
models = get_available_models(server, dataset, location)
drop_down2 = widgets.Dropdown(
options=models,
description='Available models:',
disabled=False
)
display(drop_down2)
for mo in models:
scenarios = get_available_model_scenarios(server,dataset,mo,location)
print (mo, scenarios)
Explanation: A dropdown where you can choose the model you want to use. In addition to models, you can also choose between quantiles 5-75.
End of explanation
model = drop_down2.value
print (model)
scenarios = get_available_model_scenarios(server,dataset,model,location)
print (scenarios)
data = get_data(server,dataset,model,scenarios,location,apikey)
scenarios_data_pd_list = data_to_pd(data,scenarios)
Explanation: Getting available scenarios of the model and reading them into pandas dataframes.
End of explanation
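Once the data is in pandas, it is easy to inspect or aggregate a scenario before plotting. For example (illustrative snippet, assuming at least one scenario was returned):
# Peek at the first scenario and compute annual means of the monthly values.
first_scenario = scenarios[0]
df = scenarios_data_pd_list[first_scenario]
print(df.head())
print(df.set_index('time').resample('1Y').mean())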
fig = go.Figure()
for scenario in scenarios:
data_pd = scenarios_data_pd_list[scenario]
fig.add_traces(go.Scatter(
x=data_pd['time'],
y=data_pd['tas'].values,
mode='lines',
name = f'{location} {scenario}',
legendgroup = scenario
))
fig.update_layout(title = f'{location} Monthly Temperature [C]')
fig.show()
Explanation: Making a plot of Monthly Temperatures from chosen location and model.
End of explanation
fig = go.Figure()
for scenario in scenarios:
data_pd = scenarios_data_pd_list[scenario]
fig.add_traces(go.Scatter(
x=data_pd.time,
y=data_pd['pr'],
mode='lines',
name = f'{location} {scenario}',
legendgroup = scenario
))
fig.update_layout(title = f'{location} Monthly Precipitation [kg m-2 s-1]')
fig.show()
Explanation: Plot precipitation data
End of explanation
# quantile_keys = [f for f in models if 'quantile' in f]
# scenarios = ['historical']#get_available_model_scenarios(server,dataset,model,location)
# var = 'tas'
# for sc in scenarios:
# q_data = {}
# for q_key in sorted(quantile_keys):
# data = get_data(server,dataset,q_key,sc,location,apikey)
# scenarios_data_pd_list = data_to_pd(data,[sc])
# q_data[q_key] = scenarios_data_pd_list[sc][var].values
# if not 'time' in q_data:
# q_data['time'] = scenarios_data_pd_list[sc]['time'].values
# q_data_pd = pd.DataFrame(q_data)
# q_data_pd = q_data_pd.set_index('time')
# q_data_resampled = q_data_pd.resample('10Y').mean().transpose()
# fig = px.box(q_data_resampled)
# fig.update_layout(title = f'{location} {var} Quantiles Scenario: {sc}')
# fig.show()
Explanation: Make a graph of the quantiles.
Commented out right now, as quantiles are a work in progress with subarea data.
End of explanation |
3,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Plotting HYCOM Global Ocean Forecast Data
Note
Step2: Let's choose a location near Oahu, Hawaii...
Step3: Important! You'll need to replace apikey below with your actual Planet OS API key, which you'll find on the Planet OS account settings page.
Step4: Show the available variables and their contexts...
Step5: Now let's extract data for all variables and create a different plot for each... | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import dateutil.parser
import datetime
from urllib.request import urlopen, Request
import simplejson as json
def extract_reference_time(API_data_loc):
Find reference time that corresponds to most complete forecast. Should be the earliest value.
reftimes = set()
for i in API_data_loc['entries']:
reftimes.update([i['axes']['reftime']])
reftimes=list(reftimes)
if len(reftimes)>1:
reftime = reftimes[0] if dateutil.parser.parse(reftimes[0])<dateutil.parser.parse(reftimes[1]) else reftimes[1]
else:
reftime = reftimes[0]
return reftime
Explanation: Plotting HYCOM Global Ocean Forecast Data
Note: this notebook requires python3.
This notebook demonstrates a simple Planet OS API use case using the HYCOM Global Ocean Forecast dataset.
API documentation is available at http://docs.planetos.com. If you have questions or comments, join the Planet OS Slack community to chat with our development team.
For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation: https://ipython.org/ and http://matplotlib.org/
End of explanation
location = 'Hawaii Oahu'
if location == 'Est':
longitude = 24.+45./60
latitude = 59+25/60.
elif location == 'Au':
longitude = 149. + 7./60
latitude = -35.-18./60
elif location == "Hawaii Oahu":
latitude = 21.205
longitude = -158.35
elif location == 'Somewhere':
    longitude = -20.
    latitude = 10.
Explanation: Let's choose a location near Oahu, Hawaii...
End of explanation
apikey = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
API_url = "http://api.planetos.com/v1/datasets/hycom_glbu0.08_91.2_global_0.08d/point?lon={0}&lat={1}&count=10000&verbose=false&apikey={2}".format(longitude,latitude,apikey)
request = Request(API_url)
response = urlopen(request)
API_data = json.loads(response.read())
Explanation: Important! You'll need to replace apikey below with your actual Planet OS API key, which you'll find on the Planet OS account settings page.
End of explanation
varlist = []
print("{0:<50} {1}".format("Variable","Context"))
print()
for k,v in set([(j,i['context']) for i in API_data['entries'] for j in i['data'].keys()]):
print("{0:<50} {1}".format(k,v))
varlist.append(k)
reftime = extract_reference_time(API_data)
Explanation: Show the available variables and their contexts...
End of explanation
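Before building the plots, it can help to get a quick feel for the shape of the API response (optional check; the keys used below are simply those already used elsewhere in this notebook):
print(len(API_data['entries']), 'entries returned')
print(API_data['entries'][0]['axes'].keys())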
vardict = {}
for i in varlist:
vardict['time_'+i]=[]
vardict['data_'+i]=[]
for i in API_data['entries']:
#print(i['context'])
reftime = extract_reference_time(API_data)
for j in i['data']:
if reftime == i['axes']['reftime']:
if j != 'surf_el':
if i['axes']['z'] < 1.:
vardict['data_'+j].append(i['data'][j])
vardict['time_'+j].append(dateutil.parser.parse(i['axes']['time']))
else:
vardict['data_'+j].append(i['data'][j])
vardict['time_'+j].append(dateutil.parser.parse(i['axes']['time']))
for i in varlist:
fig = plt.figure(figsize=(15,3))
plt.title(i)
ax = fig.add_subplot(111)
plt.plot(vardict['time_'+i],vardict['data_'+i],color='r')
ax.set_ylabel(i)
print(API_data['entries'][0]['data'])
print(API_data['entries'][0]['axes'])
print(API_data['entries'][0]['context'])
Explanation: Now let's extract data for all variables and create a different plot for each...
End of explanation |
3,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building and deploying machine learning solutions with Vertex AI
Step1: Import libraries
Step2: Initialize Vertex AI Python SDK
Initialize the Vertex AI Python SDK with your GCP Project, Region, and Google Cloud Storage Bucket.
Step5: Build and train your model locally in a Vertex Notebook
Note
Step6: Let's print a few example reviews
Step7: Choose a pre-trained BERT model to fine-tune for higher accuracy
Bidirectional Encoder Representations from Transformers (BERT) is a transformer-based text representation model pre-trained on massive datasets (3+ billion words) that can be fine-tuned for state-of-the art results on many natural language processing (NLP) tasks. Since release in 2018 by Google researchers, its has transformed the field of NLP research and come to form a core part of significant improvements to Google Search.
To meet your business requirements of achieving higher accuracy on a small dataset (20k training examples), you will use a technique called transfer learning to combine a pre-trained BERT encoder and classification layers to fine tune a new higher performing model for binary sentiment classification.
For this lab, you will use a smaller BERT model that trades some accuracy for faster training times.
The Small BERT models are instances of the original BERT architecture with a smaller number L of layers (i.e., residual blocks) combined with a smaller hidden size H and a matching smaller number A of attention heads, as published by
Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Step8: Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. Since this text preprocessor is a TensorFlow model, It can be included in your model directly.
For fine-tuning, you will use the same optimizer that BERT was originally trained with
Step10: Build and compile a TensorFlow BERT sentiment classifier
Next, you will define and compile your model by assembling pre-built TF-Hub components and tf.keras layers.
Step11: Train and evaluate your BERT sentiment classifier
Step13: Note
Step14: Based on the History object returned by model.fit(). You can plot the training and validation loss for comparison, as well as the training and validation accuracy
Step19: In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy. Based on the plots above, you should see model accuracy of around 78-80% which exceeds your business requirements target of greater than 75% accuracy.
Containerize your model code
Now that you trained and evaluated your model locally in a Vertex Notebook as part of an experimentation workflow, your next step is to train and deploy your model on Google Cloud's Vertex AI platform.
To train your BERT classifier on Google Cloud, you will you will package your Python training scripts and write a Dockerfile that contains instructions on your ML model code, dependencies, and execution instructions. You will build your custom container with Cloud Build, whose instructions are specified in cloudbuild.yaml and publish your container to your Artifact Registry. This workflow gives you the opportunity to use the same container to run as part of a portable and scalable Vertex Pipelines workflow.
You will walk through creating the following project structure for your ML mode code
Step20: 2. Write a task.py file as an entrypoint to your custom model container
Step21: 3. Write a Dockerfile for your custom model container
Third, you will write a Dockerfile that contains instructions to package your model code in bert-sentiment-classifier as well as specifies your model code's dependencies needed for execution together in a Docker container.
Step22: 4. Write a requirements.txt file to specify additional ML code dependencies
These are additional dependencies for your model code not included in the pre-built Vertex TensorFlow images such as TF-Hub, TensorFlow AdamW optimizer, and TensorFlow Text needed for importing and working with pre-trained TensorFlow BERT models.
Step23: Use Cloud Build to build and submit your model container to Google Cloud Artifact Registry
Next, you will use Cloud Build to build and upload your custom TensorFlow model container to Google Cloud Artifact Registry.
Cloud Build brings reusability and automation to your ML experimentation by enabling you to reliably build, test, and deploy your ML model code as part of a CI/CD workflow. Artifact Registry provides a centralized repository for you to store, manage, and secure your ML container images. This will allow you to securely share your ML work with others and reproduce experiment results.
Note
Step25: 2. Create cloudbuild.yaml instructions
Step26: 3. Build and submit your container image to Artifact Registry using Cloud Build
Note
Step27: Define a pipeline using the KFP V2 SDK
To address your business requirements and get your higher performing model into production to deliver value faster, you will define a pipeline using the Kubeflow Pipelines (KFP) V2 SDK to orchestrate the training and deployment of your model on Vertex Pipelines below.
Step28: The pipeline consists of three components
Step29: Compile the pipeline
Step30: Run the pipeline on Vertex Pipelines
The PipelineJob is configured below and triggered through the run() method.
Note
Step31: Query deployed model on Vertex Endpoint for online predictions
Finally, you will retrieve the Endpoint deployed by the pipeline and use it to query your model for online predictions.
Configure the Endpoint() function below with the following parameters
Step32: Next steps
Congratulations! You walked through a full experimentation, containerization, and MLOps workflow on Vertex AI. First, you built, trained, and evaluated a BERT sentiment classifier model in a Vertex Notebook. You then packaged your model code into a Docker container to train on Google Cloud's Vertex AI. Lastly, you defined and ran a Kubeflow Pipeline on Vertex Pipelines that trained and deployed your model container to a Vertex Endpoint that you queried for online predictions.
License | Python Code:
# Add installed library dependencies to Python PATH variable.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
# Retrieve and set PROJECT_ID and REGION environment variables.
# TODO: fill in PROJECT_ID.
PROJECT_ID = ""
REGION = "us-central1"
# TODO: Create a globally unique Google Cloud Storage bucket for artifact storage.
GCS_BUCKET = ""
!gsutil mb -l $REGION $GCS_BUCKET
Explanation: Building and deploying machine learning solutions with Vertex AI: Challenge Lab
This Challenge Lab is recommended for students who have enrolled in the Building and deploying machine learning solutions with Vertex AI. You will be given a scenario and a set of tasks. Instead of following step-by-step instructions, you will use the skills learned from the labs in the quest to figure out how to complete the tasks on your own! An automated scoring system (shown on the Qwiklabs lab instructions page) will provide feedback on whether you have completed your tasks correctly.
When you take a Challenge Lab, you will not be taught Google Cloud concepts. To build the solution to the challenge presented, use skills learned from the labs in the Quest this challenge lab is part of. You are expected to extend your learned skills and complete all the TODO: comments in this notebook.
Are you ready for the challenge?
Scenario
You were recently hired as a Machine Learning Engineer at a startup movie review website. Your manager has tasked you with building a machine learning model to classify the sentiment of user movie reviews as positive or negative. These predictions will be used as an input in downstream movie rating systems and to surface top supportive and critical reviews on the movie website application. The challenge: your business requirements are that you have just 6 weeks to productionize a model that achieves greater than 75% accuracy to improve upon an existing bootstrapped solution. Furthermore, after doing some exploratory analysis in your startup's data warehouse, you found that you only have a small dataset of 50k text reviews to build a higher performing solution.
To build and deploy a high performance machine learning model with limited data quickly, you will walk through training and deploying a custom TensorFlow BERT sentiment classifier for online predictions on Google Cloud's Vertex AI platform. Vertex AI is Google Cloud's next generation machine learning development platform where you can leverage the latest ML pre-built components and AutoML to significantly enhance your development productivity, scale your workflow and decision making with your data, and accelerate time to value.
First, you will progress through a typical experimentation workflow where you will build your model from pre-trained BERT components from TF-Hub and tf.keras classification layers to train and evaluate your model in a Vertex Notebook. You will then package your model code into a Docker container to train on Google Cloud's Vertex AI. Lastly, you will define and run a Kubeflow Pipeline on Vertex Pipelines that trains and deploys your model to a Vertex Endpoint that you will query for online predictions.
Learning objectives
Train a TensorFlow model locally in a hosted Vertex Notebook.
Containerize your training code with Cloud Build and push it to Google Cloud Artifact Registry.
Define a pipeline using the Kubeflow Pipelines (KFP) V2 SDK to train and deploy your model on Vertex Pipelines.
Query your model on a Vertex Endpoint using online predictions.
Setup
Define constants
End of explanation
import os
import shutil
import logging
# TensorFlow model building libraries.
import tensorflow as tf
import tensorflow_text as text
import tensorflow_hub as hub
# Re-create the AdamW optimizer used in the original BERT paper.
from official.nlp import optimization
# Libraries for data and plot model training metrics.
import pandas as pd
import matplotlib.pyplot as plt
# Import the Vertex AI Python SDK.
from google.cloud import aiplatform as vertexai
Explanation: Import libraries
End of explanation
vertexai.init(project=PROJECT_ID, location=REGION, staging_bucket=GCS_BUCKET)
Explanation: Initialize Vertex AI Python SDK
Initialize the Vertex AI Python SDK with your GCP Project, Region, and Google Cloud Storage Bucket.
End of explanation
DATA_URL = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
LOCAL_DATA_DIR = "."
def download_data(data_url, local_data_dir):
Download dataset.
Args:
data_url(str): Source data URL path.
local_data_dir(str): Local data download directory path.
Returns:
dataset_dir(str): Local unpacked data directory path.
if not os.path.exists(local_data_dir):
os.makedirs(local_data_dir)
dataset = tf.keras.utils.get_file(
fname="aclImdb_v1.tar.gz",
origin=data_url,
untar=True,
cache_dir=local_data_dir,
cache_subdir="")
dataset_dir = os.path.join(os.path.dirname(dataset), "aclImdb")
train_dir = os.path.join(dataset_dir, "train")
# Remove unused folders to make it easier to load the data.
remove_dir = os.path.join(train_dir, "unsup")
shutil.rmtree(remove_dir)
return dataset_dir
DATASET_DIR = download_data(data_url=DATA_URL, local_data_dir=LOCAL_DATA_DIR)
# Create a dictionary to iteratively add data pipeline and model training hyperparameters.
HPARAMS = {
# Set a random sampling seed to prevent data leakage in data splits from files.
"seed": 42,
# Number of training and inference examples.
"batch-size": 32
}
def load_datasets(dataset_dir, hparams):
Load pre-split tf.datasets.
Args:
hparams(dict): A dictionary containing model training arguments.
Returns:
raw_train_ds(tf.dataset): Train split dataset (20k examples).
raw_val_ds(tf.dataset): Validation split dataset (5k examples).
raw_test_ds(tf.dataset): Test split dataset (25k examples).
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(dataset_dir, 'train'),
batch_size=hparams['batch-size'],
validation_split=0.2,
subset='training',
seed=hparams['seed'])
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(dataset_dir, 'train'),
batch_size=hparams['batch-size'],
validation_split=0.2,
subset='validation',
seed=hparams['seed'])
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(dataset_dir, 'test'),
batch_size=hparams['batch-size'])
return raw_train_ds, raw_val_ds, raw_test_ds
raw_train_ds, raw_val_ds, raw_test_ds = load_datasets(DATASET_DIR, HPARAMS)
AUTOTUNE = tf.data.AUTOTUNE
CLASS_NAMES = raw_train_ds.class_names
train_ds = raw_train_ds.prefetch(buffer_size=AUTOTUNE)
val_ds = raw_val_ds.prefetch(buffer_size=AUTOTUNE)
test_ds = raw_test_ds.prefetch(buffer_size=AUTOTUNE)
Explanation: Build and train your model locally in a Vertex Notebook
Note: this lab adapts and extends the official TensorFlow BERT text classification tutorial to utilize Vertex AI services. See the tutorial for additional coverage on fine-tuning BERT models using TensorFlow.
Lab dataset
In this lab, you will use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews. Data ingestion and processing code has been provided for you below:
Import dataset
End of explanation
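As a quick check that the training split really is balanced (optional snippet, not part of the lab instructions), you can average the labels over the whole split:
# Labels are 0 (negative) or 1 (positive), so the mean is the positive fraction.
labels = tf.concat([label_batch for _, label_batch in raw_train_ds], axis=0)
print('Positive fraction in training split:', float(tf.reduce_mean(tf.cast(labels, tf.float32))))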
for text_batch, label_batch in train_ds.take(1):
for i in range(3):
print(f'Review {i}: {text_batch.numpy()[i]}')
label = label_batch.numpy()[i]
print(f'Label : {label} ({CLASS_NAMES[label]})')
Explanation: Let's print a few example reviews:
End of explanation
HPARAMS.update({
# TF Hub BERT modules.
"tfhub-bert-preprocessor": "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3",
"tfhub-bert-encoder": "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/2",
})
Explanation: Choose a pre-trained BERT model to fine-tune for higher accuracy
Bidirectional Encoder Representations from Transformers (BERT) is a transformer-based text representation model pre-trained on massive datasets (3+ billion words) that can be fine-tuned for state-of-the-art results on many natural language processing (NLP) tasks. Since its release in 2018 by Google researchers, it has transformed the field of NLP research and come to form a core part of significant improvements to Google Search.
To meet your business requirements of achieving higher accuracy on a small dataset (20k training examples), you will use a technique called transfer learning to combine a pre-trained BERT encoder and classification layers to fine tune a new higher performing model for binary sentiment classification.
For this lab, you will use a smaller BERT model that trades some accuracy for faster training times.
The Small BERT models are instances of the original BERT architecture with a smaller number L of layers (i.e., residual blocks) combined with a smaller hidden size H and a matching smaller number A of attention heads, as published by
Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova: "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models", 2019.
They have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
The following preprocessing and encoder models in the TensorFlow 2 SavedModel format use the implementation of BERT from the TensorFlow Models Github repository with the trained weights released by the authors of Small BERT.
End of explanation
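If you are curious what the preprocessing model actually produces, you can load it on its own and inspect its outputs for a sample sentence (illustrative only; this downloads the TF-Hub model):
preview_preprocessor = hub.KerasLayer(HPARAMS['tfhub-bert-preprocessor'])
preview = preview_preprocessor(tf.constant(['this is such an amazing movie!']))
print(list(preview.keys()))        # token ids, input mask, and segment/type ids
print(preview['input_word_ids'][0, :12])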
HPARAMS.update({
# Model training hyperparameters for fine tuning and regularization.
"epochs": 3,
"initial-learning-rate": 3e-5,
"dropout": 0.1
})
epochs = HPARAMS['epochs']
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
n_train_steps = steps_per_epoch * epochs
n_warmup_steps = int(0.1 * n_train_steps)
OPTIMIZER = optimization.create_optimizer(init_lr=HPARAMS['initial-learning-rate'],
num_train_steps=n_train_steps,
num_warmup_steps=n_warmup_steps,
optimizer_type='adamw')
Explanation: Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. Since this text preprocessor is a TensorFlow model, it can be included in your model directly.
For fine-tuning, you will use the same optimizer that BERT was originally trained with: the "Adaptive Moments" (Adam). This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as AdamW.
For the learning rate initial-learning-rate, you will use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps n_warmup_steps. In line with the BERT paper, the initial learning rate is smaller for fine-tuning.
End of explanation
def build_text_classifier(hparams, optimizer):
Define and compile a TensorFlow BERT sentiment classifier.
Args:
hparams(dict): A dictionary containing model training arguments.
Returns:
model(tf.keras.Model): A compiled TensorFlow model.
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
# TODO: Add a hub.KerasLayer for BERT text preprocessing using the hparams dict.
# Name the layer 'preprocessing' and store in the variable preprocessor.
encoder_inputs = preprocessor(text_input)
# TODO: Add a trainable hub.KerasLayer for BERT text encoding using the hparams dict.
# Name the layer 'BERT_encoder' and store in the variable encoder.
outputs = encoder(encoder_inputs)
# For the fine-tuning you are going to use the `pooled_output` array which represents
# each input sequence as a whole. The shape is [batch_size, H].
# You can think of this as an embedding for the entire movie review.
classifier = outputs['pooled_output']
# Add dropout to prevent overfitting during model fine-tuning.
classifier = tf.keras.layers.Dropout(hparams['dropout'], name='dropout')(classifier)
classifier = tf.keras.layers.Dense(1, activation=None, name='classifier')(classifier)
model = tf.keras.Model(text_input, classifier, name='bert-sentiment-classifier')
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
metrics = tf.metrics.BinaryAccuracy()
model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
return model
model = build_text_classifier(HPARAMS, OPTIMIZER)
# Visualize your fine-tuned BERT sentiment classifier.
tf.keras.utils.plot_model(model)
TEST_REVIEW = ['this is such an amazing movie!']
BERT_RAW_RESULT = model(tf.constant(TEST_REVIEW))
print(BERT_RAW_RESULT)
Explanation: Build and compile a TensorFlow BERT sentiment classifier
Next, you will define and compile your model by assembling pre-built TF-Hub components and tf.keras layers.
End of explanation
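Because the classifier head outputs a single unnormalized logit, you can map the raw result above to a probability with a sigmoid (quick illustration; the model is not trained yet, so the value carries no signal):
print(tf.sigmoid(BERT_RAW_RESULT))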
HPARAMS.update({
# TODO: Save your BERT sentiment classifier locally.
# Hint: Save it to './bert-sentiment-classifier-local'. Note the key name in model.save().
})
Explanation: Train and evaluate your BERT sentiment classifier
End of explanation
def train_evaluate(hparams):
Train and evaluate TensorFlow BERT sentiment classifier.
Args:
hparams(dict): A dictionary containing model training arguments.
Returns:
history(tf.keras.callbacks.History): Keras callback that records training event history.
# dataset_dir = download_data(data_url, local_data_dir)
raw_train_ds, raw_val_ds, raw_test_ds = load_datasets(DATASET_DIR, hparams)
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = raw_val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = raw_test_ds.cache().prefetch(buffer_size=AUTOTUNE)
epochs = hparams['epochs']
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
n_train_steps = steps_per_epoch * epochs
n_warmup_steps = int(0.1 * n_train_steps)
optimizer = optimization.create_optimizer(init_lr=hparams['initial-learning-rate'],
num_train_steps=n_train_steps,
num_warmup_steps=n_warmup_steps,
optimizer_type='adamw')
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = build_text_classifier(hparams=hparams, optimizer=optimizer)
logging.info(model.summary())
history = model.fit(x=train_ds,
validation_data=val_ds,
epochs=epochs)
logging.info("Test accuracy: %s", model.evaluate(test_ds))
# Export Keras model in TensorFlow SavedModel format.
model.save(hparams['model-dir'])
return history
Explanation: Note: training your model locally will take about 8-10 minutes.
End of explanation
history = train_evaluate(HPARAMS)
history_dict = history.history
print(history_dict.keys())
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
plt.subplot(2, 1, 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'r', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
# plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right');
Explanation: Based on the History object returned by model.fit(), you can plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation
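Since history.history is just a dictionary of per-epoch lists, you can also view it as a table before plotting (optional):
pd.DataFrame(history.history)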
MODEL_DIR = "bert-sentiment-classifier"
%%writefile {MODEL_DIR}/trainer/model.py
import os
import shutil
import logging
import tensorflow as tf
import tensorflow_text as text
import tensorflow_hub as hub
from official.nlp import optimization
DATA_URL = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
LOCAL_DATA_DIR = './tmp/data'
AUTOTUNE = tf.data.AUTOTUNE
def download_data(data_url, local_data_dir):
Download dataset.
Args:
data_url(str): Source data URL path.
local_data_dir(str): Local data download directory path.
Returns:
dataset_dir(str): Local unpacked data directory path.
if not os.path.exists(local_data_dir):
os.makedirs(local_data_dir)
dataset = tf.keras.utils.get_file(
fname='aclImdb_v1.tar.gz',
origin=data_url,
untar=True,
cache_dir=local_data_dir,
cache_subdir="")
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
train_dir = os.path.join(dataset_dir, 'train')
# Remove unused folders to make it easier to load the data.
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
return dataset_dir
def load_datasets(dataset_dir, hparams):
Load pre-split tf.datasets.
Args:
hparams(dict): A dictionary containing model training arguments.
Returns:
raw_train_ds(tf.dataset): Train split dataset (20k examples).
raw_val_ds(tf.dataset): Validation split dataset (5k examples).
raw_test_ds(tf.dataset): Test split dataset (25k examples).
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(dataset_dir, 'train'),
batch_size=hparams['batch-size'],
validation_split=0.2,
subset='training',
seed=hparams['seed'])
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(dataset_dir, 'train'),
batch_size=hparams['batch-size'],
validation_split=0.2,
subset='validation',
seed=hparams['seed'])
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
os.path.join(dataset_dir, 'test'),
batch_size=hparams['batch-size'])
return raw_train_ds, raw_val_ds, raw_test_ds
def build_text_classifier(hparams, optimizer):
Define and compile a TensorFlow BERT sentiment classifier.
Args:
hparams(dict): A dictionary containing model training arguments.
Returns:
model(tf.keras.Model): A compiled TensorFlow model.
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
# TODO: Add a hub.KerasLayer for BERT text preprocessing using the hparams dict.
# Name the layer 'preprocessing' and store in the variable preprocessor.
preprocessor = hub.KerasLayer(hparams['tfhub-bert-preprocessor'], name='preprocessing')
encoder_inputs = preprocessor(text_input)
# TODO: Add a trainable hub.KerasLayer for BERT text encoding using the hparams dict.
# Name the layer 'BERT_encoder' and store in the variable encoder.
encoder = hub.KerasLayer(hparams['tfhub-bert-encoder'], trainable=True, name='BERT_encoder')
outputs = encoder(encoder_inputs)
# For the fine-tuning you are going to use the `pooled_output` array which represents
# each input sequence as a whole. The shape is [batch_size, H].
# You can think of this as an embedding for the entire movie review.
classifier = outputs['pooled_output']
# Add dropout to prevent overfitting during model fine-tuning.
classifier = tf.keras.layers.Dropout(hparams['dropout'], name='dropout')(classifier)
classifier = tf.keras.layers.Dense(1, activation=None, name='classifier')(classifier)
model = tf.keras.Model(text_input, classifier, name='bert-sentiment-classifier')
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
metrics = tf.metrics.BinaryAccuracy()
model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
return model
def train_evaluate(hparams):
Train and evaluate TensorFlow BERT sentiment classifier.
Args:
hparams(dict): A dictionary containing model training arguments.
Returns:
history(tf.keras.callbacks.History): Keras callback that records training event history.
dataset_dir = download_data(data_url=DATA_URL,
local_data_dir=LOCAL_DATA_DIR)
raw_train_ds, raw_val_ds, raw_test_ds = load_datasets(dataset_dir=dataset_dir,
hparams=hparams)
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = raw_val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = raw_test_ds.cache().prefetch(buffer_size=AUTOTUNE)
epochs = hparams['epochs']
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
n_train_steps = steps_per_epoch * epochs
n_warmup_steps = int(0.1 * n_train_steps)
optimizer = optimization.create_optimizer(init_lr=hparams['initial-learning-rate'],
num_train_steps=n_train_steps,
num_warmup_steps=n_warmup_steps,
optimizer_type='adamw')
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = build_text_classifier(hparams=hparams, optimizer=optimizer)
logging.info(model.summary())
history = model.fit(x=train_ds,
validation_data=val_ds,
epochs=epochs)
logging.info("Test accuracy: %s", model.evaluate(test_ds))
# Export Keras model in TensorFlow SavedModel format.
model.save(hparams['model-dir'])
return history
Explanation: In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy. Based on the plots above, you should see model accuracy of around 78-80% which exceeds your business requirements target of greater than 75% accuracy.
Containerize your model code
Now that you trained and evaluated your model locally in a Vertex Notebook as part of an experimentation workflow, your next step is to train and deploy your model on Google Cloud's Vertex AI platform.
To train your BERT classifier on Google Cloud, you will package your Python training scripts and write a Dockerfile that specifies your ML model code, its dependencies, and execution instructions. You will build your custom container with Cloud Build, whose instructions are specified in cloudbuild.yaml, and publish your container to your Artifact Registry. This workflow gives you the opportunity to run the same container as part of a portable and scalable Vertex Pipelines workflow.
You will walk through creating the following project structure for your ML model code:
|--/bert-sentiment-classifier
|--/trainer
|--__init__.py
|--model.py
|--task.py
|--Dockerfile
|--cloudbuild.yaml
|--requirements.txt
1. Write a model.py training script
First, you will tidy up your local TensorFlow model training code from above into a training script.
End of explanation
%%writefile {MODEL_DIR}/trainer/task.py
import os
import argparse
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Vertex custom container training args. These are set by Vertex AI during training but can also be overwritten.
parser.add_argument('--model-dir', dest='model-dir',
default=os.environ['AIP_MODEL_DIR'], type=str, help='GCS URI for saving model artifacts.')
# Model training args.
parser.add_argument('--tfhub-bert-preprocessor', dest='tfhub-bert-preprocessor',
default='https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', type=str, help='TF-Hub URL.')
parser.add_argument('--tfhub-bert-encoder', dest='tfhub-bert-encoder',
default='https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/2', type=str, help='TF-Hub URL.')
parser.add_argument('--initial-learning-rate', dest='initial-learning-rate', default=3e-5, type=float, help='Learning rate for optimizer.')
parser.add_argument('--epochs', dest='epochs', default=3, type=int, help='Training iterations.')
parser.add_argument('--batch-size', dest='batch-size', default=32, type=int, help='Number of examples during each training iteration.')
parser.add_argument('--dropout', dest='dropout', default=0.1, type=float, help='Float percentage of DNN nodes [0,1] to drop for regularization.')
parser.add_argument('--seed', dest='seed', default=42, type=int, help='Random number generator seed to prevent overlap between train and val sets.')
args = parser.parse_args()
hparams = args.__dict__
model.train_evaluate(hparams)
Explanation: 2. Write a task.py file as an entrypoint to your custom model container
End of explanation
%%writefile {MODEL_DIR}/Dockerfile
# Specifies base image and tag.
# https://cloud.google.com/vertex-ai/docs/training/pre-built-containers
FROM us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-6:latest
# Sets the container working directory.
WORKDIR /root
# Copies the requirements.txt into the container to reduce network calls.
COPY requirements.txt .
# Installs additional packages.
RUN pip3 install -U -r requirements.txt
# b/203105209 Removes unneeded file from TF2.5 CPU image for python_module CustomJob training.
# Will be removed on subsequent public Vertex images.
RUN rm -rf /var/sitecustomize/sitecustomize.py
# Copies the trainer code to the docker image.
COPY . /trainer
# Sets the container working directory.
WORKDIR /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
Explanation: 3. Write a Dockerfile for your custom model container
Third, you will write a Dockerfile that contains instructions to package your model code in bert-sentiment-classifier as well as specifies your model code's dependencies needed for execution together in a Docker container.
End of explanation
%%writefile {MODEL_DIR}/requirements.txt
tf-models-official==2.6.0
tensorflow-text==2.6.0
tensorflow-hub==0.12.0
Explanation: 4. Write a requirements.txt file to specify additional ML code dependencies
These are additional dependencies for your model code not included in the pre-built Vertex TensorFlow images such as TF-Hub, TensorFlow AdamW optimizer, and TensorFlow Text needed for importing and working with pre-trained TensorFlow BERT models.
End of explanation
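If you have Docker available in your environment, you can optionally dry-run the image build locally before handing it to Cloud Build (illustrative; the base image is large, so this can take a while):
!docker build -t bert-sentiment-classifier-local-test {MODEL_DIR}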
ARTIFACT_REGISTRY="bert-sentiment-classifier"
# TODO: create a Docker Artifact Registry using the gcloud CLI. Note the required repository-format and location flags.
# Documentation link: https://cloud.google.com/sdk/gcloud/reference/artifacts/repositories/create
Explanation: Use Cloud Build to build and submit your model container to Google Cloud Artifact Registry
Next, you will use Cloud Build to build and upload your custom TensorFlow model container to Google Cloud Artifact Registry.
Cloud Build brings reusability and automation to your ML experimentation by enabling you to reliably build, test, and deploy your ML model code as part of a CI/CD workflow. Artifact Registry provides a centralized repository for you to store, manage, and secure your ML container images. This will allow you to securely share your ML work with others and reproduce experiment results.
Note: the initial build and submit step will take about 16 minutes but Cloud Build is able to take advantage of caching for faster subsequent builds.
1. Create Artifact Registry for custom container images
End of explanation
IMAGE_NAME="bert-sentiment-classifier"
IMAGE_TAG="latest"
IMAGE_URI=f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACT_REGISTRY}/{IMAGE_NAME}:{IMAGE_TAG}"
cloudbuild_yaml = fsteps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', '{IMAGE_URI}', '.' ]
images:
- '{IMAGE_URI}'
with open(f"{MODEL_DIR}/cloudbuild.yaml", "w") as fp:
fp.write(cloudbuild_yaml)
Explanation: 2. Create cloudbuild.yaml instructions
End of explanation
# TODO: use Cloud Build to build and submit your custom model container to your Artifact Registry.
# Documentation link: https://cloud.google.com/sdk/gcloud/reference/builds/submit
# Hint: make sure the config flag is pointed at {MODEL_DIR}/cloudbuild.yaml defined above and you include your model directory.
Explanation: 3. Build and submit your container image to Artifact Registry using Cloud Build
Note: your custom model container will take about 16 minutes initially to build and submit to your Artifact Registry. Artifact Registry is able to take advantage of caching so subsequent builds take about 4 minutes.
End of explanation
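After the build finishes, one way to confirm the image actually landed in your repository (optional check) is to list the images in the registry:
!gcloud artifacts docker images list {REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACT_REGISTRY}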
import datetime
# google_cloud_pipeline_components includes pre-built KFP components for interfacing with Vertex AI services.
from google_cloud_pipeline_components import aiplatform as gcc_aip
from kfp.v2 import dsl
TIMESTAMP=datetime.datetime.now().strftime('%Y%m%d%H%M%S')
DISPLAY_NAME = "bert-sentiment-{}".format(TIMESTAMP)
GCS_BASE_OUTPUT_DIR= f"{GCS_BUCKET}/{MODEL_DIR}-{TIMESTAMP}"
USER = "" # TODO: change this to your name.
PIPELINE_ROOT = "{}/pipeline_root/{}".format(GCS_BUCKET, USER)
print(f"Model display name: {DISPLAY_NAME}")
print(f"GCS dir for model training artifacts: {GCS_BASE_OUTPUT_DIR}")
print(f"GCS dir for pipeline artifacts: {PIPELINE_ROOT}")
# Pre-built Vertex model serving container for deployment.
# https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_IMAGE_URI = "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-6:latest"
Explanation: Define a pipeline using the KFP V2 SDK
To address your business requirements and get your higher performing model into production to deliver value faster, you will define a pipeline using the Kubeflow Pipelines (KFP) V2 SDK to orchestrate the training and deployment of your model on Vertex Pipelines below.
End of explanation
@dsl.pipeline(name="bert-sentiment-classification", pipeline_root=PIPELINE_ROOT)
def pipeline(
project: str = PROJECT_ID,
location: str = REGION,
staging_bucket: str = GCS_BUCKET,
display_name: str = DISPLAY_NAME,
container_uri: str = IMAGE_URI,
model_serving_container_image_uri: str = SERVING_IMAGE_URI,
base_output_dir: str = GCS_BASE_OUTPUT_DIR,
):
#TODO: add and configure the pre-built KFP CustomContainerTrainingJobRunOp component using
# the remaining arguments in the pipeline constructor.
# Hint: Refer to the component documentation link above if needed as well.
model_train_evaluate_op = gcc_aip.CustomContainerTrainingJobRunOp(
# Vertex AI Python SDK authentication parameters.
project=project,
location=location,
staging_bucket=staging_bucket,
# WorkerPool arguments.
replica_count=1,
machine_type="c2-standard-4",
# TODO: fill in the remaining arguments from the pipeline constructor.
)
# Create a Vertex Endpoint resource in parallel with model training.
endpoint_create_op = gcc_aip.EndpointCreateOp(
# Vertex AI Python SDK authentication parameters.
project=project,
location=location,
display_name=display_name
)
# Deploy your model to the created Endpoint resource for online predictions.
model_deploy_op = gcc_aip.ModelDeployOp(
# Link to model training component through output model artifact.
model=model_train_evaluate_op.outputs["model"],
# Link to the created Endpoint.
endpoint=endpoint_create_op.outputs["endpoint"],
# Define prediction request routing. {"0": 100} indicates 100% of traffic
# to the ID of the current model being deployed.
traffic_split={"0": 100},
# WorkerPool arguments.
dedicated_resources_machine_type="n1-standard-4",
dedicated_resources_min_replica_count=1,
dedicated_resources_max_replica_count=2
)
Explanation: The pipeline consists of three components:
CustomContainerTrainingJobRunOp (documentation): trains your custom model container using Vertex Training. This is the same as configuring a Vertex Custom Container Training Job using the Vertex Python SDK you covered in the Vertex AI: Qwik Start lab.
EndpointCreateOp (documentation): Creates a Google Cloud Vertex Endpoint resource that maps physical machine resources with your model to enable it to serve online predictions. Online predictions have low latency requirements; providing resources to the model in advance reduces latency.
ModelDeployOp(documentation): deploys your model to a Vertex Prediction Endpoint for online predictions.
End of explanation
from kfp.v2 import compiler
compiler.Compiler().compile(
pipeline_func=pipeline, package_path="bert-sentiment-classification.json"
)
Explanation: Compile the pipeline
End of explanation
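The compiler writes the pipeline definition to a JSON file; if you want to sanity-check what was produced before running it, you can load it like any other JSON document (optional):
import json
with open('bert-sentiment-classification.json') as fp:
    pipeline_spec = json.load(fp)
print(list(pipeline_spec.keys()))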
vertex_pipelines_job = vertexai.pipeline_jobs.PipelineJob(
display_name="bert-sentiment-classification",
template_path="bert-sentiment-classification.json",
parameter_values={
"project": PROJECT_ID,
"location": REGION,
"staging_bucket": GCS_BUCKET,
"display_name": DISPLAY_NAME,
"container_uri": IMAGE_URI,
"model_serving_container_image_uri": SERVING_IMAGE_URI,
"base_output_dir": GCS_BASE_OUTPUT_DIR},
enable_caching=True,
)
vertex_pipelines_job.run()
Explanation: Run the pipeline on Vertex Pipelines
The PipelineJob is configured below and triggered through the run() method.
Note: This pipeline run will take around 30-40 minutes to train and deploy your model. Follow along with the execution using the URL from the job output below.
End of explanation
# Retrieve your deployed Endpoint name from your pipeline.
ENDPOINT_NAME = vertexai.Endpoint.list()[0].name
#TODO: Generate online predictions using your Vertex Endpoint.
endpoint = vertexai.Endpoint(
)
#TODO: write a movie review to test your model e.g. "The Dark Knight is the best Batman movie!"
test_review = ""
# TODO: use your Endpoint to return prediction for your test_review.
prediction =
print(prediction)
# Use a sigmoid function to compress your model output between 0 and 1. For binary classification, a threshold of 0.5 is typically applied
# so if the output is >= 0.5 then the predicted sentiment is "Positive" and < 0.5 is a "Negative" prediction.
print(tf.sigmoid(prediction.predictions[0]))
Explanation: Query deployed model on Vertex Endpoint for online predictions
Finally, you will retrieve the Endpoint deployed by the pipeline and use it to query your model for online predictions.
Configure the Endpoint() function below with the following parameters:
endpoint_name: A fully-qualified endpoint resource name or endpoint ID. Example: "projects/123/locations/us-central1/endpoints/456" or "456" when project and location are initialized or passed.
project_id: GCP project.
location: GCP region.
Call predict() to return a prediction for a test review.
End of explanation
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Next steps
Congratulations! You walked through a full experimentation, containerization, and MLOps workflow on Vertex AI. First, you built, trained, and evaluated a BERT sentiment classifier model in a Vertex Notebook. You then packaged your model code into a Docker container to train on Google Cloud's Vertex AI. Lastly, you defined and ran a Kubeflow Pipeline on Vertex Pipelines that trained and deployed your model container to a Vertex Endpoint that you queried for online predictions.
License
End of explanation |
3,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Features
Step2: Notice that original data contains 569 observations and 30 features.
Step3: Here is what the data looks like.
Step4: Standardize Features
Step5: Conduct PCA
Notice that PCA contains a parameter, the number of components. This is the number of output features and will need to be tuned.
Step6: View New Features
After the PCA, the new data has been reduced to two features, with the same number of rows as the original feature. | Python Code:
# Import packages
import numpy as np
from sklearn import decomposition, datasets
from sklearn.preprocessing import StandardScaler
Explanation: Title: Feature Extraction With PCA
Slug: feature_extraction_with_pca
Summary: Feature extraction with PCA using scikit-learn.
Date: 2017-09-13 12:00
Category: Machine Learning
Tags: Feature Engineering
Authors: Chris Albon
Principal Component Analysis (PCA) is a common feature extraction method in data science. Technically, PCA finds the eigenvectors of a covariance matrix with the highest eigenvalues and then uses those to project the data into a new subspace of equal or fewer dimensions. Practically, PCA converts a matrix of n features into a new dataset of (hopefully) fewer than n features. That is, it reduces the number of features by constructing a new, smaller number of variables which capture a significant portion of the information found in the original features. However, the goal of this tutorial is not to explain the concept of PCA, that is done very well elsewhere, but rather to demonstrate PCA in action.
Preliminaries
End of explanation
# Load the breast cancer dataset
dataset = datasets.load_breast_cancer()
# Load the features
X = dataset.data
Explanation: Load Features
End of explanation
# View the shape of the dataset
X.shape
Explanation: Notice that the original data contains 569 observations and 30 features.
End of explanation
# View the data
X
Explanation: Here is what the data looks like.
End of explanation
# Create a scaler object
sc = StandardScaler()
# Fit the scaler to the features and transform
X_std = sc.fit_transform(X)
Explanation: Standardize Features
End of explanation
# Create a pca object with the 2 components as a parameter
pca = decomposition.PCA(n_components=2)
# Fit the PCA and transform the data
X_std_pca = pca.fit_transform(X_std)
Explanation: Conduct PCA
Notice that PCA contains a parameter, the number of components. This is the number of output features and will need to be tuned.
End of explanation
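If you want to know how much information the two components retain, scikit-learn exposes the explained variance ratio after fitting (quick check, not part of the original post):
# Fraction of total variance captured by each of the two components.
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())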
# View the new feature data's shape
X_std_pca.shape
# View the new feature data
X_std_pca
Explanation: View New Features
After the PCA, the new data has been reduced to two features, with the same number of rows as the original data.
End of explanation |
3,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1
Step1: First , we introduce vector compression by Product Quantization (PQ) [Jegou, TPAMI 11]. The first task is to train an encoder. Let us assume that there are 1000 six-dimensional vectors for training; $X_1 \in \mathbb{R}^{1000\times6}$
Step2: Then we can train a PQEncoder using $X_1$.
Step3: The encoder takes two parameters
Step4: Note that you can train the encoder preliminary using training data, and write/read the encoder via pickle.
Step5: Next, let us consider database vectors (2000 six-dimensional vectors, $X_2$) that we'd like to compress.
Step6: We can compress these vectors by the trained PQ-encoder.
Step7: Each vector is splitted into $num_subdim(=2)$ sub-vectors, and the nearest codeword is searched for each sub-vector. The id of the nearest codeword is recorded, i.e., two integers in this case. This representation is called PQ-code.
PQ-code is a memory efficient data representation. The original 6D vector requies $6 * 64 = 384$ bit if 64 bit float is used for each element. On the other, a PQ-code requires only $2 * \log_2 256 = 16$ bit.
Note that we can approximately recunstruct the original vector from a PQ-code, by fetching the codewords using the PQ-code
Step8: As can be seen, the reconstructed vectors are similar to the original one.
In a large-scale data processing scenario where all data cannot be stored on memory, you can compress input vectors to PQ-codes, and store the PQ-codes only (X2_pqcode).
Step9: 2. Clustering by PQk-means
Let us run the clustering over the PQ-codes. The clustering object is instanciated with the trained encoder. Here, we set the number of cluster as $k=10$.
Step10: Let's run the PQk-means over X2_pqcode.
Step11: The resulting vector (clustered) contains the id of assigned codeword for each input PQ-code.
Step12: You can fetch the center of the clustering by
Step13: The centers are also PQ-codes. They can be reconstructed by the PQ-encoder.
Step14: Let's summarize the result
Step15: Note that you can pickle the kmeans instance. The instance can be reused later as a vector quantizer for new input vectors.
Step16: 3. Comparison to other clustering methods
Let us compare PQk-means and the traditional k-means using high-dimensional data.
Step17: Let's run the PQ-kmeans, and see the computational cost
Step18: Then, run the traditional k-means clustering
Step19: PQk-means would be tens to hundreds of times faster than k-means depending on your machine. Then let's see the accuracy. Since the result of PQk-means is the approximation of that of k-means, k-means achieved the lower error | Python Code:
import numpy
import pqkmeans
import sys
import pickle
Explanation: Chapter 1: PQk-means
This chapter contains the following:
Vector compression by Product Quantization
Clustering by PQk-means
Comparison to other clustering methods
Requisites:
- numpy
- sklearn
- pqkmeans
1. Vector compression by Product Quantization
End of explanation
X1 = numpy.random.random((1000, 6))
print("X1.shape:\n{}\n".format(X1.shape))
print("X1:\n{}".format(X1))
Explanation: First, we introduce vector compression by Product Quantization (PQ) [Jegou, TPAMI 11]. The first task is to train an encoder. Let us assume that there are 1000 six-dimensional vectors for training; $X_1 \in \mathbb{R}^{1000\times6}$
End of explanation
encoder = pqkmeans.encoder.PQEncoder(num_subdim=2, Ks=256)
encoder.fit(X1)
Explanation: Then we can train a PQEncoder using $X_1$.
End of explanation
print("codewords.shape:\n{}".format(encoder.codewords.shape))
Explanation: The encoder takes two parameters: $num_subdim$ and $Ks$. In the training step, each vector is split into $num_subdim$ sub-vectors, and quantized with $Ks$ codewords. The $num_subdim$ decides the bit length of a PQ-code, and is typically set to 4, 8, etc. The $Ks$ is usually set to 256 so as to represent each sub-code by $\log_2 256=8$ bits.
In this example, each 6D training vector is split into $num_subdim(=2)$ sub-vectors (two 3D vectors). Consequently, the 1000 6D training vectors are split into two sets of 1000 3D vectors. The k-means clustering is applied for each set of sub-vectors with $Ks=256$.
Note that, alternatively, you can use fit_generator for a large dataset. This will be covered in tutorial 3.
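As a rough sketch of that idea (the exact signature is left to that tutorial; here we simply assume fit_generator accepts an iterator over training vectors, and the helper name training_vector_stream is made up for illustration):
def training_vector_stream(n_chunks=10, chunk_size=100):
    # yield six-dimensional training vectors one by one, chunk by chunk,
    # so the whole training set never has to sit in memory at once
    for _ in range(n_chunks):
        chunk = numpy.random.random((chunk_size, 6))
        for vec in chunk:
            yield vec
# encoder_stream = pqkmeans.encoder.PQEncoder(num_subdim=2, Ks=256)
# encoder_stream.fit_generator(training_vector_stream())  # assumption: iterator-based API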
After the training step, the encoder stores the resulting codewords (2 subspaces $\times$ 256 codewords $\times$ 3 dimensions):
End of explanation
# pickle.dump(encoder, open('encoder.pkl', 'wb')) # Write
# encoder = pickle.load(open('encoder.pkl', 'rb')) # Read
Explanation: Note that you can train the encoder beforehand using training data, and write/read the encoder via pickle.
End of explanation
X2 = numpy.random.random((2000, 6))
print("X2.shape:\n{}\n".format(X2.shape))
print("X2:\n{}\n".format(X2))
print("Data type of each element:\n{}\n".format(type(X2[0][0])))
print("Memory usage:\n{} byte".format(X2.nbytes))
Explanation: Next, let us consider database vectors (2000 six-dimensional vectors, $X_2$) that we'd like to compress.
End of explanation
X2_pqcode = encoder.transform(X2)
print("X2_pqcode.shape:\n{}\n".format(X2_pqcode.shape))
print("X2_pqcode:\n{}\n".format(X2_pqcode))
print("Data type of each element:\n{}\n".format(type(X2_pqcode[0][0])))
print("Memory usage:\n{} byte".format(X2_pqcode.nbytes))
Explanation: We can compress these vectors by the trained PQ-encoder.
End of explanation
X2_reconstructed = encoder.inverse_transform(X2_pqcode)
print("original X2:\n{}\n".format(X2))
print("reconstructed X2:\n{}".format(X2_reconstructed))
Explanation: Each vector is split into $num_subdim(=2)$ sub-vectors, and the nearest codeword is searched for each sub-vector. The id of the nearest codeword is recorded, i.e., two integers in this case. This representation is called PQ-code.
PQ-code is a memory efficient data representation. The original 6D vector requires $6 * 64 = 384$ bits if a 64 bit float is used for each element. On the other hand, a PQ-code requires only $2 * \log_2 256 = 16$ bits.
Note that we can approximately reconstruct the original vector from a PQ-code, by fetching the codewords using the PQ-code:
End of explanation
# numpy.save('pqcode.npy', X2_pqcode) # You can store the PQ-codes only
Explanation: As can be seen, the reconstructed vectors are similar to the original one.
In a large-scale data processing scenario where all data cannot be stored on memory, you can compress input vectors to PQ-codes, and store the PQ-codes only (X2_pqcode).
End of explanation
kmeans = pqkmeans.clustering.PQKMeans(encoder=encoder, k=10)
Explanation: 2. Clustering by PQk-means
Let us run the clustering over the PQ-codes. The clustering object is instantiated with the trained encoder. Here, we set the number of clusters to $k=10$.
End of explanation
clustered = kmeans.fit_predict(X2_pqcode)
print(clustered[:100]) # Just show the 100 results
Explanation: Let's run the PQk-means over X2_pqcode.
End of explanation
print("The id of assigned codeword for the 1st PQ-code is {}".format(clustered[0]))
print("The id of assigned codeword for the 2nd PQ-code is {}".format(clustered[1]))
print("The id of assigned codeword for the 3rd PQ-code is {}".format(clustered[2]))
Explanation: The resulting vector (clustered) contains the id of assigned codeword for each input PQ-code.
End of explanation
print("clustering centers:{}\n".format(kmeans.cluster_centers_))
Explanation: You can fetch the center of the clustering by:
End of explanation
clustering_centers_numpy = numpy.array(kmeans.cluster_centers_, dtype=encoder.code_dtype) # Convert to np.array with the proper dtype
clustering_centers_reconstructd = encoder.inverse_transform(clustering_centers_numpy) # From PQ-code to 6D vectors
print("reconstructed clustering centers:\n{}".format(clustering_centers_reconstructd))
Explanation: The centers are also PQ-codes. They can be reconstructed by the PQ-encoder.
End of explanation
print("13th input vector:\n{}\n".format(X2[12]))
print("13th PQ code:\n{}\n".format(X2_pqcode[12]))
print("reconstructed 13th PQ code:\n{}\n".format(X2_reconstructed[12]))
print("ID of the assigned center:\n{}\n".format(clustered[12]))
print("Assigned center (PQ-code):\n{}\n".format(kmeans.cluster_centers_[clustered[12]]))
print("Assigned center (reconstructed):\n{}".format(clustering_centers_reconstructd[clustered[12]]))
Explanation: Let's summarize the result:
End of explanation
# pickle.dump(kmeans, open('kmeans.pkl', 'wb')) # Write
# kmeans = pickle.load(open('kmeans.pkl', 'rb')) # Read
Explanation: Note that you can pickle the kmeans instance. The instance can be reused later as a vector quantizer for new input vectors.
End of explanation
from sklearn.cluster import KMeans
X3 = numpy.random.random((1000, 1024)) # 1K 1024-dim vectors, for training
X4 = numpy.random.random((10000, 1024)) # 10K 1024-dim vectors, for database
K = 100
# Train the encoder
encoder_large = pqkmeans.encoder.PQEncoder(num_subdim=4, Ks=256)
encoder_large.fit(X3)
# Encode the vectors to PQ-code
X4_pqcode = encoder_large.transform(X4)
Explanation: 3. Comparison to other clustering methods
Let us compare PQk-means and the traditional k-means using high-dimensional data.
End of explanation
%time clustered_pqkmeans = pqkmeans.clustering.PQKMeans(encoder=encoder_large, k=K).fit_predict(X4_pqcode)
Explanation: Let's run the PQ-kmeans, and see the computational cost
End of explanation
%time clustered_kmeans = KMeans(n_clusters=K, n_jobs=-1).fit_predict(X4)
Explanation: Then, run the traditional k-means clustering
End of explanation
_, pqkmeans_micro_average_error, _ = pqkmeans.evaluation.calc_error(clustered_pqkmeans, X4, K)
_, kmeans_micro_average_error, _ = pqkmeans.evaluation.calc_error(clustered_kmeans, X4, K)
print("PQk-means, micro avg error: {}".format(pqkmeans_micro_average_error))
print("k-means, micro avg error: {}".format(kmeans_micro_average_error))
Explanation: PQk-means would be tens to hundreds of times faster than k-means depending on your machine. Then let's see the accuracy. Since the result of PQk-means is an approximation of that of k-means, k-means achieves the lower error:
End of explanation |
3,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy and J make Sweet Array Love
Import NumPy using the standard naming convention
Step1: Configure the J Python3 addon
To use the J Python3 addon you must edit path variables in jbase.py so Python can locate the J binaries. On my system I set
Step2: Character data is passed as bytes.
Step3: j.j() enters a simple REPL
Running j.j() opens a simple read, execute and reply loop with J. Exit by typing ....
Step4: J accepts a subset of NumPy datatypes
Passing datatypes that do not match the types the J addon supports is allowed but does not work as you might expect.
Step5: As you can see a round trip of numpy booleans generates digital noise.
The only numpy datatypes J natively supports on Win64 systems are
Step6: Basic Operations
Step7: Array Processing
Step8: Indexing and Slicing
Step9: Passing Larger Arrays
Toy interfaces abound. Useful interfaces scale. The current addon is capable of passing
large enough arrays for serious work. Useful subsets of J and NumPy arrays can be memory mapped. It wouldn't
be difficult to memory map very large (gigabyte sized) NumPy arrays for J. | Python Code:
import numpy as np
Explanation: NumPy and J make Sweet Array Love
Import NumPy using the standard naming convention
End of explanation
import sys
# local api/python3 path - adjust path for your system
japipath = 'C:\\j64\\j64-807\\addons\\api\\python3'
if japipath not in sys.path:
sys.path.append(japipath)
sys.path
import jbase as j
print(j.__doc__)
# start J - only one instance currently allowed
try:
j.init()
except:
print('j running')
j.dor("i. 2 3 4") # run sentence and print output result
rc = j.do(('+a.')) # run and return error code
print(rc)
j.getr() # get last output result
j.do('abc=: i.2 3') # define abc
q= j.get('abc') # get q as numpy array from J array
print (q)
j.set('ghi',23+q) # set J array from numpy array
j.dor('ghi') # print array (note typo in addon (j.__doc___)
Explanation: Configure the J Python3 addon
To use the J Python3 addon you must edit path variables in jbase.py so Python can locate the J binaries. On my system I set:
# typical for windows install in home
pathbin= 'c:/j64/j64-807/bin'
pathdll= pathbin+'/j.dll'
pathpro= pathbin+'/profile.ijs'
Ensure jbase.py and jcore.py are on Python's search path
End of explanation
j.do("cows =. 'don''t have a cow man'")
j.get('cows')
ido = "I do what I do because I am what I am!"
j.set("ido", ido)
j.dor("ido")
Explanation: Character data is passed as bytes.
End of explanation
# decomment to run REPL
# j.j()
Explanation: j.j() enters a simple REPL
Running j.j() opens a simple read, execute and reply loop with J. Exit by typing ....
End of explanation
# boolean numpy array
p = np.array([True, False, True, True]).reshape(2,2)
p
j.set("p", p)
j.dor("p")
Explanation: J accepts a subset of NumPy datatypes
Passing datatypes that do not match the types the J addon supports is allowed but does not work as you might expect.
End of explanation
# numpy
a = np.arange(15).reshape(3, 5)
print(a)
# J
j.do("a =. 3 5 $ i. 15")
j.dor("a")
# numpy
a = np.array([2,3,4])
print(a)
# J
j.do("a =. 2 3 4")
j.dor("a")
# numpy
b = np.array([(1.5,2,3), (4,5,6)])
print(b)
# J
j.do("b =. 1.5 2 3 ,: 4 5 6")
j.dor("b")
# numpy
c = np.array( [ [1,2], [3,4] ], dtype=complex )
print(c)
# J
j.do("c =. 0 j.~ 1 2 ,: 3 4")
j.dor("c") # does not show as complex
j.dor("datatype c") # c is complex
# numpy - make complex numbers with nonzero real and imaginary parts
c + (0+4.7j)
# J - also for J
j.dor("c + 0j4.7")
# numpy
np.zeros( (3,4) )
# J
j.dor("3 4 $ 0")
# numpy - allocates array with whatever is in memory
np.empty( (2,3) )
# J - uses fill - safer but slower than numpy's trust memory method
j.dor("2 3 $ 0.0001")
Explanation: As you can see a round trip of numpy booleans generates digital noise.
The only numpy datatypes J natively supports on Win64 systems are:
np.int64
np.float64
simple character strings - passed as bytes
To use other types it will be necessary to encode and decode them with Python and J helper functions.
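A sketch of what such helper functions could look like (the names j_set_bool and j_get_bool are made up for illustration; they simply round-trip booleans through np.int64, which the addon does handle):
def j_set_bool(name, arr):
    # encode: booleans -> int64 before handing the array to J
    j.set(name, arr.astype(np.int64))
def j_get_bool(name):
    # decode: int64 0/1 values back to numpy booleans
    return j.get(name).astype(bool)
j_set_bool("p2", np.array([True, False, True, True]).reshape(2,2))
print(j_get_bool("p2"))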
The limited datatype support is not as limiting as you might expect. The default NumPy array is
np.float64 on 64 bit systems and the majority of NumPy based packages manipulate floating point
and integer arrays.
NumPy and J are derivative Iverson Array Processing Notations
The following NumPy examples are from the SciPy.org's
NumPy quick start tutorial. For each NumPy statement, I have provided a J equivalent
Creating simple arrays
End of explanation
# numpy
a = np.array( [20,30,40,50] )
b = np.arange( 4 )
c = a - b
print(c)
# J
j.do("a =. 20 30 40 50")
j.do("b =. i. 4")
j.do("c =. a - b")
j.dor("c")
# numpy - uses previously defined (b)
b ** 2
# J
j.dor("b ^ 2")
# numpy - uses previously defined (a)
10 * np.sin(a)
# J
j.dor("10 * 1 o. a")
# numpy - booleans are True and False
a < 35
# J - booleans are 1 and 0
j.dor("a < 35")
Explanation: Basic Operations
End of explanation
# numpy
a = np.array( [[1,1], [0,1]] )
b = np.array( [[2,0], [3,4]] )
# elementwise product
a * b
# J
j.do("a =. 1 1 ,: 0 1")
j.do("b =. 2 0 ,: 3 4")
j.dor("a * b")
# numpy - matrix product
np.dot(a, b)
# J - matrix product
j.dor("a +/ . * b")
# numpy - uniform pseudo random - seeds are different in Python and J processes - results will differ
a = np.random.random( (2,3) )
print(a)
# J - uniform pseudo random
j.dor("?. 2 3 $ 0")
# numpy - sum all array elements - implicit ravel
a = np.arange(100).reshape(20,5)
a.sum()
# j - sum all array elements - explicit ravel
j.dor("+/ , 20 5 $ i.100")
# numpy
b = np.arange(12).reshape(3,4)
print(b)
# sum of each column
print(b.sum(axis=0))
# min of each row
print(b.min(axis=1))
# cumulative sum along each row
print(b.cumsum(axis=1))
# transpose
print(b.T)
# J
j.do("b =. 3 4 $ i. 12")
j.dor("b")
# sum of each column
j.dor("+/ b")
# min of each row
j.dor('<./"1 b')
# cumulative sum along each row
j.dor('+/\\"0 1 b') # must escape \ character to pass +/\"0 1 properly to J
# transpose
j.dor("|: b")
Explanation: Array Processing
End of explanation
# numpy
a = np.arange(10) ** 3
print(a[2])
print(a[2:5])
print(a[ : :-1]) # reversal
# J
j.do("a =. (i. 10) ^ 3")
j.dor("2 { a")
j.dor("(2 + i. 3) { a")
j.dor("|. a")
Explanation: Indexing and Slicing
End of explanation
from numpy import pi
x = np.linspace( 0, 2*pi, 100, dtype=np.float64) # useful to evaluate function at lots of points
f = np.sin(x)
f
j.set("f", f)
j.get("f")
r = np.random.random((2000,3000))
r = np.asarray(r, dtype=np.float64)
r
j.set("r", r)
j.get("r")
r.shape
j.get("r").shape
j.dor("r=. ,r")
j.get("r").shape
r.sum()
b = np.ones((5,300,4), dtype=np.int64)
j.set("b", b)
b2 = j.get("b")
print(b.sum())
print(b2.sum())
Explanation: Passing Larger Arrays
Toy interfaces abound. Useful interfaces scale. The current addon is capable of passing
large enough arrays for serious work. Useful subsets of J and NumPy arrays can be memory mapped. It wouldn't
be difficult to memory map very large (gigabyte sized) NumPy arrays for J.
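As a sketch of that memory-mapping idea (the file name and sizes here are arbitrary): create a disk-backed float64 array with np.memmap and hand J ordinary float64 copies of its slices.
big = np.memmap('big_array.dat', dtype=np.float64, mode='w+', shape=(100000, 300))
big[:1000] = 0.5                        # touch part of the mapped file
j.set("chunk", np.array(big[:1000]))    # copy a slice into a plain float64 array for J
j.dor("+/ , chunk")                     # J sums the chunk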
End of explanation |
3,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 9 - Hierarchical Models
9.2.4 - Example
Step1: 9.2.4 - Example
Step2: Figure 9.9
Step3: Model (Kruschke, 2015)
Step4: Figure 9.10 - Marginal posterior distributions
Step5: Shrinkage
Let's create a model with just the theta estimations per practitioner, without the influence of a higher level distribution. Then we can compare the theta values with the hierarchical model above.
Step6: Here we concatenate the trace results (thetas) from both models into a dataframe. Next we shape the data into a format that we can use with Seaborn's pointplot.
Step7: The below plot shows that the theta estimates on practitioner level are pulled towards the group mean of the hierarchical model.
Step8: 9.5.1 - Example
Step9: The DataFrame contains records for 948 players in the 2012 regular season of Major League Baseball.
- One record per player
- 9 primary field positions
Step10: Model (Kruschke, 2015)
Step11: Figure 9.17
Posterior distribution of hyper parameter omega after sampling.
Step12: Posterior distributions of the omega_c parameters after sampling. | Python Code:
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
from IPython.display import Image
from matplotlib import gridspec
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
%load_ext watermark
%watermark -p pandas,numpy,pymc3,matplotlib,seaborn
Explanation: Chapter 9 - Hierarchical Models
9.2.4 - Example: Therapeutic touch
Shrinkage
9.5.1 - Example: Baseball batting abilities by position (subjects within categories)
End of explanation
df = pd.read_csv('data/TherapeuticTouchData.csv', dtype={'s':'category'})
df.info()
df.head()
Explanation: 9.2.4 - Example: Therapeutic touch
End of explanation
df_proportions = df.groupby('s')['y'].apply(lambda x: x.sum()/len(x))
ax = sns.distplot(df_proportions, bins=8, kde=False, color='gray')
ax.set(xlabel='Proportion Correct', ylabel='# Practitioners')
sns.despine(ax=ax);
Explanation: Figure 9.9
End of explanation
Image('images/fig9_7.png', width=200)
practitioner_idx = df.s.cat.codes.values
practitioner_codes = df.s.cat.categories
n_practitioners = practitioner_codes.size
with pm.Model() as hierarchical_model:
omega = pm.Beta('omega', 1., 1.)
kappa_minus2 = pm.Gamma('kappa_minus2', 0.01, 0.01)
kappa = pm.Deterministic('kappa', kappa_minus2 + 2)
theta = pm.Beta('theta', alpha=omega*(kappa-2)+1, beta=(1-omega)*(kappa-2)+1, shape=n_practitioners)
y = pm.Bernoulli('y', theta[practitioner_idx], observed=df.y)
pm.model_to_graphviz(hierarchical_model)
with hierarchical_model:
trace = pm.sample(5000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace, ['omega','kappa', 'theta']);
pm.summary(trace)
# Note that theta is indexed starting with 0 and not 1, as is the case in Kruschke (2015).
Explanation: Model (Kruschke, 2015)
End of explanation
plt.figure(figsize=(10,12))
# Define gridspec
gs = gridspec.GridSpec(4, 6)
ax1 = plt.subplot(gs[0,:3])
ax2 = plt.subplot(gs[0,3:])
ax3 = plt.subplot(gs[1,:2])
ax4 = plt.subplot(gs[1,2:4])
ax5 = plt.subplot(gs[1,4:6])
ax6 = plt.subplot(gs[2,:2])
ax7 = plt.subplot(gs[2,2:4])
ax8 = plt.subplot(gs[2,4:6])
ax9 = plt.subplot(gs[3,:2])
ax10 = plt.subplot(gs[3,2:4])
ax11 = plt.subplot(gs[3,4:6])
# thetas and theta pairs to plot
thetas = (0, 13, 27)
theta_pairs = ((0,13),(0,27),(13,27))
font_d = {'size':14}
# kappa & omega posterior plots
for var, ax in zip(['kappa', 'omega'], [ax1, ax2]):
pm.plot_posterior(trace[var], point_estimate='mode', ax=ax, color=color, round_to=2)
ax.set_xlabel('$\{}$'.format(var), fontdict={'size':20, 'weight':'bold'})
ax1.set(xlim=(0,500))
# theta posterior plots
for var, ax in zip(thetas,[ax3, ax7, ax11]):
pm.plot_posterior(trace['theta'][:,var], point_estimate='mode', ax=ax, color=color)
ax.set_xlabel('theta[{}]'.format(var), fontdict=font_d)
# theta scatter plots
for var, ax in zip(theta_pairs,[ax6, ax9, ax10]):
ax.scatter(trace['theta'][::10,var[0]], trace['theta'][::10,var[1]], alpha=0.75, color=color, facecolor='none')
ax.plot([0, 1], [0, 1], ':k', transform=ax.transAxes, alpha=0.5)
ax.set_xlabel('theta[{}]'.format(var[0]), fontdict=font_d)
ax.set_ylabel('theta[{}]'.format(var[1]), fontdict=font_d)
ax.set(xlim=(0,1), ylim=(0,1), aspect='equal')
# theta posterior differences plots
for var, ax in zip(theta_pairs,[ax4, ax5, ax8]):
pm.plot_posterior(trace['theta'][:,var[0]]-trace['theta'][:,var[1]], point_estimate='mode', ax=ax, color=color)
ax.set_xlabel('theta[{}] - theta[{}]'.format(*var), fontdict=font_d)
plt.tight_layout()
Explanation: Figure 9.10 - Marginal posterior distributions
End of explanation
with pm.Model() as unpooled_model:
theta = pm.Beta('theta', 1, 1, shape=n_practitioners)
y = pm.Bernoulli('y', theta[practitioner_idx], observed=df.y)
pm.model_to_graphviz(unpooled_model)
with unpooled_model:
unpooled_trace = pm.sample(5000, cores=4)
Explanation: Shrinkage
Let's create a model with just the theta estimations per practitioner, without the influence of a higher level distribution. Then we can compare the theta values with the hierarchical model above.
End of explanation
df_shrinkage = (pd.concat([pm.summary(unpooled_trace).iloc[:,0],
pm.summary(trace).iloc[3:,0]],
axis=1)
.reset_index())
df_shrinkage.columns = ['theta', 'unpooled', 'hierarchical']
df_shrinkage = pd.melt(df_shrinkage, 'theta', ['unpooled', 'hierarchical'], var_name='Model')
df_shrinkage.head()
Explanation: Here we concatenate the trace results (thetas) from both models into a dataframe. Next we shape the data into a format that we can use with Seaborn's pointplot.
End of explanation
plt.figure(figsize=(10,9))
plt.scatter(1, pm.summary(trace).iloc[0,0], s=100, c='r', marker='x', zorder=999, label='Group mean')
sns.pointplot(x='Model', y='value', hue='theta', data=df_shrinkage);
Explanation: The below plot shows that the theta estimates on practitioner level are pulled towards the group mean of the hierarchical model.
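To put a number on that shrinkage, a small sketch comparing the mean absolute distance of each model's estimates from the group-mean estimate used in the plot above:
group_mean = pm.summary(trace).iloc[0, 0]  # same group-mean value as in the scatter above
print(df_shrinkage.groupby('Model')['value']
                  .apply(lambda v: (v - group_mean).abs().mean()))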
End of explanation
df2 = pd.read_csv('data/BattingAverage.csv', usecols=[0,1,2,3], dtype={'PriPos':'category'})
df2.info()
Explanation: 9.5.1 - Example: Baseball batting abilities by position
End of explanation
df2['BatAv'] = df2.Hits.divide(df2.AtBats)
df2.head(10)
# Batting average by primary field positions calculated from the data
df2.groupby('PriPos')['Hits','AtBats'].sum().pipe(lambda x: x.Hits/x.AtBats)
Explanation: The DataFrame contains records for 948 players in the 2012 regular season of Major League Baseball.
- One record per player
- 9 primary field positions
End of explanation
Image('images/fig9_13.png', width=300)
pripos_idx = df2.PriPos.cat.codes.values
pripos_codes = df2.PriPos.cat.categories
n_pripos = pripos_codes.size
# df2 contains one entry per player
n_players = df2.index.size
with pm.Model() as hierarchical_model2:
# Hyper parameters
omega = pm.Beta('omega', 1, 1)
kappa_minus2 = pm.Gamma('kappa_minus2', 0.01, 0.01)
kappa = pm.Deterministic('kappa', kappa_minus2 + 2)
# Parameters for categories (Primary field positions)
omega_c = pm.Beta('omega_c',
omega*(kappa-2)+1, (1-omega)*(kappa-2)+1,
shape = n_pripos)
kappa_c_minus2 = pm.Gamma('kappa_c_minus2',
0.01, 0.01,
shape = n_pripos)
kappa_c = pm.Deterministic('kappa_c', kappa_c_minus2 + 2)
# Parameter for individual players
theta = pm.Beta('theta',
omega_c[pripos_idx]*(kappa_c[pripos_idx]-2)+1,
(1-omega_c[pripos_idx])*(kappa_c[pripos_idx]-2)+1,
shape = n_players)
y2 = pm.Binomial('y2', n=df2.AtBats.values, p=theta, observed=df2.Hits)
pm.model_to_graphviz(hierarchical_model2)
with hierarchical_model2:
trace2 = pm.sample(3000, cores=4)
pm.traceplot(trace2, ['omega', 'kappa', 'omega_c', 'kappa_c']);
Explanation: Model (Kruschke, 2015)
End of explanation
pm.plot_posterior(trace2['omega'], point_estimate='mode', color=color)
plt.title('Overall', fontdict={'fontsize':16, 'fontweight':'bold'})
plt.xlabel('omega', fontdict={'fontsize':14});
Explanation: Figure 9.17
Posterior distribution of hyper parameter omega after sampling.
End of explanation
fig, axes = plt.subplots(3,3, figsize=(14,8))
for i, ax in enumerate(axes.T.flatten()):
pm.plot_posterior(trace2['omega_c'][:,i], ax=ax, point_estimate='mode', color=color)
ax.set_title(pripos_codes[i], fontdict={'fontsize':16, 'fontweight':'bold'})
ax.set_xlabel('omega_c__{}'.format(i), fontdict={'fontsize':14})
ax.set_xlim(0.10,0.30)
plt.tight_layout(h_pad=3)
Explanation: Posterior distributions of the omega_c parameters after sampling.
End of explanation |
3,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Error Handling Using Try & Except
Errors should never pass silently.
Unless explicitly silenced. ~ Zen of Python
Hi guys, last lecture we looked at common error messages; in this lecture we shall look at a neat way to handle those errors. The Syntax
Step1: So what’s going on here? Well, we have a list which contains several different data-types. For every 'item' we try to multiply item by itself. If 'item' is a number this makes sense and so we print item * item. However, if we try to multiply a string by a string we get a TypeError, which the except statement catches. So if we receive a TypeError we try something else, in this particular case we add item to item, and thus "aa", "bb", etc get printed.
Now it is important to note this current code is only set up to handle TypeErrors. What happens if we change the above bit of code to divide rather than multiply and feed it 0?
Step2: In this case Python didn't receive a TypeError and thus the except block of code failed to execute. Now, we can fix this code in one of two ways
Step4: The bad fix just leaves a blank except statement, which catches ALL errors. The good fix meanwhile specifically states what errors it should catch, but the code will still fail if the error is something other than Type or dividing by zero.
So why is it bad to leave a bare except statement? Well, as I've stated elsewhere in these lecture series it is often better to crash than it is to output junk. And please trust me when I say bare except clauses are a great way to output junk.
In the second case we specifically state the errors we expect to sometimes receive. And this is nice for a few reasons, first, by naming the exceptions the code is a bit more readable. Secondly, expressly stating the errors forces you to be much more mindful when writing the code in the first place. And thirdly, if something unexpected does happen then, unless you have bare except statements throughout all of your code eventually you'll pass junk data to some function and get an error there. So basically in many cases you are not really solving the problem, you end up delaying the problem.
In short, for most applications you should be looking to handle the minimum number of cases you need for the code to function and in all other cases just let it crash.
Okay, how about one more example? | Python Code:
a_list = [10, 32.4, -14.2, "a", "b", [], [1,2]]
for item in a_list:
try:
print(item * item)
except TypeError:
print(item + item)
Explanation: Error Handling Using Try & Except
Errors should never pass silently.
Unless explicitly silenced. ~ Zen of Python
Hi guys, last lecture we looked at common error messages; in this lecture we shall look at a neat way to handle those errors. The Syntax:
try:
{code block}
except {Error}:
{code block}
Okay, so what does try & except actually do? Well basically, Python tries to execute a statement, but if in the process of executing that statement an error (of type Error) occurs then we do something else instead. In terms of logic, try/except works a bit like if/elif. Here is a simple example:
End of explanation
item = 0
try:
item / item
except TypeError:
print(item + item)
Explanation: So what’s going on here? Well, we have a list which contains several different data-types. For every 'item' we try to multiply item by itself. If 'item' is a number this makes sense and so we print item * item. However, if we try to multiply a string by a string we get a TypeError, which the except statement catches. So if we receive a TypeError we try something else, in this particular case we add item to item, and thus "aa", "bb", etc get printed.
Now it is important to note this current code is only set up to handle TypeErrors. What happens if we change the above bit of code to divide rather than multiply and feed it 0?
End of explanation
x = 0
# The bad fix first...
try:
x / x
except:
print("Bad ", x + x)
# The Good fix...
try:
item / item
except (TypeError, ZeroDivisionError): # please note the "SnakeCase".
print("Good", x + x)
Explanation: In this case Python didn't receive a TypeError and thus the except block of code failed to execute. Now, we can fix this code in one of two ways:
End of explanation
def character_movement(x, y):
    """where (x, y) is the position on a 2-d plane"""
return [("start", (x, y)),
("left", (x -1, y)),("right", (x + 1, y)),
("up", (x, y - 1)), ("down", (x, y + 1))]
the_map = [ [0, 0, 0],
[0, 0, 0],
[0, 0, 1]] # 1 denotes our character
moves = character_movement(2, 2)
print("Starting square = (2,2)")
for (direction, position) in moves[1:]:
print("Trying to move '{}' to square {}:".format(direction, position))
try:
the_map[position[1]][position[0]] = 2
print(*the_map, sep="\n")
print("\n")
except IndexError:
print("Square {}, is out of bounds. IndexError sucessfully caught.\n".format(position))
Explanation: The bad fix just leaves a blank except statement, which catches ALL errors. The good fix meanwhile specifically states what errors it should catch, but the code will still fail if the error is something other than Type or dividing by zero.
So why is it bad to leave a bare except statement? Well, as I've stated elsewhere in these lecture series it is often better to crash than it is to output junk. And please trust me when I say bare except clauses are a great way to output junk.
In the second case we specifically state the errors we expect to sometimes receive. And this is nice for a few reasons, first, by naming the exceptions the code is a bit more readable. Secondly, expressly stating the errors forces you to be much more mindful when writing the code in the first place. And thirdly, if something unexpected does happen then, unless you have bare except statements throughout all of your code eventually you'll pass junk data to some function and get an error there. So basically in many cases you are not really solving the problem, you end up delaying the problem.
In short, for most applications you should be looking to handle the minimum number of cases you need for the code to function and in all other cases just let it crash.
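A tiny sketch of that failure mode: a bare except quietly swallows a typo (a NameError here) and the function returns junk instead of crashing.
def total(values):
    try:
        return sum(valves)   # 'valves' is a typo for 'values' -> NameError
    except:
        return 0             # the bare except hides the bug entirely
print(total([1, 2, 3]))      # prints 0 instead of 6, with no error in sight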
Okay, how about one more example?
End of explanation |
3,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Example
Step2: The data look like they follow a quadratic function. We can set up the following Vandermonde system and use unconstrained least-squares to estimate parameters for a quadratic function.
$$A = \begin{bmatrix}
1 & x_0 & x_0^2\\
1 & x_1 & x_1^2\\
1 & x_2 & x_2^2\\
1 & x_3 & x_3^2\\
1 & x_4 & x_4^2\\
\end{bmatrix}$$
Solving the following least-squares problem for $\beta$ will give us parameters for a quadratic model
Step3: Let's check the solution to see how we did
Step4: Example
Step5: Example
Step6: Example | Python Code:
import numpy as np # we can use np.array to specify problem data
import matplotlib.pyplot as plt
%matplotlib inline
import cvxpy as cvx
Explanation: <a href="https://colab.research.google.com/github/stephenbeckr/convex-optimization-class/blob/master/Demos/CVX_demo/cvxpy_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to CVXPY
CVXPY is a Python-embedded modeling language for (disciplined) convex optimization problems. Much like CVX in MATLAB, it allows you to express the problem in a natural way that follows the math, instead of expressing the problem in a way that conforms to a specific solver's syntax.
Note: originally written by James Folberth, 2017. Some updates Sept 2018 by Stephen Becker, to work with current cvxpy (ver 1.0) -- not all bugs are fixed though. Updated Jan 2021 to work with Google colab (which, as of Jan 25 2021, has cvxpy version 1.0.31 pre-installed), mainly fixing size issues, like (n,) vs (n,1).
CVXPY Homepage
CVXPY Tutorial Documentation
CVXPY Examples
- 2021 update: the CVXPY Examples now have google colab notebooks. Highly recommended! Look at those in addition to (or instead of) this notebook
End of explanation
x = np.array([-3, -1, 0, 1, 2])
y = np.array([0.5, -1, 1.5, 5, 11])
plt.scatter(x,y)
plt.xlabel('x'); plt.ylabel('y'); plt.title('Example Data')
plt.show()
Explanation: Example: Least-Squares Curve Fitting
End of explanation
A = np.column_stack((np.ones(5,), x, x**2))
# now setup and solve with CVXPY
beta = cvx.Variable(3)
# CVXPY's norm behaves like np.linalg.norm
obj = cvx.Minimize(cvx.norm(A*beta-y))
prob = cvx.Problem(obj)
# Assuming the problem follows the DCP ruleset,
# CVXPY will select a solver and try to solve the problem.
# We can check if the problem is a disciplined convex program
# with prob.is_dcp().
prob.solve()
print("Problem status: ", prob.status)
print("Optimal value: ", prob.value)
print("Optimal var:\n", beta.value)
Explanation: The data look like they follow a quadratic function. We can set up the following Vandermonde system and use unconstrained least-squares to estimate parameters for a quadratic function.
$$A = \begin{bmatrix}
1 & x_0 & x_0^2\\
1 & x_1 & x_1^2\\
1 & x_2 & x_2^2\\
1 & x_3 & x_3^2\\
1 & x_4 & x_4^2\\
\end{bmatrix}$$
Solving the following least-squares problem for $\beta$ will give us parameters for a quadratic model:
$$\min_\beta \|A\beta - y\|_2$$
Note that we could easily solve this simple problem with a QR factorization (\ in MATLAB, np.linalg.lstsq in python/numpy).
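As a quick sanity check (a sketch using the A and y defined above), the same fit via NumPy's solver should agree with the CVXPY result:
beta_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print("np.linalg.lstsq solution:\n", beta_lstsq)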
End of explanation
_beta = beta.value # get the optimal vars
_x = np.linspace(x.min(), x.max(), 100)
_y = _beta[0]*np.ones_like(_x) + _beta[1]*_x + _beta[2]*_x**2
plt.scatter(x,y)
plt.plot(_x,_y,'-b')
plt.xlabel('x'); plt.ylabel('y'); plt.title('Example Data with Least-Squares Fit')
plt.show()
Explanation: Let's check the solution to see how we did:
End of explanation
# make a bogus sparse solution and RHS
m = 200; n = 100;
A = np.random.randn(m,n)
_x = np.zeros((n,1)) # just using this notation to show that this is something we're going to pretend we don't actually have
# _x = np.zeros(n) # better, fewer headaches later, but adjust line 8 too
_k = 10
_I = np.random.permutation(n)[0:_k]
_x[_I] = np.random.randn(_k,1)
y = np.dot(A,_x) # this is (200,1), as is A.dot(_x)
# Here's an essential step, change from (200,1) to (200,) size
# if we defined _x=np.zeros((n,1))
y = y.ravel() # https://www.geeksforgeeks.org/differences-flatten-ravel-numpy/
x = cvx.Variable(n) # a bit like sympy. Shape is (n,) not (n,1)
# Even though the cvx.norm function behaves very similarly to
# the np.linalg.norm function, we CANNOT use the np.linalg.norm
# function on CVXPY objects. If we do, we'll probably get a strange
# error message.
obj = cvx.Minimize(cvx.norm(x,1))
# specify a list of constraints
# constraints = [ A*x == y ] # A*x is (200,), as is A@x. This is OK
constraints = [ A@x == y ]
# constraints = [ A.dot(x) == y ] # No, not OK. This is (200,100). CVXPY issue.
# specify and solve the problem
prob = cvx.Problem(obj, constraints)
prob.solve(verbose=True) # let's see the underlying solver's output
print("Problem status: ", prob.status)
print("Optimal value: ", prob.value)
print("True nonzero inds: ", sorted(_I))
print("Recovered nonzero inds: ", sorted(np.where(abs(x.value) > 1e-14)[0]))
# Note: we cannot access "x", we need to do "x.value"
# (also, turn _x to right shape)
err = np.linalg.norm(x.value -_x.ravel())
print(f'Norm of error, ||x-x_est|| is {err:e}')
Explanation: Example: $\ell_1$-norm minimization
Consider the basis pursuit problem
$$\begin{array}{cc} \text{minimize} & \|x\|_1 \\ \text{subject to} & Ax=y.\end{array}$$
This is a least $\ell_1$-norm problem that will hopefully yield a sparse solution $x$.
We now have an objective, $\|x\|_1$, and an equality constraint $Ax=y$.
End of explanation
m = 300; n = 100;
A = np.random.rand(m,n)
# b = A.dot(np.ones((n,1)))/2.
# c = -np.random.rand(n,1)
b = A.dot(np.ones((n)))/2.
c = -np.random.rand(n)
x_rlx = cvx.Variable(n)
obj = cvx.Minimize(c.T*x_rlx)
constraints = [ A@x_rlx <= b,
0 <= x_rlx,
x_rlx <= 1 ]
prob = cvx.Problem(obj, constraints)
prob.solve()
print("Problem status: ", prob.status)
print("Optimal value: ", prob.value)
plt.hist(x_rlx.value)
plt.xlabel('x_rlx'); plt.ylabel('Count')
plt.title('Histogram of elements of x_rlx')
plt.show()
Explanation: Example: Relaxation of Boolean LP
Consider the Boolean linear program
$$\begin{array}{cl} \text{minimize} & c^Tx \\ \text{subject to} & Ax \preceq b \\ & x_i \in \{0,1\}, \quad i=1,...,n.\end{array}$$
Note: the generalized inequality $\preceq$ is just element-wise $\le$ on vectors.
This is not a convex problem, but we can relax it to a linear program and hope that a solution to the relaxed, convex problem is "close" to a solution to the original Boolean LP. A relaxation of the Boolean LP is the following LP:
$$\begin{array}{cl} \text{minimize} & c^Tx \\ \text{subject to} & Ax \preceq b \\ & \mathbf{0} \preceq x \preceq \mathbf{1}.\end{array}$$
The relaxed solution $x^\text{rlx}$ can be used to guess a Boolean point $\hat{x}$ by rounding based on a threshold $t\in[0,1]$:
$$ \hat{x}_i = \left\{\begin{array}{cc} 1 & x_i^\text{rlx} \ge t\\ 0 & \text{otherwise,}\end{array}\right. $$
for $i=1,...,n$. However, the Boolean point $\hat{x}$ might not satisfy $Ax\preceq b$ (i.e., $\hat{x}$ might be infeasible).
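A minimal sketch of that rounding step, using the variables from the cell above (the threshold value is arbitrary):
t = 0.5
x_hat = (x_rlx.value >= t).astype(float)
print("objective at x_hat: ", c.dot(x_hat))
print("x_hat feasible: ", np.all(A.dot(x_hat) <= b))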
From Boyd and Vandenberghe:
You can think of $x_i$ as a job we either accept or decline, and $-c_i$ as the (positive) revenue we generate if we accept job $i$. We can think of $Ax\preceq b$ as a set of limits on $m$ resources. $A_{ij}$, which is positive, is the amount of resource $i$ consumed if we accept job $j$; $b_i$, which is positive, is the amount of resource $i$ available.
End of explanation
# Generate some data
np.random.seed(271828) # solver='CVXOPT' reaches max_iters
m = 2; n = 50
x = np.random.randn(m,n)
# A = cvx.Variable(2,2) # This is old notation, doesn't work anymore
A = cvx.Variable((2,2))
b = cvx.Variable(2)
obj = cvx.Maximize(cvx.log_det(A))
constraints = [ cvx.norm(A*x[:,i] + b) <= 1 for i in range(n) ]
prob = cvx.Problem(obj, constraints)
#prob.solve(solver='CVXOPT', verbose=True) # progress stalls
#prob.solve(solver='CVXOPT', kktsolver='robust', verbose=True) # progress still stalls
prob.solve(solver='SCS', verbose=False) # seems to work, although it's not super accurate
# plot the ellipse and data
angles = np.linspace(0, 2*np.pi, 200)
rhs = np.row_stack((np.cos(angles) - b.value[0], np.sin(angles) - b.value[1]))
ellipse = np.linalg.solve(A.value, rhs)
plt.scatter(x[0,:], x[1,:])
plt.plot(ellipse[0,:].T, ellipse[1,:].T)
plt.xlabel('Dimension 1'); plt.ylabel('Dimension 2')
plt.title('Minimum Volume Ellipsoid')
plt.show()
Explanation: Example: Minimum Volume Ellipsoid
Sometimes an example is particularly hard and we might need to adjust solver options, or use a different solver.
Consider the problem of finding the minimum volume ellipsoid (described by the matrix $A$ and vector $b$) that covers a finite set of points $\{x_i\}_{i=1}^n$ in $\mathbb{R}^2$. The MVE can be found by solving
$$\begin{array}{cl} \text{maximize} & \log(\det(A))\\
\text{subject to} & \|A x_i + b\| \le 1, \quad i=1,...,n.
\end{array}$$
To allow CVXPY to see that the problem conforms to the DCP ruleset, we should use the function cvx.log_det(A) instead of something like log(det(A)).
End of explanation |
3,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: View Table
Step3: Select First X Rows | Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
Explanation: Title: Select First X Rows
Slug: select_first_x_rows
Summary: Select the first X rows in SQL.
Date: 2017-01-16 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
Explanation: Create Data
End of explanation
%%sql
-- Select all
SELECT *
-- From the criminals table
FROM criminals
Explanation: View Table
End of explanation
%%sql
-- Select all
SELECT *
-- From the criminals table
FROM criminals
-- Only return the first two rows
LIMIT 2;
Explanation: Select First X Rows
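Note that without an ORDER BY clause, which rows count as the "first" two is not guaranteed; ordering by a column (a sketch) makes the result deterministic.
%%sql
-- Order before limiting so the "first" two rows are well defined
SELECT *
FROM criminals
ORDER BY pid
LIMIT 2;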
End of explanation |
3,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenMC's general tally system accommodates a wide range of tally filters. While most filters are meant to identify regions of phase space that contribute to a tally, there are a special set of functional expansion filters that will multiply the tally by a set of orthogonal functions, e.g. Legendre polynomials, so that continuous functions of space or angle can be reconstructed from the tallied moments.
In this example, we will determine the spatial dependence of the flux along the $z$ axis by making a Legendre polynomial expansion. Let us represent the flux along the z axis, $\phi(z)$, by the function
$$ \phi(z') = \sum\limits_{n=0}^N a_n P_n(z') $$
where $z'$ is the position normalized to the range [-1, 1]. Since $P_n(z')$ are known functions, our only task is to determine the expansion coefficients, $a_n$. By the orthogonality properties of the Legendre polynomials, one can deduce that the coefficients, $a_n$, are given by
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z').$$
Thus, the problem reduces to finding the integral of the flux times each Legendre polynomial -- a problem which can be solved by using a Monte Carlo tally. By using a Legendre polynomial filter, we obtain stochastic estimates of these integrals for each polynomial order.
Step1: Now that the run is finished, we need to load the results from the statepoint file.
Step2: We've used the get_pandas_dataframe() method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.
Step3: Since the expansion coefficients are given as
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z')$$
we just need to multiply the Legendre moments by $(2n + 1)/2$.
Step4: To plot the flux distribution, we can use the numpy.polynomial.Legendre class which represents a truncated Legendre polynomial series. Since we really want to plot $\phi(z)$ and not $\phi(z')$ we first need to perform a change of variables. Since
$$ \lvert \phi(z) dz \rvert = \lvert \phi(z') dz' \rvert $$
and, for this case, $z = 10z'$, it follows that
$$ \phi(z) = \frac{\phi(z')}{10} = \sum_{n=0}^N \frac{a_n}{10} P_n(z'). $$
Step5: Let's plot it and see how our flux looks!
Step6: As you might expect, we get a rough cosine shape but with a flux depression in the middle due to the boron slab that we introduced. To get a more accurate distribution, we'd likely need to use a higher order expansion.
One more thing we can do is confirm that integrating the distribution gives us the same value as the first moment (since $P_0(z') = 1$). This can easily be done by numerically integrating using the trapezoidal rule | Python Code:
%matplotlib inline
import openmc
import numpy as np
import matplotlib.pyplot as plt
# Define fuel and B4C materials
fuel = openmc.Material()
fuel.add_element('U', 1.0, enrichment=4.5)
fuel.add_nuclide('O16', 2.0)
fuel.set_density('g/cm3', 10.0)
b4c = openmc.Material()
b4c.add_element('B', 4.0)
b4c.add_nuclide('C0', 1.0)
b4c.set_density('g/cm3', 2.5)
# Define surfaces used to construct regions
zmin, zmax = -10., 10.
box = openmc.model.get_rectangular_prism(10., 10., boundary_type='reflective')
bottom = openmc.ZPlane(z0=zmin, boundary_type='vacuum')
boron_lower = openmc.ZPlane(z0=-0.5)
boron_upper = openmc.ZPlane(z0=0.5)
top = openmc.ZPlane(z0=zmax, boundary_type='vacuum')
# Create three cells and add them to geometry
fuel1 = openmc.Cell(fill=fuel, region=box & +bottom & -boron_lower)
absorber = openmc.Cell(fill=b4c, region=box & +boron_lower & -boron_upper)
fuel2 = openmc.Cell(fill=fuel, region=box & +boron_upper & -top)
geom = openmc.Geometry([fuel1, absorber, fuel2])
settings = openmc.Settings()
spatial_dist = openmc.stats.Box(*geom.bounding_box)
settings.source = openmc.Source(space=spatial_dist)
settings.batches = 10
settings.inactive = 0
settings.particles = 1000
# Create a flux tally
flux_tally = openmc.Tally()
flux_tally2 = openmc.Tally()
flux_tally.scores = ['flux']
flux_tally2.scores = ['flux']
# Create a Legendre polynomial expansion filter and add to tally
order = 8
expand_filter = openmc.SpatialLegendreFilter(order, 'z', zmin, zmax)
cell_filter = openmc.CellFilter([absorber, fuel2])
cell_filter2 = openmc.CellFilter([fuel2, fuel1])
flux_tally.filters.append(cell_filter)
flux_tally.filters.append(expand_filter)
flux_tally2.filters.append(expand_filter)
flux_tally2.filters.append(cell_filter2)
tallies = openmc.Tallies([flux_tally, flux_tally2])
model = openmc.model.Model(geometry=geom, settings=settings, tallies=tallies)
model.export_to_xml()
import openmc.capi
openmc.capi.init()
openmc.capi.run()
openmc.capi.finalize()
tallies = openmc.capi.tallies
tallies
openmc.capi.cells
results(tallies[3], 4)
results(tallies[3], 5)
results(tallies[3], 6)
results(tallies[4],4)
results(tallies[4],5)
results(tallies[4],6)
from ctypes import c_int, c_int32, POINTER
from openmc.capi.filter import SpatialLegendreFilter, ZernikeFilter, SphericalHarmonicsFilter, CellFilter
expansion_types = (SpatialLegendreFilter, ZernikeFilter, SphericalHarmonicsFilter)
def results(tally, cell_id):
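    # Return the mean expansion moments of `tally` for the cell with ID `cell_id`,
    # handling either filter ordering: (CellFilter, expansion filter) or
    # (expansion filter, CellFilter).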
filters = tally.filters
if len(filters) != 2:
raise("We expect there to be two filters, "
"one a cell filter and the other an expansion filter")
index_to_id = {}
for key,value in openmc.capi.cells.items():
index_to_id[value._index] = key
if isinstance(filters[0], CellFilter):
cells = filters[0].bins
cell_ids = [index_to_id[cell_index] for cell_index in cells]
if cell_id not in cell_ids:
raise RuntimeError("Requested cell_id not in the passed tally")
stride_integer = cell_ids.index(cell_id)
if not isinstance(filters[1], expansion_types):
raise TypeError("Expected an expansion filter "
"as the second filter")
num_bins = filters[1].order + 1
starting_point = num_bins * stride_integer
return tally.mean[starting_point:starting_point+num_bins]
elif isinstance(filters[0], expansion_types):
num_bins = filters[0].order + 1
if not isinstance(filters[1], CellFilter):
raise TypeError("Expected a cell filter as the second filter")
cells = filters[1].bins
cell_ids = [index_to_id[cell_index] for cell_index in cells]
if cell_id not in cell_ids:
raise RuntimeError("Requested cell_id not in the passed tally")
stride_integer = cell_ids.index(cell_id)
total_bins = cells.size * num_bins
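        # With the expansion filter first, moments for a given cell are interleaved:
        # consecutive expansion bins of the same cell sit cells.size entries apart,
        # hence the strided slice below.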
return tally.mean[stride_integer:stride_integer+total_bins+cells.size:cells.size]
model.run(output=True, openmc_exec='/Users/lindad/projects/Okapi/openmc/build/bin/openmc')
Explanation: OpenMC's general tally system accommodates a wide range of tally filters. While most filters are meant to identify regions of phase space that contribute to a tally, there are a special set of functional expansion filters that will multiply the tally by a set of orthogonal functions, e.g. Legendre polynomials, so that continuous functions of space or angle can be reconstructed from the tallied moments.
In this example, we will determine the spatial dependence of the flux along the $z$ axis by making a Legendre polynomial expansion. Let us represent the flux along the z axis, $\phi(z)$, by the function
$$ \phi(z') = \sum\limits_{n=0}^N a_n P_n(z') $$
where $z'$ is the position normalized to the range [-1, 1]. Since $P_n(z')$ are known functions, our only task is to determine the expansion coefficients, $a_n$. By the orthogonality properties of the Legendre polynomials, one can deduce that the coefficients, $a_n$, are given by
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z').$$
Thus, the problem reduces to finding the integral of the flux times each Legendre polynomial -- a problem which can be solved by using a Monte Carlo tally. By using a Legendre polynomial filter, we obtain stochastic estimates of these integrals for each polynomial order.
End of explanation
with openmc.StatePoint('statepoint.210.h5') as sp:
df = sp.tallies[flux_tally.id].get_pandas_dataframe()
Explanation: Now that the run is finished, we need to load the results from the statepoint file.
End of explanation
df
Explanation: We've used the get_pandas_dataframe() method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.
End of explanation
n = np.arange(order + 1)
a_n = (2*n + 1)/2 * df['mean']
Explanation: Since the expansion coefficients are given as
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z')$$
we just need to multiply the Legendre moments by $(2n + 1)/2$.
End of explanation
phi = np.polynomial.Legendre(a_n/10, domain=(zmin, zmax))
Explanation: To plot the flux distribution, we can use the numpy.polynomial.Legendre class which represents a truncated Legendre polynomial series. Since we really want to plot $\phi(z)$ and not $\phi(z')$ we first need to perform a change of variables. Since
$$ \lvert \phi(z) dz \rvert = \lvert \phi(z') dz' \rvert $$
and, for this case, $z = 10z'$, it follows that
$$ \phi(z) = \frac{\phi(z')}{10} = \sum_{n=0}^N \frac{a_n}{10} P_n(z'). $$
End of explanation
z = np.linspace(zmin, zmax, 1000)
plt.plot(z, phi(z))
plt.xlabel('Z position [cm]')
plt.ylabel('Flux [n/src]')
Explanation: Let's plot it and see how our flux looks!
End of explanation
np.trapz(phi(z), z)
Explanation: As you might expect, we get a rough cosine shape but with a flux depression in the middle due to the boron slab that we introduced. To get a more accurate distribution, we'd likely need to use a higher order expansion.
One more thing we can do is confirm that integrating the distribution gives us the same value as the first moment (since $P_0(z') = 1$). This can easily be done by numerically integrating using the trapezoidal rule:
End of explanation |
3,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - [email protected] - http
Step1: Types of splitting strategies
As said earlier, cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this. The base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_k = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun supports multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
Step2: Stratified cross-validation
On classification data, the best choice is stratified cross-validation. This divides the data in such a way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by the CStratifiedCrossValidationSplitting class.
Step3: Leave One Out cross-validation
Leave One Out Cross-validation holds out one sample as the validation set. It is thus a special case of K-fold cross-validation with $k=n$ where $n$ is number of samples. It is implemented in LOOCrossValidationSplitting class.
Let us visualize the generated folds on the toy data.
Step4: Stratified splitting takes care that each fold has almost the same number of samples from each class. This is not the case with normal splitting which usually leads to imbalanced folds.
Toy example
Step5: Ok, we now have performed classification on the training data. How good did this work? We can easily do this for many different performance measures.
Step6: Note how for example error rate is 1-accuracy. All of those numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! A good performance on the training data alone does not mean anything. A simple look-up table is able to produce zero error on training data. What we want is that our method generalises the input data somehow to perform well on unseen data. We will now use cross-validation to estimate the performance on such unseen data.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the CMachine interface.
Step7: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson
Step8: It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence rate.
Step9: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare a different kernel.
Step10: This gives a brute-force way to select parameters of any algorithm implemented under the CMachine interface. The cool thing about this is that it is also possible to compare different model families against each other. Below, we compare a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the boston housing dataset. Cross-validation is used to find best parameters and also test the performance of the models.
Step11: Let us use cross-validation to compare various values of the tau parameter for ridge regression (Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used since it might be impossible to generate "good" splits using Stratified splitting in case of regression since we have continuous values for labels.
Step12: A low value of error certifies a good pick for the tau parameter, which should be easy to conclude from the plots. In case of Ridge Regression the value of tau, i.e. the amount of regularization, doesn't seem to matter, but it does seem to in case of Kernel Ridge Regression. One interpretation of this could be the lack of overfitting in the feature space for ridge regression and the occurrence of overfitting in the new kernel space in which Kernel Ridge Regression operates. Next we will compare a range of values for the width of the Gaussian Kernel used in Kernel Ridge Regression
Step13: The values for the kernel parameter and tau may not be independent of each other, so the values we have may not be optimal. A brute force way to do this would be to try all the pairs of these values but it is only feasible for a low number of parameters.
Step14: Let us approximately pick the good parameters using the plots. Now that we have the best parameters, let us compare the various regression models on the data set.
Step15: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is by Grid Search. This is done by an exhaustive search of a specified parameter space. CModelSelectionParameters is used to select various parameters and their ranges to be used for model selection. A tree like structure is used where the nodes can be CSGObject or the parameters to the object. The range of values to be searched for the parameters is set using build_values() method.
Step16: Next we will create a CModelSelectionParameters instance with a kernel object, which has to be appended to the root node. The kernel object itself will be appended with a kernel width parameter, which is the parameter we wish to search. | Python Code:
%pylab inline
%matplotlib inline
# include all Shogun classes
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from modshogun import *
# generate some ultra easy training data
gray()
n=20
title('Toy data for binary classification')
X=hstack((randn(2,n), randn(2,n)+1))
Y=hstack((-ones(n), ones(n)))
_=scatter(X[0], X[1], c=Y , s=100)
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Class 1", "Class 2"], loc=2)
# training data in Shogun representation
features=RealFeatures(X)
labels=BinaryLabels(Y)
Explanation: Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - [email protected] - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the model selection framework of his Google summer of code 2011 project | Saurabh Mahindre - github.com/Saurabh7 as a part of Google Summer of Code 2014 project mentored by - Heiko Strathmann
This notebook illustrates the evaluation of prediction algorithms in Shogun using <a href="http://en.wikipedia.org/wiki/Cross-validation_(statistics)">cross-validation</a>, and selecting their parameters using <a href="http://en.wikipedia.org/wiki/Hyperparameter_optimization">grid-search</a>. We demonstrate this for a toy example on <a href="http://en.wikipedia.org/wiki/Binary_classification">Binary Classification</a> using <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machines</a> and also a regression problem on a real world dataset.
General Idea
Splitting Strategies
K-fold cross-validation
Stratified cross-validation
Example: Binary classification
Example: Regression
Model Selection: Grid Search
General Idea
Cross validation aims to estimate an algorithm's performance on unseen data. For example, one might be interested in the average classification accuracy of a Support Vector Machine when applied to new data that it was not trained on. This is important in order to compare the performance of different algorithms on the same target. Most crucial is the point that the data that was used for running/training the algorithm is not used for testing. Different algorithms here can also mean different parameters of the same algorithm. Thus, cross-validation can be used to tune parameters of learning algorithms, as well as to compare different families of algorithms against each other. Cross-validation estimates are related to the marginal likelihood in Bayesian statistics in the sense that using them for selecting models avoids overfitting.
Evaluating an algorithm's performance on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. This is one of the reasons behind splitting the data and using different splits for training and testing, which can be done using cross-validation.
Let us generate some toy data for binary classification to try cross validation on.
End of explanation
k=5
normal_split=CrossValidationSplitting(labels, k)
Explanation: Types of splitting strategies
As said earlier, cross-validation is based on splitting the data into multiple partitions. Shogun has various strategies for this. The base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_k = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun supports multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
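As a quick sanity check of the splitting object created above (a minimal sketch using only calls that already appear in this notebook), each of the $k$ subsets holds roughly $n/k$ of the samples:
# with the 40 toy samples above and k=5, every fold contains about 8 indices
normal_split.build_subsets()
print("samples in fold 0: %d" % len(normal_split.generate_subset_indices(0)))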
End of explanation
stratified_split=StratifiedCrossValidationSplitting(labels, k)
Explanation: Stratified cross-validation
On classification data, the best choice is stratified cross-validation. This divides the data in such a way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by the CStratifiedCrossValidationSplitting class.
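A quick way to see the effect (a small sketch reusing the splitter defined above) is to look at the label balance inside a single stratified fold, which should mirror the 50/50 balance of the full toy dataset:
# fraction of +1 labels inside the first stratified fold (roughly 0.5)
stratified_split.build_subsets()
idx = stratified_split.generate_subset_indices(0)
fold_labels = [Y[int(i)] for i in idx]
print("fraction of class +1 in fold 0: %.2f" % (sum(l == 1 for l in fold_labels) / float(len(fold_labels))))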
End of explanation
split_strategies=[stratified_split, normal_split]
#code to visualize splitting
def get_folds(split, num):
split.build_subsets()
x=[]
y=[]
lab=[]
for j in range(num):
indices=split.generate_subset_indices(j)
x_=[]
y_=[]
lab_=[]
for i in range(len(indices)):
x_.append(X[0][indices[i]])
y_.append(X[1][indices[i]])
lab_.append(Y[indices[i]])
x.append(x_)
y.append(y_)
lab.append(lab_)
return x, y, lab
def plot_folds(split_strategies, num):
for i in range(len(split_strategies)):
x, y, lab=get_folds(split_strategies[i], num)
figure(figsize=(18,4))
gray()
suptitle(split_strategies[i].get_name(), fontsize=12)
for j in range(0, num):
subplot(1, num, (j+1), title='Fold %s' %(j+1))
scatter(x[j], y[j], c=lab[j], s=100)
_=plot_folds(split_strategies, 4)
Explanation: Leave One Out cross-validation
Leave One Out Cross-validation holds out one sample as the validation set. It is thus a special case of K-fold cross-validation with $k=n$, where $n$ is the number of samples. It is implemented in the LOOCrossValidationSplitting class.
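A minimal sketch of using it (assuming its constructor only needs the labels, since the number of folds is implied by the number of samples):
# every Leave One Out fold contains exactly one sample
loo_split = LOOCrossValidationSplitting(labels)
loo_split.build_subsets()
print("samples in the first LOO fold: %d" % len(loo_split.generate_subset_indices(0)))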
Let us visualize the generated folds on the toy data.
End of explanation
# define SVM with a small rbf kernel (always normalise the kernel!)
C=1
kernel=GaussianKernel(2, 0.001)
kernel.init(features, features)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
classifier=LibSVM(C, kernel, labels)
# train
_=classifier.train()
Explanation: Stratified splitting ensures that each fold has almost the same number of samples from each class. This is not the case with normal splitting, which usually leads to imbalanced folds.
Toy example: Binary Support Vector Classification
Following the example from above, we will tune the performance of an SVM on the binary classification problem. We will
demonstrate how to evaluate a loss function or metric on a given algorithm
then learn how to estimate this metric for the algorithm performing on unseen data
and finally use those techniques to tune the parameters to obtain the best possible results.
The involved methods are
LibSVM as the binary classification algorithm
the area under the ROC curve (AUC) as the performance metric
three different kernels to compare
End of explanation
# instantiate a number of Shogun performance measures
metrics=[ROCEvaluation(), AccuracyMeasure(), ErrorRateMeasure(), F1Measure(), PrecisionMeasure(), RecallMeasure(), SpecificityMeasure()]
for metric in metrics:
print metric.get_name(), metric.evaluate(classifier.apply(features), labels)
Explanation: OK, we have now performed classification on the training data. How well did this work? We can easily evaluate this with many different performance measures.
End of explanation
metric=AccuracyMeasure()
cross=CrossValidation(classifier, features, labels, stratified_split, metric)
# perform the cross-validation, note that this call involved a lot of computation
result=cross.evaluate()
# the result needs to be casted to CrossValidationResult
result=CrossValidationResult.obtain_from_generic(result)
# this class contains a field "mean" which contain the mean performance metric
print "Testing", metric.get_name(), result.mean
Explanation: Note how, for example, the error rate is 1 - accuracy. All of those numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! A good performance on the training data alone does not mean anything. A simple look-up table is able to produce zero error on training data. What we want is that our method generalises from the input data somehow, so that it performs well on unseen data. We will now use cross-validation to estimate the performance on such data.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the CMachine interface.
End of explanation
print "Testing", metric.get_name(), [CrossValidationResult.obtain_from_generic(cross.evaluate()).mean for _ in range(10)]
Explanation: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson: Never judge your algorithms based on the performance on training data!
Note that for small data sizes, the cross-validation estimates are quite noisy. If we run it multiple times, we get different results.
End of explanation
# 25 runs and 95% confidence intervals
cross.set_num_runs(25)
# perform x-validation (now even more expensive)
cross.evaluate()
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print "Testing cross-validation mean %.2f " \
% (result.mean)
Explanation: It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence rate.
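A rough sketch of such an interval (not a proper confidence interval, just the spread over a handful of repeated estimates, and computationally expensive):
# repeat the estimate a few times and look at the spread of the means
means = array([CrossValidationResult.obtain_from_generic(cross.evaluate()).mean
               for _ in range(10)])
print("mean %.2f, approximate 95%% range [%.2f, %.2f]"
      % (means.mean(), percentile(means, 2.5), percentile(means, 97.5)))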
End of explanation
widths=2**linspace(-5,25,10)
results=zeros(len(widths))
for i in range(len(results)):
kernel.set_width(widths[i])
result=CrossValidationResult.obtain_from_generic(cross.evaluate())
results[i]=result.mean
plot(log2(widths), results, 'blue')
xlabel("log2 Kernel width")
ylabel(metric.get_name())
_=title("Accuracy for different kernel widths")
print "Best Gaussian kernel width %.2f" % widths[results.argmax()], "gives", results.max()
# compare this with a linear kernel
classifier.set_kernel(LinearKernel())
lin_k=CrossValidationResult.obtain_from_generic(cross.evaluate())
plot([log2(widths[0]), log2(widths[len(widths)-1])], [lin_k.mean,lin_k.mean], 'r')
# please excuse this horrible code :)
print "Linear kernel gives", lin_k.mean
_=legend(["Gaussian", "Linear"], loc="lower center")
Explanation: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare entirely different kernels, as done above with the linear kernel.
End of explanation
feats=RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
labels=RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
preproc=RescaleFeatures()
preproc.init(feats)
feats.add_preprocessor(preproc)
feats.apply_preprocessor(True)
#Regression models
ls=LeastSquaresRegression(feats, labels)
tau=1
rr=LinearRidgeRegression(tau, feats, labels)
width=1
tau=1
kernel=GaussianKernel(feats, feats, width)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
krr=KernelRidgeRegression(tau, kernel, labels)
regression_models=[ls, rr, krr]
Explanation: This gives a brute-force way to select parameters of any algorithm implemented under the CMachine interface. The cool thing about this is that it is also possible to compare different model families against each other. Below, we compare a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the Boston Housing dataset. Cross-validation is used to find the best parameters and also to test the performance of the models.
End of explanation
n=30
taus = logspace(-4, 1, n)
#5-fold cross-validation
k=5
split=CrossValidationSplitting(labels, k)
metric=MeanSquaredError()
cross=CrossValidation(rr, feats, labels, split, metric)
cross.set_num_runs(50)
errors=[]
for tau in taus:
#set necessary parameter
rr.set_tau(tau)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#Enlist mean error for all runs
errors.append(result.mean)
figure(figsize=(20,6))
suptitle("Finding best (tau) parameter using cross-validation", fontsize=12)
p=subplot(121)
title("Ridge Regression")
plot(taus, errors, linewidth=3)
p.set_xscale('log')
p.set_ylim([0, 80])
xlabel("Taus")
ylabel("Mean Squared Error")
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(50)
errors=[]
for tau in taus:
krr.set_tau(tau)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#print tau, "error", result.mean
errors.append(result.mean)
p2=subplot(122)
title("Kernel Ridge regression")
plot(taus, errors, linewidth=3)
p2.set_xscale('log')
xlabel("Taus")
_=ylabel("Mean Squared Error")
Explanation: Let us use cross-validation to compare various values of the tau parameter for ridge regression (Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used, since it might be impossible to generate "good" splits using stratified splitting in the case of regression, where the labels take continuous values.
End of explanation
n=50
widths=logspace(-2, 3, n)
krr.set_tau(0.1)
metric=MeanSquaredError()
k=5
split=CrossValidationSplitting(labels, k)
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(10)
errors=[]
for width in widths:
kernel.set_width(width)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#print width, "error", result.mean
errors.append(result.mean)
figure(figsize=(15,5))
p=subplot(121)
title("Finding best width using cross-validation")
plot(widths, errors, linewidth=3)
p.set_xscale('log')
xlabel("Widths")
_=ylabel("Mean Squared Error")
Explanation: A low error value certifies a good pick for the tau parameter, which should be easy to conclude from the plots. In the case of Ridge Regression the value of tau, i.e. the amount of regularization, doesn't seem to matter, but it does in the case of Kernel Ridge Regression. One interpretation of this could be the lack of overfitting in the feature space for ridge regression and the occurrence of overfitting in the new kernel space in which Kernel Ridge Regression operates. Next we will compare a range of values for the width of the Gaussian kernel used in Kernel Ridge Regression.
End of explanation
n=40
taus = logspace(-3, 0, n)
widths=logspace(-1, 4, n)
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(1)
x, y=meshgrid(taus, widths)
grid=array((ravel(x), ravel(y)))
print grid.shape
errors=[]
for i in range(0, n*n):
krr.set_tau(grid[:,i][0])
kernel.set_width(grid[:,i][1])
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
errors.append(result.mean)
errors=array(errors).reshape((n, n))
from mpl_toolkits.mplot3d import Axes3D
#taus = logspace(0.5, 1, n)
jet()
fig=figure(figsize(15,7))
ax=subplot(121)
c=pcolor(x, y, errors)
_=contour(x, y, errors, linewidths=1, colors='black')
_=colorbar(c)
xlabel('Taus')
ylabel('Widths')
ax.set_xscale('log')
ax.set_yscale('log')
ax1=fig.add_subplot(122, projection='3d')
ax1.plot_wireframe(log10(y),log10(x), errors, linewidths=2, alpha=0.6)
ax1.view_init(30,-40)
xlabel('Taus')
ylabel('Widths')
_=ax1.set_zlabel('Error')
Explanation: The values for the kernel parameter and tau may not be independent of each other, so the values we have may not be optimal. A brute-force way around this is to try all pairs of these values, but that is only feasible for a low number of parameters.
End of explanation
#use the best parameters
rr.set_tau(1)
krr.set_tau(0.05)
kernel.set_width(2)
title_='Performance on Boston Housing dataset'
print "%50s" %title_
for machine in regression_models:
metric=MeanSquaredError()
cross=CrossValidation(machine, feats, labels, split, metric)
cross.set_num_runs(25)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print "-"*80
print "|", "%30s" % machine.get_name(),"|", "%20s" %metric.get_name(),"|","%20s" %result.mean ,"|"
print "-"*80
Explanation: Let us approximately pick good parameters using the plots. Now that we have the best parameters, let us compare the various regression models on the dataset.
End of explanation
#Root
param_tree_root=ModelSelectionParameters()
#Parameter tau
tau=ModelSelectionParameters("tau")
param_tree_root.append_child(tau)
# also R_LINEAR/R_LOG is available as type
min_value=0.01
max_value=1
type_=R_LINEAR
step=0.05
base=2
tau.build_values(min_value, max_value, type_, step, base)
Explanation: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is Grid Search. This is done by an exhaustive search of a specified parameter space. CModelSelectionParameters is used to select various parameters and their ranges to be used for model selection. A tree-like structure is used where the nodes can be CSGObject instances or the parameters of the object. The range of values to be searched for a parameter is set using the build_values() method.
End of explanation
#kernel object
param_gaussian_kernel=ModelSelectionParameters("kernel", kernel)
gaussian_kernel_width=ModelSelectionParameters("log_width")
gaussian_kernel_width.build_values(0.1, 6.0, R_LINEAR, 0.5, 2.0)
#kernel parameter
param_gaussian_kernel.append_child(gaussian_kernel_width)
param_tree_root.append_child(param_gaussian_kernel)
# cross validation instance used
cross_validation=CrossValidation(krr, feats, labels, split, metric)
cross_validation.set_num_runs(1)
# model selection instance
model_selection=GridSearchModelSelection(cross_validation, param_tree_root)
print_state=False
# TODO: enable it once crossval has been fixed
#best_parameters=model_selection.select_model(print_state)
#best_parameters.apply_to_machine(krr)
#best_parameters.print_tree()
result=cross_validation.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print 'Error with Best parameters:', result.mean
Explanation: Next we will create a CModelSelectionParameters instance with a kernel object, which has to be appended to the root node. The kernel object itself will be appended with a kernel width parameter, which is the parameter we wish to search.
End of explanation |
3,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming example
Case study
IntelliJ IDEA
IDE for Java developers
Written almost entirely in Java
Large project that has been active for a long time
I. Question (1/3)
Write down the question explicitly
Explain the analysis idea in an understandable way
I. Question (2/3)
<b>Question</b>
* Which source code files are particularly complex and have changed frequently in recent times?
I. Question (3/3)
Implementation ideas
Tools
Step1: We take a look at basic information about the dataset.
Step2: <b>1</b> DataFrame (~ a programmable Excel worksheet), <b>6</b> Series (= columns), <b>1128819</b> entries (= rows)
We convert the timestamps from text into objects.
Step3: We look only at the most recent changes.
Step4: We only want to use Java code.
Step5: III. Formal modeling
Create new views
Join in additional data
We count the number of changes per file.
Step6: We add information about the lines of code...
Step7: ...and join it with the existing data.
Step8: IV. Interpretation
Work out the core result of the analysis
Make the central message / new insights clear
We show only the TOP 10 hotspots in the code.
Step9: V. Communication
Transform the insights into an understandable visualization
Communicate the next steps after the analysis
We create an XY plot from the TOP 10 list.
Step10: Bonus
Step11: We visualize this with a simple line chart. | Python Code:
import pandas as pd
log = pd.read_csv("dataset/git_log_intellij.csv.gz")
log.head()
Explanation: Programming example
Case study
IntelliJ IDEA
IDE for Java developers
Written almost entirely in Java
Large project that has been active for a long time
I. Question (1/3)
Write down the question explicitly
Explain the analysis idea in an understandable way
I. Question (2/3)
<b>Question</b>
* Which source code files are particularly complex and have changed frequently in recent times?
I. Question (3/3)
Implementation ideas
Tools: Jupyter, Python, pandas, matplotlib
Heuristics:
"complex": many lines of source code
"changes ... frequently": high number of commits
"recently": the last 90 days
Meta-goal: getting to know the basic mechanics.
II. Exploratory data analysis
Find and load possible software data
Clean and filter the raw data
We load a data export from a Git repository.
End of explanation
log.info()
Explanation: We take a look at basic information about the dataset.
End of explanation
log['timestamp'] = pd.to_datetime(log['timestamp'])
log.head()
Explanation: <b>1</b> DataFrame (~ a programmable Excel worksheet), <b>6</b> Series (= columns), <b>1128819</b> entries (= rows)
We convert the timestamps from text into objects.
End of explanation
# use log['timestamp'].max() instead of pd.Timedelta('today') to avoid outdated data in the future
recent = log[log['timestamp'] > log['timestamp'].max() - pd.Timedelta('90 days')]
recent.head()
Explanation: We look only at the most recent changes.
End of explanation
java = recent[recent['filename'].str.endswith(".java")].copy()
java.head()
Explanation: We only want to use Java code.
End of explanation
changes = java.groupby('filename')[['sha']].count()
changes.head()
Explanation: III. Formal modeling
Create new views
Join in additional data
We count the number of changes per file.
End of explanation
loc = pd.read_csv("dataset/cloc_intellij.csv.gz", index_col=1)
loc.head()
Explanation: We add information about the lines of code...
End of explanation
hotspots = changes.join(loc[['code']]).dropna(subset=['code'])
hotspots.head()
Explanation: ...and join it with the existing data.
End of explanation
top10 = hotspots.sort_values(by="sha", ascending=False).head(10)
top10
Explanation: IV. Interpretation
Work out the core result of the analysis
Make the central message / new insights clear
We show only the TOP 10 hotspots in the code.
End of explanation
ax = top10.plot.scatter('sha', 'code');
for k, v in top10.iterrows():
ax.annotate(k.split("/")[-1], v)
Explanation: V. Communication
Transform the insights into an understandable visualization
Communicate the next steps after the analysis
We create an XY plot from the TOP 10 list.
End of explanation
most_changes = hotspots['sha'].sort_values(ascending=False)
most_changes.head(10)
Explanation: Bonus: Which files change particularly often in general?
End of explanation
most_changes.plot(rot=90);
Explanation: We visualize this with a simple line chart.
End of explanation |
3,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Annotating continuous data
This tutorial describes adding annotations to a
Step1:
Step2: Notice that orig_time is None, because we haven't specified it. In
those cases, when you add the annotations to a
Step3: Since the example data comes from a Neuromag system that starts counting
sample numbers before the recording begins, adding my_annot to the
Step4: If you know that your annotation onsets are relative to some other time, you
can set orig_time before you call
Step5: <div class="alert alert-info"><h4>Note</h4><p>If your annotations fall outside the range of data times in the
Step6: The three annotations appear as differently colored rectangles because they
have different description values (which are printed along the top
edge of the plot area). Notice also that colored spans appear in the small
scroll bar at the bottom of the plot window, making it easy to quickly view
where in a
Step7: The colored rings are clickable, and determine which existing label will be
created by the next click-and-drag operation in the main plot window. New
annotation descriptions can be added by typing the new description,
clicking the
Step8: Notice that it is possible to create overlapping annotations, even when they
share the same description. This is not possible when annotating
interactively; click-and-dragging to create a new annotation that overlaps
with an existing annotation with the same description will cause the old and
new annotations to be merged.
Individual annotations can be accessed by indexing an
Step9: You can also iterate over the annotations within an
Step10: Note that iterating, indexing and slicing
Step11: Reading and writing Annotations to/from a file | Python Code:
import os
from datetime import timedelta
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
Explanation: Annotating continuous data
This tutorial describes adding annotations to a :class:~mne.io.Raw object,
and how annotations are used in later stages of data processing.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and (since we won't actually analyze the
raw data in this tutorial) cropping the :class:~mne.io.Raw object to just 60
seconds before loading it into RAM to save memory:
End of explanation
my_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['AAA', 'BBB', 'CCC'])
print(my_annot)
Explanation: :class:~mne.Annotations in MNE-Python are a way of storing short strings of
information about temporal spans of a :class:~mne.io.Raw object. Below the
surface, :class:~mne.Annotations are :class:list-like <list> objects,
where each element comprises three pieces of information: an onset time
(in seconds), a duration (also in seconds), and a description (a text
string). Additionally, the :class:~mne.Annotations object itself also keeps
track of orig_time, which is a POSIX timestamp_ denoting a real-world
time relative to which the annotation onsets should be interpreted.
Creating annotations programmatically
If you know in advance what spans of the :class:~mne.io.Raw object you want
to annotate, :class:~mne.Annotations can be created programmatically, and
you can even pass lists or arrays to the :class:~mne.Annotations
constructor to annotate multiple spans at once:
End of explanation
raw.set_annotations(my_annot)
print(raw.annotations)
# convert meas_date (a tuple of seconds, microseconds) into a float:
meas_date = raw.info['meas_date']
orig_time = raw.annotations.orig_time
print(meas_date == orig_time)
Explanation: Notice that orig_time is None, because we haven't specified it. In
those cases, when you add the annotations to a :class:~mne.io.Raw object,
it is assumed that the orig_time matches the time of the first sample of
the recording, so orig_time will be set to match the recording
measurement date (raw.info['meas_date']).
End of explanation
time_of_first_sample = raw.first_samp / raw.info['sfreq']
print(my_annot.onset + time_of_first_sample)
print(raw.annotations.onset)
Explanation: Since the example data comes from a Neuromag system that starts counting
sample numbers before the recording begins, adding my_annot to the
:class:~mne.io.Raw object also involved another automatic change: an offset
equalling the time of the first recorded sample (raw.first_samp /
raw.info['sfreq']) was added to the onset values of each annotation
(see time-as-index for more info on raw.first_samp):
End of explanation
time_format = '%Y-%m-%d %H:%M:%S.%f'
new_orig_time = (meas_date + timedelta(seconds=50)).strftime(time_format)
print(new_orig_time)
later_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['DDD', 'EEE', 'FFF'],
orig_time=new_orig_time)
raw2 = raw.copy().set_annotations(later_annot)
print(later_annot.onset)
print(raw2.annotations.onset)
Explanation: If you know that your annotation onsets are relative to some other time, you
can set orig_time before you call :meth:~mne.io.Raw.set_annotations,
and the onset times will get adjusted based on the time difference between
your specified orig_time and raw.info['meas_date'], but without the
additional adjustment for raw.first_samp. orig_time can be specified
in various ways (see the documentation of :class:~mne.Annotations for the
options); here we'll use an ISO 8601_ formatted string, and set it to be 50
seconds later than raw.info['meas_date'].
End of explanation
fig = raw.plot(start=2, duration=6)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>If your annotations fall outside the range of data times in the
:class:`~mne.io.Raw` object, the annotations outside the data range will
not be added to ``raw.annotations``, and a warning will be issued.</p></div>
Now that your annotations have been added to a :class:~mne.io.Raw object,
you can see them when you visualize the :class:~mne.io.Raw object:
End of explanation
fig.canvas.key_press_event('a')
Explanation: The three annotations appear as differently colored rectangles because they
have different description values (which are printed along the top
edge of the plot area). Notice also that colored spans appear in the small
scroll bar at the bottom of the plot window, making it easy to quickly view
where in a :class:~mne.io.Raw object the annotations are so you can easily
browse through the data to find and examine them.
Annotating Raw objects interactively
Annotations can also be added to a :class:~mne.io.Raw object interactively
by clicking-and-dragging the mouse in the plot window. To do this, you must
first enter "annotation mode" by pressing :kbd:a while the plot window is
focused; this will bring up the annotation controls window:
End of explanation
new_annot = mne.Annotations(onset=3.75, duration=0.75, description='AAA')
raw.set_annotations(my_annot + new_annot)
raw.plot(start=2, duration=6)
Explanation: The colored rings are clickable, and determine which existing label will be
created by the next click-and-drag operation in the main plot window. New
annotation descriptions can be added by typing the new description,
clicking the :guilabel:Add label button; the new description will be added
to the list of descriptions and automatically selected.
During interactive annotation it is also possible to adjust the start and end
times of existing annotations, by clicking-and-dragging on the left or right
edges of the highlighting rectangle corresponding to that annotation.
<div class="alert alert-danger"><h4>Warning</h4><p>Calling :meth:`~mne.io.Raw.set_annotations` **replaces** any annotations
currently stored in the :class:`~mne.io.Raw` object, so be careful when
working with annotations that were created interactively (you could lose
a lot of work if you accidentally overwrite your interactive
annotations). A good safeguard is to run
``interactive_annot = raw.annotations`` after you finish an interactive
annotation session, so that the annotations are stored in a separate
variable outside the :class:`~mne.io.Raw` object.</p></div>
How annotations affect preprocessing and analysis
You may have noticed that the description for new labels in the annotation
controls window defaults to BAD_. The reason for this is that annotation
is often used to mark bad temporal spans of data (such as movement artifacts
or environmental interference that cannot be removed in other ways such as
projection <tut-projectors-background> or filtering). Several
MNE-Python operations
are "annotation aware" and will avoid using data that is annotated with a
description that begins with "bad" or "BAD"; such operations typically have a
boolean reject_by_annotation parameter. Examples of such operations are
independent components analysis (:class:mne.preprocessing.ICA), functions
for finding heartbeat and blink artifacts
(:func:~mne.preprocessing.find_ecg_events,
:func:~mne.preprocessing.find_eog_events), and creation of epoched data
from continuous data (:class:mne.Epochs). See tut-reject-data-spans
for details.
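As a small illustration (a sketch that is not part of the original tutorial, using a hypothetical fixed-length event grid), epoching with reject_by_annotation=True silently drops every epoch that overlaps a span whose description starts with "bad"/"BAD"; the 'AAA'/'BBB'/'CCC' spans created above would not trigger any rejection:
# hypothetical 1-second events over the cropped recording
events = mne.make_fixed_length_events(raw, duration=1.)
epochs = mne.Epochs(raw, events, tmin=0., tmax=1.,
                    reject_by_annotation=True, preload=True)
print(epochs)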
Operations on Annotations objects
:class:~mne.Annotations objects can be combined by simply adding them with
the + operator, as long as they share the same orig_time:
End of explanation
print(raw.annotations[0]) # just the first annotation
print(raw.annotations[:2]) # the first two annotations
print(raw.annotations[(3, 2)]) # the fourth and third annotations
Explanation: Notice that it is possible to create overlapping annotations, even when they
share the same description. This is not possible when annotating
interactively; click-and-dragging to create a new annotation that overlaps
with an existing annotation with the same description will cause the old and
new annotations to be merged.
Individual annotations can be accessed by indexing an
:class:~mne.Annotations object, and subsets of the annotations can be
achieved by either slicing or indexing with a list, tuple, or array of
indices:
End of explanation
for ann in raw.annotations:
descr = ann['description']
start = ann['onset']
end = ann['onset'] + ann['duration']
print("'{}' goes from {} to {}".format(descr, start, end))
Explanation: You can also iterate over the annotations within an :class:~mne.Annotations
object:
End of explanation
# later_annot WILL be changed, because we're modifying the first element of
# later_annot.onset directly:
later_annot.onset[0] = 99
# later_annot WILL NOT be changed, because later_annot[0] returns a copy
# before the 'onset' field is changed:
later_annot[0]['onset'] = 77
print(later_annot[0]['onset'])
Explanation: Note that iterating, indexing and slicing :class:~mne.Annotations all
return a copy, so changes to an indexed, sliced, or iterated element will not
modify the original :class:~mne.Annotations object.
End of explanation
raw.annotations.save('saved-annotations.csv')
annot_from_file = mne.read_annotations('saved-annotations.csv')
print(annot_from_file)
Explanation: Reading and writing Annotations to/from a file
:class:~mne.Annotations objects have a :meth:~mne.Annotations.save method
which can write :file:.fif, :file:.csv, and :file:.txt formats (the
format to write is inferred from the file extension in the filename you
provide). There is a corresponding :func:~mne.read_annotations function to
load them from disk:
End of explanation |
3,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Train your first neural network
Step2: Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here
Step3: Loading the dataset returns four NumPy arrays
Step4: Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels
Step5: Likewise, there are 60,000 labels in the training set
Step6: Each label is an integer between 0 and 9
Step7: There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels
Step8: And the test set contains 10,000 image labels
Step9: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255
Step10: We scale these values to a range of 0 to 1 before feeding to the neural network model. For this, cast the datatype of the image components from an integer to a float, and divide by 255. Here's the function to preprocess the images
Step11: Display the first 25 images from the training set and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network.
Step12: Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
Setup the layers
The basic building block of a neural network is the layer. Layers extract representations from the data fed into them. And, hopefully, these representations are more meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, like tf.keras.layers.Dense, have parameters that are learned during training.
Step13: The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely-connected, or fully-connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer—this returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's compile step
Step14: Train the model
Training the neural network model requires the following steps
Step15: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset
Step16: It turns out, the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of overfitting. Overfitting is when a machine learning model performs worse on new data than on their training data.
Make predictions
With the model trained, we can use it to make predictions about some images.
Step17: Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction
Step18: A prediction is an array of 10 numbers. These describe the "confidence" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value
Step19: So the model is most confident that this image is an ankle boot, or class_names[9]. And we can check the test label to see this is correct
Step20: We can graph this to look at the full set of 10 channels
Step21: Let's look at the 0th image, predictions, and prediction array.
Step22: Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident.
Step23: Finally, use the trained model to make a prediction about a single image.
Step24: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. So even though we're using a single image, we need to add it to a list
Step25: Now predict the image
Step26: model.predict returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
Explanation: Train your first neural network: basic classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/basic_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand all the details; this is a fast-paced overview of a complete TensorFlow program with the details explained as we go.
This guide uses tf.keras, a high-level API to build and train models in TensorFlow.
End of explanation
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Explanation: Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc) in an identical format to the articles of clothing we'll use here.
This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow, just import and load the data:
End of explanation
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
Explanation: Loading the dataset returns four NumPy arrays:
The train_images and train_labels arrays are the training set—the data the model uses to learn.
The model is tested against the test set, the test_images, and test_labels arrays.
The images are 28x28 NumPy arrays, with pixel values ranging between 0 and 255. The labels are an array of integers, ranging from 0 to 9. These correspond to the class of clothing the image represents:
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images:
End of explanation
train_images.shape
Explanation: Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
End of explanation
len(train_labels)
Explanation: Likewise, there are 60,000 labels in the training set:
End of explanation
train_labels
Explanation: Each label is an integer between 0 and 9:
End of explanation
test_images.shape
Explanation: There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:
End of explanation
len(test_labels)
Explanation: And the test set contains 10,000 image labels:
End of explanation
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
Explanation: Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
End of explanation
train_images = train_images / 255.0
test_images = test_images / 255.0
Explanation: We scale these values to a range of 0 to 1 before feeding them to the neural network model. To do this, we simply divide the values by 255, which also casts them from integers to floats:
It's important that the training set and the testing set are preprocessed in the same way:
End of explanation
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
Explanation: Display the first 25 images from the training set and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network.
End of explanation
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
Explanation: Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
Setup the layers
The basic building block of a neural network is the layer. Layers extract representations from the data fed into them. And, hopefully, these representations are more meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, like tf.keras.layers.Dense, have parameters that are learned during training.
End of explanation
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
Explanation: The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely-connected, or fully-connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer—this returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.
Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's compile step:
Loss function —This measures how accurate the model is during training. We want to minimize this function to "steer" the model in the right direction.
Optimizer —This is how the model is updated based on the data it sees and its loss function.
Metrics —Used to monitor the training and testing steps. The following example uses accuracy, the fraction of the images that are correctly classified.
End of explanation
model.fit(train_images, train_labels, epochs=5)
Explanation: Train the model
Training the neural network model requires the following steps:
Feed the training data to the model—in this example, the train_images and train_labels arrays.
The model learns to associate images and labels.
We ask the model to make predictions about a test set—in this example, the test_images array. We verify that the predictions match the labels from the test_labels array.
To start training, call the model.fit method—the model is "fit" to the training data:
End of explanation
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
Explanation: As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.
Evaluate accuracy
Next, compare how the model performs on the test dataset:
End of explanation
predictions = model.predict(test_images)
Explanation: It turns out, the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of overfitting. Overfitting is when a machine learning model performs worse on new data than on its training data.
Make predictions
With the model trained, we can use it to make predictions about some images.
End of explanation
predictions[0]
Explanation: Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
End of explanation
np.argmax(predictions[0])
Explanation: A prediction is an array of 10 numbers. These describe the "confidence" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value:
End of explanation
test_labels[0]
Explanation: So the model is most confident that this image is an ankle boot, or class_names[9]. And we can check the test label to see this is correct:
End of explanation
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
Explanation: We can graph this to look at the full set of 10 channels
End of explanation
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
Explanation: Let's look at the 0th image, predictions, and prediction array.
End of explanation
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
Explanation: Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident.
End of explanation
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
Explanation: Finally, use the trained model to make a prediction about a single image.
End of explanation
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
Explanation: tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. So even though we're using a single image, we need to add it to a list:
End of explanation
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
Explanation: Now predict the image:
End of explanation
np.argmax(predictions_single[0])
Explanation: model.predict returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch:
End of explanation |
3,559 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a csv file without headers which I'm importing into python using pandas. The last column is the target class, while the rest of the columns are pixel values for images. How can I go ahead and split this dataset into a training set and a testing set (3 : 2)? | Problem:
import numpy as np
import pandas as pd
dataset = load_data()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(dataset.iloc[:, :-1], dataset.iloc[:, -1], test_size=0.4,
random_state=42) |
3,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spam detection
The main aim of this project is to build a machine learning classifier that is able to automatically detect
spammy articles, based on their content.
Step1: Custom Helper Function Definitions
Step2: Modeling
We tried out various models and selected the best performing models (with the best performing parameter settings for each model). At the end, we retained 3 models which are
Step3: logistic regression
Step4: random forest
Step5: Combination 1
We decided to try combining these models in order to construct a better and more consistent one.
voting system
Step6: customizing
Step7: Here you can see that we benefited from the good behavior of the logistic regression and the random forest. By contrast,
we couldn't do the same with the naive Bayes, because it makes us misclassify a lot of OK articles, which leads to
a low precision.
Combination 2
Now, we would like to capture more of the not-OK articles. To this end, we decided to include a few false positives
in the training dataset. In order to do so in an intelligent way and to select some representative samples, we first
analyzed these false positives.
Step8: This means that we have two big clusters of false positives (green and red). We have chosen to pick up
randomly 50 samples of each cluster.
Step9: Now we do the prediction again
random forest
Step10: logistic regression
Step11: Naive Bayes
Step12: Voting
Step13: Customizing | Python Code:
! sh bootstrap.sh
from sklearn.cluster import KMeans
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
from sklearn.utils import shuffle
from sklearn.metrics import f1_score
from sklearn.cross_validation import KFold
from sklearn.metrics import recall_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
%matplotlib inline
#Load testing dataset
df_test = pd.read_csv("enwiki.draft_quality.50k_stratified.feature_labels.tsv", sep="\t")
#Replace strings with integers : 1 for OK and 0 for Not OK
df_test["draft_quality"] = df_test["draft_quality"].replace({"OK" : 1, "vandalism" : 0, "spam" : 0, "attack" : 0})
# Put features and labels in different dataframes
X_test=df_test.drop(["draft_quality"], 1)
Y_test=df_test["draft_quality"]
# Loading training dataset
df_train = pd.read_csv("enwiki.draft_quality.201608-201701.feature_labels.tsv", sep="\t")
df_train["draft_quality"] = df_train["draft_quality"].replace({"OK" : 1, "vandalism" : 0, "spam" : 0, "attack" : 0})
X_train=df_train.drop(["draft_quality"], 1)
Y_train=df_train["draft_quality"]
# Converting dataframes to array
X_test=np.array(X_test)
Y_test=np.array(Y_test)
X_train=np.array(X_train)
Y_train=np.array(Y_train)
# lengths of both datasets
print("Test set length: %d" % len(X_test))
print("Train set length: %d" % len(X_train))
Explanation: Spam detection
The main aim of this project is to build a machine learning classifier that is able to automatically detect
spammy articles, based on their content.
End of explanation
from sklearn.metrics import roc_curve, auc
# Compute ROC curve and ROC area
def compute_roc_and_auc(y_predict, y_true):
fpr = dict()
tpr = dict()
roc_auc = dict()
fpr, tpr, _ = roc_curve(y_predict, y_true)
roc_auc = auc(fpr, tpr)
return roc_auc, fpr, tpr
# Plot of a ROC curve
def plot_roc(roc_auc, fpr, tpr):
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
Explanation: Custom Helper Function Definitions
End of explanation
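# (Added sketch, not part of the original analysis:) quick sanity check of the two
# helpers defined above on tiny hand-made arrays, mirroring how they are called later
# in this notebook (predictions first, true labels second).
_y_pred_demo = np.array([0, 1, 1, 0])
_y_true_demo = np.array([0, 1, 0, 0])
_auc_demo, _fpr_demo, _tpr_demo = compute_roc_and_auc(_y_pred_demo, _y_true_demo)
print("demo AUC:", _auc_demo)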
weights=np.array([0.7,1-0.7])
clf = BernoulliNB(alpha=22, class_prior=weights)
clf.fit(X_train, Y_train)
prediction_nb=clf.predict(X_test)
confusion=confusion_matrix(Y_test, prediction_nb, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction_nb, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: Modeling
We tried out various models and selected the best performing models (with the best performing parameter settings for each model). At the end, we retained 3 models which are:
1. Bernoulli naïve Bayes
2. Random forest
3. Logistic regression
Bernoulli naïve Bayes
End of explanation
clf2 = LogisticRegression(penalty='l1', random_state=0, class_weight={1:0.1, 0: 0.9})
clf2.fit(X_train, Y_train)
prediction_lr=clf2.predict(X_test)
confusion=confusion_matrix(Y_test, prediction_lr, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction_lr, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: logistic regression
End of explanation
clf3 = RandomForestClassifier(n_jobs=16, n_estimators=2, min_samples_leaf=1, random_state=25, class_weight={1:0.9, 0: 0.1})
clf3.fit(X_train, Y_train)
prediction_rf=clf3.predict(X_test)
confusion=confusion_matrix(Y_test, prediction_rf, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction_rf, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: random forest
End of explanation
#Here we construct our voting function
def voting(pred1, pred2, pred3):
final_prediction=np.zeros_like(pred1)
for i in range(len(pred1)):
if pred1[i]==pred2[i]:
final_prediction[i]=pred1[i]
elif pred1[i]==pred3[i]:
final_prediction[i]=pred1[i]
elif pred2[i]==pred3[i]:
final_prediction[i]=pred2[i]
return final_prediction
#Here we make the prediction using voting function (with the three models defined above)
prediction= voting(prediction_lr, prediction_nb, prediction_rf)
from sklearn.metrics import confusion_matrix
confusion=confusion_matrix(Y_test, prediction, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: Combination 1
We decided to try combining these models in order to construct a better and more consistent one.
voting system
End of explanation
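# (Added illustration, not from the original notebook:) the voting() function above is a
# 2-out-of-3 majority vote; a toy example with hand-made binary predictions:
_p1 = np.array([1, 0, 1, 0])
_p2 = np.array([1, 1, 0, 0])
_p3 = np.array([0, 1, 1, 0])
print(voting(_p1, _p2, _p3))  # expected majority vote: [1 1 1 0]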
#Since we are interested in negatives (not-OK) we will analyze how many times a model detects a not-OK article while
#the others don't
def get_missclasified_indexes(pred1, Y_true, Class):
index_list=[]
a=0
b=1
if Class=="negative":
a=1
b=0
for i in range(len(pred1)):
if pred1[i]==a and Y_true[i]==b:
index_list.append(i)
return index_list
false_negative_indexes=get_missclasified_indexes(prediction, Y_test, "negative")
print(len(prediction[false_negative_indexes]))
print(np.sum(prediction_nb[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_rf[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_lr[false_negative_indexes]!=prediction[false_negative_indexes]))
##Here we define our function based on the results above
def voting_customized(pred1, pred2, pred3):
final_prediction=np.zeros_like(pred1)
for i in range(len(pred1)):
if pred1[i]==0:
final_prediction[i]=0
else:
final_prediction[i]=pred3[i]
return final_prediction
#making a prediction with our new function
prediction= voting_customized(prediction_lr, prediction_nb, prediction_rf)
confusion=confusion_matrix(Y_test, prediction, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
false_negative_indexes=get_missclasified_indexes(prediction, Y_test, "negative")
print(len(prediction[false_negative_indexes]))
print(np.sum(prediction_nb[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_rf[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_lr[false_negative_indexes]!=prediction[false_negative_indexes]))
Explanation: customizing
End of explanation
from scipy.cluster.hierarchy import dendrogram, linkage
Z = linkage(X_test[false_negative_indexes], 'ward')
plt.figure(figsize=(25, 25))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(
Z,
leaf_rotation=90.,
leaf_font_size=11.,
)
plt.show()
Explanation: Here you can see that we benefited from the good behavior of the logistic regression and the random forest. By contrast,
we couldn't do the same with the naive Bayes, because it makes us misclassify a lot of OK articles, which leads to
a low precision.
Combination 2
Now, we would like to capture more of the not-OK articles. To this end, we decided to include a few false positives
in the training dataset. To do so in an intelligent way and to select some representative samples, we first
analyzed these false positives.
End of explanation
#we perform a kmeans clustering with 2 clusters
kmeans = KMeans(n_clusters=2, random_state=0).fit(X_test[false_negative_indexes])
cluster_labels=kmeans.labels_
print(cluster_labels)
print(np.unique(cluster_labels))
# Picking up the samples from the clusters and adding them to the training dataset.
false_negatives_cluster0=[]
false_negatives_cluster1=[]
for i in range(1,11):
random.seed(a=i)
false_negatives_cluster0.append(random.choice([w for index_w, w in enumerate(false_negative_indexes) if cluster_labels[index_w] == 0]))
for i in range(1,11):
random.seed(a=i)
false_negatives_cluster1.append(random.choice([w for index_w, w in enumerate(false_negative_indexes) if cluster_labels[index_w] == 1]))
#adding 1st cluster's samples
Y_train=np.reshape(np.dstack(Y_train), (len(Y_train),1))
temp_arr=np.array([Y_test[false_negatives_cluster0]])
temp_arr=np.reshape(np.dstack(temp_arr), (10,1))
X_train_new = np.vstack((X_train, X_test[false_negatives_cluster0]))
Y_train_new=np.vstack((Y_train, temp_arr))
# Second
temp_arr2=np.array([Y_test[false_negatives_cluster1]])
temp_arr2=np.reshape(np.dstack(temp_arr2), (10,1))
X_train_new = np.vstack((X_train_new, X_test[false_negatives_cluster1]))
Y_train_new=np.vstack((Y_train_new, temp_arr2))
Y_train_new=np.reshape(np.dstack(Y_train_new), (len(Y_train_new),))
X_train = X_train_new
Y_train = Y_train_new
Explanation: This means that we have two big clusters of false positives (green and red). We have chosen to pick
10 samples of each cluster at random.
End of explanation
clf3.fit(X_train, Y_train)
prediction_rf_new=clf3.predict(X_test)
confusion=confusion_matrix(Y_test, prediction_rf_new, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction_rf_new, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: Now we do the prediction again
random forest
End of explanation
clf2.fit(X_train, Y_train)
prediction_lr_new=clf2.predict(X_test)
confusion=confusion_matrix(Y_test, prediction_lr_new, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction_lr_new, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: logistic regression
End of explanation
from sklearn.naive_bayes import BernoulliNB
weights=np.array([0.7,1-0.7])
clf = BernoulliNB(alpha=22, class_prior=weights)
clf.fit(X_train, Y_train)
prediction_nb_new=clf.predict(X_test)
confusion=confusion_matrix(Y_test, prediction_nb_new, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction_nb_new, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: Naive Bayes
End of explanation
prediction= voting(prediction_lr_new, prediction_nb_new, prediction_rf_new)
confusion=confusion_matrix(Y_test, prediction, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: Voting
End of explanation
def voting_customized2(pred1, pred2, pred3):
final_prediction=np.zeros_like(pred1)
for i in range(len(pred1)):
if pred1[i]==0:
final_prediction[i]=0
else:
final_prediction[i]=pred2[i]
return final_prediction
prediction= voting_customized2(prediction_lr_new, prediction_nb_new, prediction_rf_new)
confusion=confusion_matrix(Y_test, prediction, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
roc_auc, fpr, tpr = compute_roc_and_auc(prediction, Y_test)
print (plot_roc(roc_auc, fpr, tpr))
Explanation: Customizing
End of explanation |
3,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Average Reward over time
Step1: Visualizing what the agent is seeing
Starting with the ray pointing all the way right, we have one row per ray in clockwise order.
The numbers for each ray are the following | Python Code:
g.plot_reward(smoothing=100)
Explanation: Average Reward over time
End of explanation
g.__class__ = KarpathyGame
np.set_printoptions(formatter={'float': (lambda x: '%.2f' % (x,))})
x = g.observe()
new_shape = (x[:-2].shape[0]//g.eye_observation_size, g.eye_observation_size)
print(x[:-2].reshape(new_shape))
print(x[-2:])
g.to_html()
Explanation: Visualizing what the agent is seeing
Starting with the ray pointing all the way right, we have one row per ray in clockwise order.
The numbers for each ray are the following:
- first three numbers are normalized distances to the closest visible (intersecting with the ray) object. If no object is visible then all of them are $1$. If there's many objects in sight, then only the closest one is visible. The numbers represent distance to friend, enemy and wall in order.
- the last two numbers represent the speed of the moving object (x and y components). The speed of a wall is ... zero.
Finally, the last two numbers in the representation correspond to the speed of the hero.
End of explanation |
3,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: And we'll attach some dummy datasets. See Datasets for more details.
Step2: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
The following list in any version of PHOEBE can always be accessed via phoebe.list_available_computes.
Note also that all of these are listed on the backends page and their available functionality is compared in the compute backend comparison table.
Step3: PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy and the legacy compute API docs.
Step4: ellc
For more details, see the ellc compute API docs.
Step5: jktebop
For more details, see the jktebop compute API docs.
Step6: Using Alternate Backends
Adding Compute Options
Adding a set of compute options, via b.add_compute for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
Step7: Running Compute
Nothing changes when calling b.run_compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law. We can do this for all passband-component combinations by using set_value_all.
For more information on limb-darkening options, see the limb-darkening tutorial.
Step8: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
Step9: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: Advanced: Alternate Backends
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b.add_dataset('orb',
compute_times=phoebe.linspace(0,10,1000),
dataset='orb01')
b.add_dataset('lc',
compute_times=phoebe.linspace(0,10,1000),
dataset='lc01')
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
phoebe.list_available_computes()
Explanation: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
The following list in any version of PHOEBE can always be accessed via phoebe.list_available_computes.
Note also that all of these are listed on the backends page and their available functionality is compared in the compute backend comparison table.
End of explanation
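# (Added sketch, not from the original tutorial:) before adding options for an alternate
# backend, one could check that this PHOEBE installation actually reports it as available.
_available = phoebe.list_available_computes()
print('legacy available:', 'legacy' in _available)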
b.add_compute('legacy', compute='legacybackend')
print(b.get_compute('legacybackend'))
Explanation: PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy and the legacy compute API docs.
End of explanation
b.add_compute('ellc', compute='ellcbackend')
print(b.get_compute('ellcbackend'))
Explanation: ellc
For more details, see the ellc compute API docs.
End of explanation
b.add_compute('jktebop', compute='jktebopcompute')
print(b.get_compute('jktebopcompute'))
Explanation: jktebop
For more details, see the jktebop compute API docs.
End of explanation
b.add_compute('phoebe', compute='phoebebackend')
print(b.get_compute('phoebebackend'))
Explanation: Using Alternate Backends
Adding Compute Options
Adding a set of compute options, via b.add_compute for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
End of explanation
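# (Added check, not in the original tutorial:) list the compute option sets attached so far;
# `computes` is the same property used on filtered ParameterSets later in this tutorial.
print(b.computes)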
b.set_value_all(qualifier='ld_mode', value='manual')
b.set_value_all(qualifier='ld_func', value='logarithmic')
b.run_compute('legacybackend', model='legacyresults')
Explanation: Running Compute
Nothing changes when calling b.run_compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law. We can do this for all passband-component combinations by using set_value_all.
For more information on limb-darkening options, see the limb-darkening tutorial.
End of explanation
b.set_value_all(qualifier='enabled', dataset='lc01', compute='phoebebackend', value=False)
#b.set_value_all(qualifier='enabled', dataset='orb01', compute='legacybackend', value=False) # don't need this since legacy NEVER computes orbits
print(b.filter(qualifier='enabled'))
b.run_compute(['phoebebackend', 'legacybackend'], model='mixedresults')
Explanation: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
End of explanation
print(b.filter(model='mixedresults').computes)
b.filter(model='mixedresults', compute='phoebebackend').datasets
b.filter(model='mixedresults', compute='legacybackend').datasets
Explanation: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them.
End of explanation |
3,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kirkwood-Buff example
Step1: Load gromacs trajectory/topology
Gromacs was used to sample a dilute solution of sodium chloride in SPC/E water for 100 ns.
The trajectory and .gro loaded below have been stripped from hydrogens to reduce disk space.
Step2: Calculate average number densities for solute and solvent
Step3: Compute and plot RDFs
Note
Step4: Calculate KB integrals
Here we calculate the number of solute molecules around other solute molecules (cc) and around water (wc).
For example,
$$ N_{cc} = 4\pi\rho_c\int_0^{\infty} \left ( g(r)_{cc} -1 \right ) r^2 dr$$
The preferential binding parameter is subsequently calculated as $\Gamma = N_{cc}-N_{wc}$.
Step5: Finite system size corrected KB integrals
As can be seen in the above figure, the KB integrals do not converge since in a finite sized $NVT$ simulation,
$g(r)$ can never exactly go to unity at large separations.
To correct for this, a simple scaling factor can be applied, as described in the link at the top of the page,
$$ g_{jc}^{\prime} (r) = g_{jc}(r) \cdot
\frac{N_j\left (1-V(r)/V\right )}{N_j\left (1-V(r)/V\right )-\Delta N_{jc}(r)-\delta_{jc}} $$
Lastly, we take a little extra care in producing a refined PDF file for the uncorrected and
corrected integrals. | Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import mdtraj as md
from math import pi
from scipy import integrate
plt.rcParams.update({'font.size': 16})
Explanation: Kirkwood-Buff example: NaCl in water
In this example we calculate Kirkwood-Buff integrals in a solute (c) and solvent (w) system and correct for finite size effects as described at http://dx.doi.org/10.1073/pnas.0902904106 (see Supporting Information).
End of explanation
traj = md.load('gmx/traj_noh.xtc', top='gmx/conf_noh.gro')
traj
Explanation: Load gromacs trajectory/topology
Gromacs was used to sample a dilute solution of sodium chloride in SPC/E water for 100 ns.
The trajectory and .gro loaded below have been stripped from hydrogens to reduce disk space.
End of explanation
volume=0
for vec in traj.unitcell_lengths:
volume = volume + vec[0]*vec[1]*vec[2] / traj.n_frames
N_c = len(traj.topology.select('name NA or name CL'))
N_w = len(traj.topology.select('name O'))
rho_c = N_c / volume
rho_w = N_w / volume
print "Simulation time = ", traj.time[-1]*1e-3, 'ns'
print "Average volume = ", volume, 'nm-3'
print "Average side-length = ", volume**(1/3.), 'nm'
print "Number of solute molecules = ", N_c
print "Number of water molecules = ", N_w
print "Solute density = ", rho_c, 'nm-3'
print "Water density = ", rho_w, 'nm-3'
steps=range(traj.n_frames)
plt.xlabel('steps')
plt.ylabel('box sidelength, x (nm)')
plt.plot(traj.unitcell_lengths[:,0])
Explanation: Calculate average number densities for solute and solvent
End of explanation
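# (Added sketch, not in the original notebook:) convert the ion number density to a molar
# concentration, assuming rho_c is in nm^-3 (1 nm^-3 corresponds to 1/0.6022 mol/l).
print("Approximate ion concentration = ", rho_c / 0.6022, 'mol/l')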
rmax = (volume)**(1/3.)/2
select_cc = traj.topology.select_pairs('name NA or name CL', 'name NA or name CL')
select_wc = traj.topology.select_pairs('name NA or name CL', 'name O')
r, g_cc = md.compute_rdf(traj, select_cc, r_range=[0.0,rmax], bin_width=0.01, periodic=True)
r, g_wc = md.compute_rdf(traj, select_wc, r_range=[0.0,rmax], bin_width=0.01, periodic=True)
g_cc = g_cc * len(select_cc) / (0.5*N_c**2) # re-scale to account for diagonal in pair matrix
np.savetxt('g_cc.dat', np.column_stack( (r,g_cc) ))
np.savetxt('g_wc.dat', np.column_stack( (r,g_wc) ))
plt.xlabel('$r$/nm')
plt.ylabel('$g(r)$')
plt.plot(r, g_cc, 'r-')
plt.plot(r, g_wc, 'b-')
Explanation: Compute and plot RDFs
Note: The radial distribution function in mdtraj differs from, e.g., Gromacs g_rdf in
the way data is normalized and the $g(r)$ may need rescaling. It seems that densities
are calculated by the number of selected pairs which for the cc case exclude all the
self terms. This can be easily corrected and is obviously not needed for the wc case.
End of explanation
dr = r[1]-r[0]
N_cc = rho_c * 4*pi*np.cumsum( ( g_cc - 1 )*r**2*dr )
N_wc = rho_c * 4*pi*np.cumsum( ( g_wc - 1 )*r**2*dr )
Gamma = N_cc - N_wc
plt.xlabel('$r$/nm')
plt.ylabel('$\\Gamma = N_{cc}-N_{wc}$')
plt.plot(r, Gamma, 'r-')
Explanation: Calculate KB integrals
Here we calculate the number of solute molecules around other solute molecules (cc) and around water (wc).
For example,
$$ N_{cc} = 4\pi\rho_c\int_0^{\infty} \left ( g(r)_{cc} -1 \right ) r^2 dr$$
The preferential binding parameter is subsequently calculated as $\Gamma = N_{cc}-N_{wc}$.
End of explanation
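# (Added cross-check, not in the original notebook:) the same running integral computed with
# scipy's cumulative trapezoid rule (scipy.integrate is imported above but otherwise unused;
# in newer SciPy, cumtrapz is called cumulative_trapezoid).
N_cc_trapz = rho_c * 4*pi*integrate.cumtrapz((g_cc - 1)*r**2, r, initial=0)
print("max |trapz - sum| =", abs(N_cc_trapz - N_cc).max())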
Vn = 4*pi/3*r**3 / volume
g_ccc = g_cc * N_c * (1-Vn) / ( N_c*(1-Vn)-N_cc-1)
g_wcc = g_wc * N_w * (1-Vn) / ( N_w*(1-Vn)-N_wc-0)
N_ccc = rho_c * 4*pi*dr*np.cumsum( ( g_ccc - 1 )*r**2 )
N_wcc = rho_c * 4*pi*dr*np.cumsum( ( g_wcc - 1 )*r**2 )
Gammac = N_ccc - N_wcc
plt.xlabel('$r$/nm')
plt.ylabel('$\\Gamma = N_{cc}-N_{wc}$')
plt.plot(r, Gamma, color='red', ls='-', lw=2, label='uncorrected')
plt.plot(r, Gammac, color='green', lw=2, label='corrected')
plt.legend(loc=0,frameon=False, fontsize=16)
plt.yticks( np.arange(-0.4, 0.5, 0.1))
plt.ylim((-0.45,0.45))
plt.savefig('gamma.pdf', bbox_inches='tight')
Explanation: Finite system size corrected KB integrals
As can be seen in the above figure, the KB integrals do not converge since in a finite sized $NVT$ simulation,
$g(r)$ can never exactly go to unity at large separations.
To correct for this, a simple scaling factor can be applied, as described in the link at the top of the page,
$$ g_{jc}^{\prime} (r) = g_{jc}(r) \cdot
\frac{N_j\left (1-V(r)/V\right )}{N_j\left (1-V(r)/V\right )-\Delta N_{jc}(r)-\delta_{jc}} $$
Lastly, we take a little extra care in producing a refined PDF file for the uncorrected and
corrected integrals.
End of explanation |
3,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Why do we need to improve the traing method?
In the previous note, we managed to get the neural net to
1. converge to any value at a given input
2. emulate a step function.
However, the training failed to emulate functions such as absolute value and sine.
Step4: Now let's train the data set the way before, to validate our new class.
Step5: Now a sine function
Step6: Now an absolute function?
Well, as it turned out, encoding an absolute value function is
hard. You can play with the code below and try to learn it, but
for less than 10 hidden neurons the result is usually pretty
terrible.
It is possible, however, to learn half of the absolute function,
and encode only a straight line.
Step7: Now equiped with this set of hyper-parameters, I thought | Python Code:
%pylab inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
from random import random
from IPython.display import FileLink, FileLinks
def σ(z):
return 1/(1 + np.e**(-z))
def σ_prime(z):
return np.e**(z) / (np.e**z + 1)**2
def Plot(fn, *args, **kwargs):
argLength = len(args);
if argLength == 1:
start = args[0][0]
end = args[0][1]
points = None
try:
points = args[0][2]
except:
pass
if not points: points = 30
xs = linspace(start, end, points);
plot(xs, list(map(fn, xs)), **kwargs);
Plot(σ, [-2, 2])
y = lambda neuron, input: neuron[0] * input + neuron[1]
α = lambda neuron, input: σ(y(neuron, input))
partial_w = lambda neuron, input: \
σ_prime(y(neuron, input)) * input
partial_y = lambda neuron, input: \
σ_prime(y(neuron, input))
class Neuron():
def __init__(self, neuron):
self.neuron = neuron
def output(self, input):
return α(self.neuron, input)
def set_η(self, η):
self.η = η
def train(self, input, target, η=None):
result = self.output(input);
δ = result - target
p_w = partial_w(self.neuron, input)
p_y = partial_y(self.neuron, input)
gradient = np.array([p_w, p_y])#/np.sqrt(p_w**2 + p_y**2)
if η is None:
η = self.η
self.neuron = - η * δ * gradient + self.neuron;
return result
class Network():
def __init__(self, shape, parameters=None):
self.shape = shape;
self.zs = {};
self.αs = {};
self.weights = {};
self.biases = {};
self.δs = {};
self.partial_ws = {};
if parameters is not None:
weights, biases = parameters;
self.weights = weights;
self.biases = biases;
else:
for i in range(1, len(shape)):
self.create_network(i, shape[i])
def create_network(self, ind, size):
if ind is 0: return;
self.weights[ind] = np.random.random(self.shape[ind-1:ind+1][::-1]) - 0.5
self.biases[ind] = np.random.random(self.shape[ind]) - 0.5
def get_partials_placeholder(self):
partial_ws = {};
δs = {};
for ind in range(1, len(self.shape)):
partial_ws[ind] = np.zeros(self.shape[ind-1:ind+1][::-1])
δs[ind] = np.zeros(self.shape[ind])
return partial_ws, δs;
def output(self, input=None):
if input is not None:
self.forward_pass(input);
return self.αs[len(self.shape) - 1]
def set_η(self, η=None):
if η is None: return
self.η = η
def train(self, input, target, η=None):
if η is None:
η = self.η
self.forward_pass(input)
self.back_propagation(target)
self.gradient_descent(η)
    # done: generate a mini batch of training data,
    # take an average of the gradient from the mini-batch
def train_batch(self, inputs, targets, η=None):
inputs_len = np.shape(inputs)[0]
targets_len = np.shape(targets)[0]
assert inputs_len == targets_len, \
"input and target need to have the same first dimension"
N = inputs_len
partial_ws, δs = self.get_partials_placeholder()
# print(partial_ws, δs)
for input, target in zip(inputs, targets):
# print(input, target)
self.forward_pass(input)
self.back_propagation(target)
for ind in range(1, len(self.shape)):
partial_ws[ind] += self.partial_ws[ind] / float(N)
δs[ind] += self.δs[ind] / float(N)
self.partial_ws = partial_ws
self.δs = δs
self.gradient_descent(η)
def forward_pass(self, input):
# forward passing
self.αs[0] = input;
for i in range(1, len(self.shape)):
self.forward_pass_layer(i);
def back_propagation(self, target):
# back-propagation
ind_last = len(self.shape) - 1
self.δs[ind_last] = σ_prime(self.zs[ind_last]) * \
(self.αs[ind_last] - target);
for i in list(range(1, len(self.shape)))[::-1]:
self.back_propagation_layer(i)
def gradient_descent(self, η):
# gradient descent
for i in range(1, len(self.shape)):
self.gradient_descent_layer(i, η)
def forward_pass_layer(self, ind):
        """ind is the index of the current network layer"""
self.zs[ind] = self.biases[ind] + \
np.tensordot(self.weights[ind], self.αs[ind - 1], axes=1)
self.αs[ind] = σ(self.zs[ind])
def back_propagation_layer(self, ind):
        r"""ind \in [len(self.shape) - 1, 1]"""
if ind > 1:
self.δs[ind - 1] = σ_prime(self.zs[ind-1]) * \
np.tensordot(self.δs[ind], self.weights[ind], axes=1)
self.partial_ws[ind] = np.tensordot(self.δs[ind], self.αs[ind - 1], axes=0)
def gradient_descent_layer(self, ind, η):
        r"""ind \in [1, ... len(shape) - 1]"""
self.weights[ind] = self.weights[ind] - η * self.partial_ws[ind]
self.biases[ind] = self.biases[ind] - η * self.δs[ind]
Explanation: Why do we need to improve the training method?
In the previous note, we managed to get the neural net to
1. converge to any value at a given input
2. emulate a step function.
However, the training failed to emulate functions such as absolute value and sine.
End of explanation
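# (Added smoke test, not in the original note:) one gradient step of the Network class
# defined above on a single (input, target) pair, just to check the plumbing.
_net = Network([1, 3, 1])
_net.train([0.3], [0.7], 1.0)
print(_net.output([0.3]))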
# train as a simple neuron
target_func = lambda x: 1 if x < 0.5 else 0
nw = Network([1, 4, 1])
figure(figsize=(16, 4))
subplot(131)
# todo: generate a mini batch of training data,
# take an average of the gradient from the mini-batch
inputs = [[x] for x in np.linspace(0, 1, 100)]
targets = [[target_func(x)] for x in np.linspace(0, 1, 100)]
for ind in range(10000):
x = np.random.random()
nw.train([x], [target_func(x)], 10)
scatter(x, target_func(x))
Plot(lambda x: nw.output([x])[0], [0, 1], label="neural net")
Plot(lambda x: target_func(x), [0, 1], color='r', linewidth=4, alpha=0.3, label="target function")
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
legend(loc=3, frameon=False)
subplot(132)
imshow(nw.weights[1], interpolation='none', aspect=1);colorbar();
subplot(133)
imshow(nw.weights[2], interpolation='none', aspect=1);colorbar()
# subplot(144)
# imshow(nw.weights[3], interpolation='none', aspect=1);colorbar()
# train as a simple neuron
target_func = lambda x: 1 if x < 0.5 else 0
nw = Network([1, 4, 1])
figure(figsize=(4, 4))
#subplot(141)
batch_size = 10
inputs = [[x] for x in np.linspace(0, 1, batch_size)]
targets = [[target_func(x)] for x in np.linspace(0, 1, batch_size)]
n = 0
for i in range(3):
for ind in range(40):
n += 1;
nw.train_batch(inputs, targets, 10)
Plot(lambda x: nw.output([x])[0], [0, 1], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
_title = "Training Progress Through\nMini-batches (4 hidden neurons)"
title(_title, fontsize=15)
legend(loc=(1.2, 0.25), frameon=False)
fn = "004 batch training " + _title.replace('\n', ' ') + ".png"
savefig(fn, dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
Explanation: Now let's train the data set the same way as before, to validate our new class.
End of explanation
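# (Added sketch, not in the original note:) train_batch() averages the gradient over the
# whole batch before a single update, unlike train(), which updates after every sample.
_nw_demo = Network([1, 2, 1])
_nw_demo.train_batch([[0.1], [0.9]], [[1.0], [0.0]], 1.0)
print(_nw_demo.output([0.5]))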
# train as a simple neuron
target_func = lambda x: np.cos(x)**2
nw = Network([1, 10, 1])
figure(figsize=(16, 4))
#subplot(141)
batch_size = 100
grid = np.linspace(0, 10, batch_size)
inputs = [[x] for x in grid]
targets = [[target_func(x)] for x in grid]
n = 0
for i in range(4):
for ind in range(500):
n += 1;
nw.train_batch(inputs, targets, 40)
Plot(lambda x: nw.output([x])[0], [0, 10], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
_title = "Training Progress Through Mini-batches (10 hidden neurons)"
title(_title)
xlim(-0.25, 10.25)
ylim(-0.25, 1.25)
legend(loc=4, frameon=False)
fn = "004 batch training " + _title + ".png"
savefig(fn, dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
Explanation: Now a sine function
End of explanation
# train as a simple neuron
target_func = lambda x: np.abs(x - 0.5)
nw = Network([1, 20, 1])
figure(figsize=(6, 6))
batch_size = 40
grid = np.linspace(0, 0.5, batch_size)
inputs = [[x] for x in grid]
targets = [[target_func(x)] for x in grid]
n = 0
for i in range(4):
for ind in range(1000):
n += 1;
nw.train_batch(inputs, targets, 23)
Plot(lambda x: nw.output([x])[0], [0, 1.0], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
_title = "Emulate Half of An Absolute Value Function"
title(_title)
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
legend(loc=1, frameon=False)
fn = "004 batch training " + _title.replace('\n', ' ') + ".png"
savefig(fn,
dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
Explanation: Now an absolute function?
Well, as it turned out, encoding an absolute value function is
hard. You can play with the code below and try to learn it, but
for less than 10 hidden neurons the result is usually pretty
terrible.
It is possible, however, to learn half of the absolute function,
and encode only a straight line.
End of explanation
# train as a simple neuron
target_func = lambda x: np.abs(x - 0.5)
nw = Network([1, 40, 1])
figure(figsize=(6, 6))
batch_size = 80
grid = np.linspace(0, 1, batch_size)
inputs = [[x] for x in grid]
targets = [[target_func(x)] for x in grid]
n = 0
for i in range(4):
for ind in range(4000):
n += 1;
nw.train_batch(inputs, targets, 10)
Plot(lambda x: nw.output([x])[0], [0, 1.0], label="NN {} batches".format(n))
plot([i[0] for i in inputs], [t[0] for t in targets], 'r.', label="training data")
_title = "Emulate An Absolute\nFunction (2 times of hidden neurons)"
title(_title)
xlim(-0.25, 1.25)
ylim(-0.25, 1.25)
legend(loc=1, frameon=False)
fn = "004 batch training " + _title.replace('\n', ' ') + ".png"
savefig(fn,
dpi=300,
bbox_inches='tight',
transparent=True,
pad_inches=0)
FileLink(fn)
Explanation: Now, equipped with this set of hyper-parameters, I thought:
"If I can train both of the two halfs of the
*absolute function* separately, I can build
the entire function by adding these two
halves together, right?"
Then I tried 2 $\times$ the number of hidden neurons.
And amazingly, it just worked.
End of explanation |
3,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Find the author that published the most papers on Drosophila virilis.
Step1: We first want to know how many publications have D. virilis in their title or abstract. We use the NCBI history function in order to refer to this search in our subsequent efetch call.
Step2: Retrieve the PubMed entries using our search history
Step3: We construct a dictionary with all authors as keys and author occurrence as value.
Step4: Dictionaries do not have a natural order but we can sort a dictionary based on the values. | Python Code:
from Bio import Entrez
import re
Explanation: Find the author that published the most papers on Drosophila virilis.
End of explanation
# Remember to edit the e-mail address
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.esearch(db="pubmed", term="Drosophila virilis[Title/Abstract]", usehistory="y")
record = Entrez.read(handle)
# generate a Python list with all Pubmed IDs of articles about D. virilis
id_list = record["IdList"]
record["Count"]
webenv = record["WebEnv"]
query_key = record["QueryKey"]
Explanation: We first want to know how many publications have D. virilis in their title or abstract. We use the NCBI history function in order to refer to this search in our subsequent efetch call.
End of explanation
handle = Entrez.efetch(db="pubmed",rettype="medline", retmode="text", retstart=0,
retmax=528, webenv=webenv, query_key=query_key)
out_handle = open("D_virilis_pubs.txt", "w")
data = handle.read()
handle.close()
out_handle.write(data)
out_handle.close()
Explanation: Retrieve the PubMed entries using our search history
End of explanation
with open("D_virilis_pubs.txt") as datafile:
author_dict = {}
for line in datafile:
if re.match("AU", line):
# capture author
author = line.split("-", 1)[1]
# remove leading and trailing whitespace
author = author.strip()
# if key is present, add 1
# if it's not present, initialize at 1
author_dict[author] = 1 + author_dict.get(author, 0)
Explanation: We construct a dictionary with all authors as keys and author occurrence as value.
End of explanation
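# (Added alternative, not in the original analysis:) the same counting can be done with
# collections.Counter, whose most_common() method avoids sorting the dict by hand.
from collections import Counter
author_counter = Counter()
with open("D_virilis_pubs.txt") as datafile:
    for line in datafile:
        if re.match("AU", line):
            author_counter[line.split("-", 1)[1].strip()] += 1
print(author_counter.most_common(5))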
# use the values (retrieved by author_dict.get) for sorting the dictionary
# The function "sorted" returns a list that can be indexed to return only some elements, e.g. top 5
for author in sorted(author_dict, key = author_dict.get, reverse = True)[:5]:
print(author, ":", author_dict[author])
Explanation: Dictionaries do not have a natural order but we can sort a dictionary based on the values.
End of explanation |
3,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Best report ever
Everything you see here is either Markdown, LaTeX, Python or Bash.
The spectral function
It looks like this
Step1: Now I can run my script
Step2: Not very elegant, I know. It's just for demo purposes.
Step3: I have first to import a few modules/set up a few things
Step4: Next I can read the data from a local folder
Step5: Now I can plot the stored arrays.
Step6: Creating a PDF document
I can create a PDF version of this notebook from itself, using the command line | Python Code:
!gvim data/SF_Si_bulk/invar.in
Explanation: Best report ever
Everything you see here is either Markdown, LaTeX, Python or Bash.
The spectral function
It looks like this:
\begin{equation}
A(\omega) = \mathrm{Im}|G(\omega)|
\end{equation}
GW vs Cumulant
Mathematically very different:
\begin{equation}
G^{GW} (\omega) = \frac1{ \omega - \epsilon - \Sigma (\omega) }
\end{equation}
\begin{equation}
G^C(t_1, t_2) = G^0(t_1, t_2) e^{ i \int_{t_1}^{t_2} \int_{t'}^{t_2} dt' dt'' W (t', t'') }
\end{equation}
BUT they connect through $\mathrm{Im} W (\omega) = \frac1\pi \mathrm{Im} \Sigma ( \epsilon - \omega )$.
Implementation
Using a multi-pole representation for $\Sigma^{GW}$:
\begin{equation}
\mathrm{Im} W (\omega) = \frac1\pi \mathrm{Im} \Sigma ( \epsilon - \omega )
\end{equation}
\begin{equation}
W (\tau) = - i \lambda \bigl[ e^{ i \omega_p \tau } \theta ( - \tau ) + e^{ - i \omega_p \tau } \theta ( \tau ) \bigr]
\end{equation}
GW vs Cumulant
GW:
\begin{equation}
A(\omega) = \frac1\pi \frac{\mathrm{Im}\Sigma (\omega)}
{ [ \omega - \epsilon - \mathrm{Re}\Sigma (\omega) ]^2 +
[ \mathrm{Im}\Sigma (\omega) ]^2}
\end{equation}
Cumulant:
\begin{equation}
A(\omega) = \frac1\pi \sum_{n=0}^{\infty} \frac{a^n}{n!} \frac{\Gamma}{ (\omega - \epsilon + n \omega_p)^2 + \Gamma^2 }
\end{equation}
Now some executable code (Python)
I have implemented the formulas above in my Python code.
I can just run it from here, but before that let me check
if my input file is correct...
End of explanation
%cd data/SF_Si_bulk/
%run ../../../../../Code/SF/sf.py
Explanation: Now I can run my script:
End of explanation
cd ../../../
Explanation: Not very elegant, I know. It's just for demo purposes.
End of explanation
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = (9., 6.)
%matplotlib inline
Explanation: I have first to import a few modules/set up a few things:
End of explanation
sf_c = np.genfromtxt(
'data/SF_Si_bulk/Spfunctions/spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat')
sf_gw = np.genfromtxt(
'data/SF_Si_bulk/Spfunctions/spftot_gw_s1.0_p1.0_800ev.dat')
#!gvim spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat
Explanation: Next I can read the data from a local folder:
End of explanation
plt.plot(sf_c[:,0], sf_c[:,1], label='1-pole cumulant')
plt.plot(sf_gw[:,0], sf_gw[:,1], label='GW')
plt.xlim(-50, 0)
plt.ylim(0, 300)
plt.title("Bulk Si - Spectral function - ib=1, ikpt=1")
plt.xlabel("Energy (eV)")
plt.grid(); plt.legend(loc='best')
Explanation: Now I can plot the stored arrays.
End of explanation
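# (Added sketch, not in the original report:) integrated spectral weight of each curve,
# assuming column 0 holds the energy grid and column 1 the spectral function.
print('cumulant spectral weight:', np.trapz(sf_c[:, 1], sf_c[:, 0]))
print('GW spectral weight:      ', np.trapz(sf_gw[:, 1], sf_gw[:, 0]))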
!jupyter-nbconvert --to pdf cumulant-to-pdf.ipynb
pwd
!xpdf cumulant-to-pdf.pdf
Explanation: Creating a PDF document
I can create a PDF version of this notebook from itself, using the command line:
End of explanation |
3,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review (Module 2)
The main topic of this module was Monte Carlo simulation. By the end of this module, you are expected to have the following competencies
- Numerically evaluate integrals (or find areas) using Monte Carlo methods.
- Replicate simple random fractals (such as Barnsley's), given their characteristics.
- Carry out price-threshold probability evaluations.
Example 1. Numerical evaluation of integrals using Monte Carlo
In the class on numerical evaluation of integrals by Monte Carlo we saw two types of integral evaluation.
Type 1 was based on the definition of the average value of a function.
Type 2 was based on probabilities and a Bernoulli random variable (to find areas).
In class we developed functions for evaluating integrals with both methods (explain why the second one can be seen as an integral). The functions are the following
Step1: Consider the functions $f_1(x)=\sqrt{1+x^{4}}$, $f_2(x)=\ln(\ln x)$, $f_3(x)=\frac {1}{\ln x}$, $f_4(x)=e^{e^{x}}$, $f_5(x)=e^{-{\frac {x^{2}}{2}}}$ and $f_6(x)=\sin(x^{2})$.
Use the functions above to numerically evaluate their integrals on the interval $(4,5)$. Put the results in a table whose rows correspond to the number of terms used in the approximation (use 10, 100, 1000, 10000 and 100000 terms) and whose columns correspond to the functions.
Make one table per method.
Can you see noticeable differences in the convergence speed of the two methods?
Step2: Example 2. Barnsley-type random fractal
In the class on random fractals we saw that the Barnsley fern fractal is generated with four affine transformations that are chosen with certain probabilities.
We saw that this fern approximates real ferns remarkably well.
We saw that by modifying parameters in the table, mutations of the fern can be generated.
Well then, using the same idea of affine transformations chosen with certain probabilities, an unimaginable variety of fractals can be generated. One can even generate random fractals that possess a deterministic attractor (what does that mean?).
As in the fractals class, let us replicate the Barnsley-type fractal described by the following table...
Reference | Python Code:
def int_montecarlo1(f, a, b, N):
    # Numerical evaluation of integrals by Monte Carlo, type 1.
    # f=f(x) is the function to integrate (must be declared beforehand); it returns the image value for each x,
    # a and b are the lower and upper limits of the integration interval, and N is the number
    # of points used in the approximation.
return (b-a)/N*np.sum(f(np.random.uniform(a, b, N)))
def int_montecarlo2(region, a1, b1, a2, b2, N):
    # Numerical evaluation of integrals by Monte Carlo, type 2.
    # region=region(x,y) returns True if the coordinate (x,y) belongs to the region to integrate and False
    # otherwise; a1, b1, a2, b2 are the limits of the rectangle containing the region, and N is the number
    # of points used in the approximation.
A_R = (b1-a1)*(b2-a2)
x = np.random.uniform(a1, b1, N.astype(int))
y = np.random.uniform(a2, b2, N.astype(int))
return A_R*np.sum(region(x,y))/N
Explanation: Review (Module 2)
The main topic of this module was Monte Carlo simulation. By the end of this module, you are expected to have the following competencies:
- Numerically evaluate integrals (or find areas) using Monte Carlo methods.
- Replicate simple random fractals (such as Barnsley's), given their characteristics.
- Carry out price-threshold probability evaluations.
Example 1. Numerical evaluation of integrals using Monte Carlo
In the class on numerical evaluation of integrals by Monte Carlo we saw two types of integral evaluation.
Type 1 was based on the definition of the average value of a function.
Type 2 was based on probabilities and a Bernoulli random variable (to find areas).
In class we developed functions for evaluating integrals with both methods (explain why the second one can be seen as an integral). The functions are the following:
End of explanation
# Import libraries
import numpy as np
import pandas as pd
import random
Explanation: Consider the functions $f_1(x)=\sqrt{1+x^{4}}$, $f_2(x)=\ln(\ln x)$, $f_3(x)=\frac {1}{\ln x}$, $f_4(x)=e^{e^{x}}$, $f_5(x)=e^{-{\frac {x^{2}}{2}}}$ and $f_6(x)=\sin(x^{2})$.
Use the functions above to numerically evaluate their integrals on the interval $(4,5)$. Put the results in a table whose rows correspond to the number of terms used in the approximation (use 10, 100, 1000, 10000 and 100000 terms) and whose columns correspond to the functions.
Make one table per method.
Can you see noticeable differences in the convergence speed of the two methods?
End of explanation
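# (Added sketch, not in the original class notes:) quick check of int_montecarlo1 on a
# function with a known integral, int_0^1 x^2 dx = 1/3.
print(int_montecarlo1(lambda x: x**2, 0, 1, 100000))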
import pandas as pd
import numpy as np
i = np.arange(4)
df = pd.DataFrame(index=i,columns=['$a_i$', '$b_i$', '$c_i$', '$d_i$', '$e_i$', '$f_i$', '$p_i$'], dtype='float')
df.index.name = "$i$"
df['$a_i$'] = [0.5, 0.5, 0.5, 0.5]
df['$b_i$'] = [0.0, 0.0, 0.0, 0.0]
df['$c_i$'] = [0.0, 0.0, 0.0, 0.0]
df['$d_i$'] = [0.5, 0.5, 0.5, 0.5]
df['$e_i$'] = [1.0, 50.0, 1.0, 50.0]
df['$f_i$'] = [1.0, 1.0, 50.0, 50.0]
df['$p_i$'] = [0.1, 0.2, 0.3, 0.4]
df.round(2)
Explanation: Example 2. Barnsley-type random fractal
In the class on random fractals we saw that the Barnsley fern fractal is generated with four affine transformations that are chosen with certain probabilities.
We saw that this fern approximates real ferns remarkably well.
We saw that by modifying parameters in the table, mutations of the fern can be generated.
Well then, using the same idea of affine transformations chosen with certain probabilities, an unimaginable variety of fractals can be generated. One can even generate random fractals that possess a deterministic attractor (what does that mean?).
As in the fractals class, let us replicate the Barnsley-type fractal described by the following table...
Reference:
- Barnsley, Michael F. Fractals Everywhere: New Edition, ISBN: 9780486320342.
End of explanation |
3,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: ^ Looks like Augusto de Campos' poems ^.^
Step2: The Python Programming Language | Python Code:
def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2)
x = [1, 2, 4]
x.insert(2, 3) # list.insert(position, item)
x
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
x = 'This is a string'
pos = 0
for i in range(len(x) + 1):
print(x[0:pos])
pos += 1
pos -= 2
for i in range(len(x) + 1):
print(x[0:pos])
pos -= 1
Explanation: https://hub.coursera-notebooks.org/user/fhfmrxmooxezwpdotusxza/notebooks/Week%201.ipynb#
End of explanation
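# (Added sketch, not in the original lecture notes:) the same growing-prefix idea as the
# loops above, written as a list comprehension.
print([x[:i] for i in range(len(x) + 1)])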
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
secondname = 'Christopher Arthur Hansen Brooks'.split(' ')[1]
secondname
thirdname = 'Christopher Arthur Hansen Brooks'.split(' ')[2]
thirdname
dict = {'Manuel' : '[email protected]', 'Bill' : '[email protected]'}
dict['Manuel']
for email in dict:
print(dict[email])
for email in dict.values():
print(email)
for name in dict.keys():
print(name)
for name, email in dict.items():
print(name)
print(email)
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
import csv
%precision 2 # float point precision for printing to 2
with open('mpg.csv') as csvfile: #read the csv file
mpg = list(csv.DictReader(csvfile)) # https://docs.python.org/2/library/csv.html
# csv.DictReader(csvfile, fieldnames=None, restkey=None, restval=None, dialect='excel', *args, **kwds)
mpg[:3] # The first three dictionaries in our list.
len(mpg) # list of 234 dictionaries
mpg[0].keys() # the names of the columns
# How to find the average cty fuel economy across all cars.
# All values in the dictionaries are strings, so we need to
# convert to float.
sum(float(d['cty']) for d in mpg) / len(mpg)
# Similarly this is how to find the average hwy fuel economy across
# all cars.
sum(float(d['hwy']) for d in mpg) / len(mpg)
# Use set to return the unique values for the number of cylinders
# the cars in our dataset have.
cylinders = set(d['cyl'] for d in mpg)
cylinders
# A set is an unordered collection of items. Every element is unique
# (no duplicates) and must be immutable (which cannot be changed).
# >>> x = [1, 1, 2, 2, 2, 2, 2, 3, 3]
# >>> set(x)
# set([1, 2, 3])
CtyMpgByCyl = [] # empty list to start the calculations
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
# the city fuel economy appears to be decreasing as the number of cylinders increases
# Use set to return the unique values for the class types in our dataset.
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
# how to find the average hwy mpg for each class of vehicle in our dataset.
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
        if d['class'] == t: # if the vehicle class matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
Explanation: ^ Looks like Augusto de Campos' poems ^.^
End of explanation
import datetime as dt
import time as tm
# time returns the current time in seconds since the Epoch. (January 1st, 1970)
tm.time()
# Convert the timestamp to datetime.
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime
# timedelta is a duration expressing the difference between two dates.
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
a = (1, 2)
type(a)
['a', 'b', 'c'] + [1, 2, 3]
type(lambda x: x+1)
[x**2 for x in range(10)]
str = "Python é muito legal"
lista = []
soma = 0
lista = str.split()
lista
len(lista[0])
for i in lista:
soma += len(i)
soma
Explanation: The Python Programming Language: Dates and Times
End of explanation |
3,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coords 2
Step1: Section 0
Step2: We can get the right ascension and declination components of the object directly by accessing those attributes.
Step3: Section 1
Step4: There are three different ways of transforming coordinates. Each has its pros and cons, but all should give you the same result. The first way to transform to other built-in frames is by specifying those attributes. For instance, let's see the location of HCG 7 in Galactic coordinates.
Transforming coordinates using attributes
Step5: Transforming coordinates using the transform_to() method and other coordinate object
The above is actually a special "quick-access" form that internally does the same as what's in the cell below
Step6: Transforming coordinates using the transform_to() method and a string
Finally, we can transform using the transform_to() method and a string with the name of a built-in coordinate system.
Step7: We can transform to many coordinate frames and equinoxes.
These coordinates are available by default
Step8: And, as with the Galactic coordinates, we can achieve the same result by importing the FK5 class from the astropy.coordinates package. This also allows us to change the equinox.
Step9: <div class="alert alert-warning">
**Beware
Step10: Instead, we now have access to the l and b attributes
Step11: Section 2
Step12: Let's first see the sky position at Kitt Peak National Observatory in Arizona.
Step13: For known observing sites we can enter the name directly.
Step14: We can see the list of observing sites
Step15: Let's check the altitude at 1 AM UTC, which is 6 PM AZ mountain time
Step16: Now we use these to create an AltAz frame object. Note that this frame has some other information about the atmosphere, which can be used to correct for atmospheric refraction. Here we leave that alone, because the default is to ignore this effect (by setting the pressure to 0).
Step17: Now we can transform our ICRS SkyCoord to AltAz to get the location in the sky over Kitt Peak at the requested time.
Step18: To look at just the altitude, we can use the alt attribute
Step19: Alright, it's at 55 degrees at 6 PM, but that's pretty early to be observing. We could try various times one at a time to see if the airmass is at a darker time, but we can do better
Step20: Great! Looks like the lowest airmass is in another hour or so (7 PM). But that might still be twilight... When should we start observing for proper dark skies? Fortunately, Astropy provides a get_sun function that can be used to check this. Let's use it to check if we're in 18-degree twilight or not.
Step21: Looks like it's just below 18 degrees at 7 PM, so you should be good to go!
We can also look at the object altitude at the present time and date | Python Code:
# Third-party dependencies
from astropy import units as u
from astropy.coordinates import SkyCoord
import numpy as np
# Set up matplotlib and use a nicer set of plot parameters
from astropy.visualization import astropy_mpl_style
import matplotlib.pyplot as plt
plt.style.use(astropy_mpl_style)
%matplotlib inline
Explanation: Coords 2: Transforming between coordinate systems
Authors
Erik Tollerud, Kelle Cruz, Stephen Pardy, Stephanie T. Douglas
Learning Goals
Create astropy.coordinates.SkyCoord objects
Transform to different coordinate systems on the sky
Transform to altitude/azimuth coordinates from a specific observing site
Keywords
coordinates, units, observational astronomy
Summary
In this tutorial we demonstrate how to define astronomical coordinates using the astropy.coordinates "frame" classes. We then show how to transform between the different built-in coordinate frames, such as from ICRS (RA, Dec) to Galactic (l, b). Finally, we show how to compute altitude and azimuth from a specific observing site.
Imports
End of explanation
hcg7_center = SkyCoord(9.81625*u.deg, 0.88806*u.deg, frame='icrs') # using degrees directly
print(hcg7_center)
hcg7_center = SkyCoord('0h39m15.9s', '0d53m17.016s', frame='icrs') # passing in string format
print(hcg7_center)
Explanation: Section 0: Quickstart
<div class="alert alert-info">
**Note:** If you already worked through [Coords 1](http://learn.astropy.org/rst-tutorials/Coordinates-Intro.html?highlight=coordinates) you can feel free to skip to [Section 1](#Section-1:).
</div>
In Astropy, the most common object you'll work with for coordinates is SkyCoord. A SkyCoord can most easily be created directly from angles as shown below.
In this tutorial we'll be converting between frames. Let's start in the ICRS frame (which happens to be the default.)
For much of this tutorial we'll work with the Hickson Compact Group 7. We can create an object either by passing the degrees explicitly (using the astropy units library) or by passing in strings. The two coordinates below are equivalent:
End of explanation
print(hcg7_center.ra)
print(hcg7_center.dec)
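# Hedged aside (not in the original tutorial): angle components can be re-expressed
# in other units, e.g. the right ascension in hours.
print(hcg7_center.ra.hour)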
Explanation: We can get the right ascension and declination components of the object directly by accessing those attributes.
End of explanation
hcg7_center = SkyCoord(9.81625*u.deg, 0.88806*u.deg, frame='icrs')
Explanation: Section 1:
Introducing frame transformations
astropy.coordinates provides many tools to transform between different coordinate systems. For instance, we can use it to transform from ICRS coordinates (in RA and Dec) to Galactic coordinates.
To understand the code in this section, it may help to read over the overview of the astropy coordinates scheme. The key piece to understand is that all coordinates in Astropy are in particular "frames" and we can transform between a specific SkyCoord object in one frame to another. For example, we can transform our previously-defined center of HCG 7 from ICRS to Galactic coordinates:
End of explanation
hcg7_center.galactic
Explanation: There are three different ways of transforming coordinates. Each has its pros and cons, but all should give you the same result. The first way to transform to other built-in frames is by specifying those attributes. For instance, let's see the location of HCG 7 in Galactic coordinates.
Transforming coordinates using attributes:
End of explanation
from astropy.coordinates import Galactic # new coordinate baseclass
hcg7_center.transform_to(Galactic())
Explanation: Transforming coordinates using the transform_to() method and other coordinate object
The above is actually a special "quick-access" form that internally does the same as what's in the cell below: it uses the transform_to() method to convert from one frame to another. We can pass in an empty coordinate class to specify what coordinate system to transform into.
End of explanation
hcg7_center.transform_to('galactic')
Explanation: Transforming coordinates using the transform_to() method and a string
Finally, we can transform using the transform_to() method and a string with the name of a built-in coordinate system.
End of explanation
hcg7_center_fk5 = hcg7_center.transform_to('fk5')
print(hcg7_center_fk5)
Explanation: We can transform to many coordinate frames and equinoxes.
These coordinates are available by default:
ICRS
FK5
FK4
FK4NoETerms
Galactic
Galactocentric
Supergalactic
AltAz
GCRS
CIRS
ITRS
HCRS
PrecessedGeocentric
GeocentricTrueEcliptic
BarycentricTrueEcliptic
HeliocentricTrueEcliptic
SkyOffsetFrame
GalacticLSR
LSR
BaseEclipticFrame
BaseRADecFrame
Let's focus on just a few of these. We can try FK5 coordinates next:
End of explanation
from astropy.coordinates import FK5
hcg7_center_fk5.transform_to(FK5(equinox='J1975')) # precess to a different equinox
Explanation: And, as with the Galactic coordinates, we can achieve the same result by importing the FK5 class from the astropy.coordinates package. This also allows us to change the equinox.
End of explanation
hcg7_center.galactic.ra # should fail because Galactic coordinates are l/b not RA/Dec
Explanation: <div class="alert alert-warning">
**Beware:** Changing frames also changes some of the attributes of the object, but usually in a way that makes sense. The following code should fail.
</div>
End of explanation
print(hcg7_center.galactic.l, hcg7_center.galactic.b)
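# Hedged aside: SkyCoord also offers a convenience formatter for quick printing.
print(hcg7_center.galactic.to_string('decimal'))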
Explanation: Instead, we now have access to the l and b attributes:
End of explanation
from astropy.coordinates import EarthLocation
from astropy.time import Time
Explanation: Section 2:
Transform frames to get to altitude-azimuth ("AltAz")
To actually do anything with observability we need to convert to a frame local to an on-earth observer. By far the most common choice is horizontal altitude-azimuth coordinates, or "AltAz". We first need to specify both where and when we want to try to observe.
We'll need to import a few more specific modules:
End of explanation
# Kitt Peak, Arizona
kitt_peak = EarthLocation(lat='31d57.5m', lon='-111d35.8m', height=2096*u.m)
Explanation: Let's first see the sky position at Kitt Peak National Observatory in Arizona.
End of explanation
kitt_peak = EarthLocation.of_site('Kitt Peak')
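# Hedged aside: inspect the geodetic coordinates of the site we just looked up.
print(kitt_peak.to_geodetic())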
Explanation: For known observing sites we can enter the name directly.
End of explanation
EarthLocation.get_site_names()
Explanation: We can see the list of observing sites:
End of explanation
observing_time = Time('2010-12-21 1:00')
Explanation: Let's check the altitude at 1 AM UTC, which is 6 PM AZ mountain time:
End of explanation
from astropy.coordinates import AltAz
aa = AltAz(location=kitt_peak, obstime=observing_time)
print(aa)
Explanation: Now we use these to create an AltAz frame object. Note that this frame has some other information about the atmosphere, which can be used to correct for atmospheric refraction. Here we leave that alone, because the default is to ignore this effect (by setting the pressure to 0).
End of explanation
hcg7_center.transform_to(aa)
Explanation: Now we can transform our ICRS SkyCoord to AltAz to get the location in the sky over Kitt Peak at the requested time.
End of explanation
hcg7_center.transform_to(aa).alt
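# Hedged aside: the related sec(z) attribute is a common airmass proxy (used again below).
hcg7_center.transform_to(aa).secz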
Explanation: To look at just the altitude we can use the alt attribute:
End of explanation
# this gives a Time object with an *array* of times
delta_hours = np.linspace(0, 6, 100)*u.hour
full_night_times = observing_time + delta_hours
full_night_aa_frames = AltAz(location=kitt_peak, obstime=full_night_times)
full_night_aa_coos = hcg7_center.transform_to(full_night_aa_frames)
plt.plot(delta_hours, full_night_aa_coos.secz)
plt.xlabel('Hours from 6pm AZ time')
plt.ylabel('Airmass [Sec(z)]')
plt.ylim(0.9,3)
plt.tight_layout()
Explanation: Alright, it's at 55 degrees at 6 PM, but that's pretty early to be observing. We could try various times one at a time to see if the airmass is lower at a darker time, but we can do better: let's try to create an airmass plot.
End of explanation
from astropy.coordinates import get_sun
full_night_sun_coos = get_sun(full_night_times).transform_to(full_night_aa_frames)
plt.plot(delta_hours, full_night_sun_coos.alt.deg)
plt.axhline(-18, color='k')
plt.xlabel('Hours from 6pm AZ time')
plt.ylabel('Sun altitude')
plt.tight_layout()
Explanation: Great! Looks like the lowest airmass is in another hour or so (7 PM). But that might still be twilight... When should we start observing for proper dark skies? Fortunately, Astropy provides a get_sun function that can be used to check this. Let's use it to check if we're in 18-degree twilight or not.
End of explanation
now = Time.now()
hcg7_center = SkyCoord(9.81625*u.deg, 0.88806*u.deg, frame='icrs')
kitt_peak_aa = AltAz(location=kitt_peak, obstime=now)
print(hcg7_center.transform_to(kitt_peak_aa))
Explanation: Looks like it's just below 18 degrees at 7 PM, so you should be good to go!
We can also look at the object altitude at the present time and date:
End of explanation |
3,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallel MULTINEST with 3ML
J. Michael Burgess
MULTINEST
MULTINEST is a Bayesian posterior sampler that has two distinct advantages over traditional MCMC
Step1: Import 3ML and astromodels to the workers
Step2: Now we set up the analysis in the normal way except the following two caveats
Step3: Finally we call MULTINEST. If all is set up properly, MULTINEST will gather the distributed objects and quickly sample the posterior
Step4: Viewing the results
Now we need to bring the BayesianAnalysis object back home. Unfortunately, not all objects can be brought back. So you must save figures to the workers. Future implementations of 3ML will allow for saving of the results to a dedicated format which can then be viewed on the local machine. More soon! | Python Code:
from ipyparallel import Client
rc = Client(profile='mpi')
# Grab a view
view = rc[:]
# Activate parallel cell magics
view.activate()
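# Hedged aside (not in the original): list the engine ids to confirm the MPI engines
# registered with the running controller.
rc.ids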
Explanation: Parallel MULTINEST with 3ML
J. Michael Burgess
MULTINEST
MULTINEST is a Bayesian posterior sampler that has two distinct advantages over traditional MCMC:
* Recovering multimodal posteriors
* In the case that the posterior does not have a single maximum, traditional MCMC
may miss other modes of the posterior
* Full marginal likelihood computation
* This allows for direct model comparison via Bayes factors
To run the MULTINEST sampler in 3ML, one must have the following software installed:
* MULTINEST (http://xxx.lanl.gov/abs/0809.3437) (git it here: https://github.com/JohannesBuchner/MultiNest)
* pymultinest (https://github.com/JohannesBuchner/PyMultiNest)
Parallelization
MULTINEST can be run in a single instance, but it can be incredibly slow. Luckily, it can be built with MPI support, enabling it to be run on a multicore workstation or cluster very efficiently.
There are multiple ways to invoke the parallel run of MULTINEST in 3ML: e.g., one can write a python script with all operations and invoke:
```bash
$> mpiexec -n <N> python my3MLscript.py
```
However, it is nice to be able to stay in the Jupyter environment with ipyparallel, which allows the user to easily switch between a single instance, desktop cores, and a cluster environment, all with the same code.
Setup
The user is expected to have an MPI distribution installed (open-mpi, mpich) and have compiled MULTINEST against the MPI library. Additionally, the user should set up an ipyparallel profile. Instructions can be found here: http://ipython.readthedocs.io/en/2.x/parallel/parallel_mpi.html
Initialize the MPI engine
Details for launching ipcluster on a distributed cluster are not covered here, but everything is the same otherwise.
In the directory where you want to run 3ML in the Jupyter notebook, launch an ipcontroller:
```bash
$> ipcontroller start --profile=mpi --ip='*'
```
Next, launch MPI with the desired number of engines:
```bash
$> mpiexec -n <N> ipengine --mpi=mpi4py --profile=mpi
```
Now, the user can head to the notebook and begin!
Running 3ML
First we get a client and connect it to the running profile
End of explanation
with view.sync_imports():
import threeML
import astromodels
Explanation: Import 3ML and astromodels to the workers
End of explanation
%%px
# Make GBM detector objects
src_selection = "0.-10."
nai0 = threeML.FermiGBM_TTE_Like('NAI0',
"glg_tte_n0_bn080916009_v01.fit",
"-10-0,100-200", # background selection
src_selection, # source interval
rspfile="glg_cspec_n0_bn080916009_v07.rsp")
nai3 = threeML.FermiGBM_TTE_Like('NAI3',"glg_tte_n3_bn080916009_v01.fit",
"-10-0,100-200",
src_selection,
rspfile="glg_cspec_n3_bn080916009_v07.rsp")
nai4 = threeML.FermiGBM_TTE_Like('NAI4',"glg_tte_n4_bn080916009_v01.fit",
"-10-0,100-200",
src_selection,
rspfile="glg_cspec_n4_bn080916009_v07.rsp")
bgo0 = threeML.FermiGBM_TTE_Like('BGO0',"glg_tte_b0_bn080916009_v01.fit",
"-10-0,100-200",
src_selection,
rspfile="glg_cspec_b0_bn080916009_v07.rsp")
# Select measurements
nai0.set_active_measurements("10.0-30.0", "40.0-950.0")
nai3.set_active_measurements("10.0-30.0", "40.0-950.0")
nai4.set_active_measurements("10.0-30.0", "40.0-950.0")
bgo0.set_active_measurements("250-43000")
# Set up 3ML likelihood object
triggerName = 'bn080916009'
ra = 121.8
dec = -61.3
data_list = threeML.DataList( nai0,nai3,nai4,bgo0 )
band = astromodels.Band()
GRB = threeML.PointSource( triggerName, ra, dec, spectral_shape=band )
model = threeML.Model( GRB )
# Set up Bayesian details
bayes = threeML.BayesianAnalysis(model, data_list)
band.K.prior = astromodels.Log_uniform_prior(lower_bound=1E-2, upper_bound=5)
band.xp.prior = astromodels.Log_uniform_prior(lower_bound=1E2, upper_bound=2E3)
band.alpha.prior = astromodels.Uniform_prior(lower_bound=-1.5,upper_bound=0.)
band.beta.prior = astromodels.Uniform_prior(lower_bound=-3.,upper_bound=-1.5)
Explanation: Now we set up the analysis in the normal way except the following two caveats:
* we must call the threeML module explicitly because ipyparallel does not support from <> import *
* we use the %%px cell magic (or %px line magic) to perform operations in the workers
End of explanation
%px samples = bayes.sample_multinest(n_live_points=400,resume=False)
Explanation: Finally we call MULTINEST. If all is set up properly, MULTINEST will gather the distributed objects and quickly sample the posterior
End of explanation
# Execute commands that allow for saving figures
# grabbing samples, etc
%%px --targets ::1
samples = bayes.raw_samples()
f = bayes.get_credible_intervals()
bayes.corner_plot(plot_contours=True, plot_density=False)
# Bring the raw samples local
raw_samples = view['samples'][0]
raw_samples['bn080916009.spectrum.main.Band.K']
Explanation: Viewing the results
Now we need to bring the BayesianAnalysis object back home. Unfortunately, not all objects can be brought back. So you must save figures to the workers. Future implementations of 3ML will allow for saving of the results to a dedicated format which can then be viewed on the local machine. More soon!
End of explanation |
3,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Distributions
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: In the previous chapter we used Bayes's Theorem to solve a cookie problem; then we solved it again using a Bayes table.
In this chapter, at the risk of testing your patience, we will solve it one more time using a Pmf object, which represents a "probability mass function".
I'll explain what that means, and why it is useful for Bayesian statistics.
We'll use Pmf objects to solve some more challenging problems and take one more step toward Bayesian statistics.
But we'll start with distributions.
Distributions
In statistics a distribution is a set of possible outcomes and their corresponding probabilities.
For example, if you toss a coin, there are two possible outcomes with
approximately equal probability.
If you roll a six-sided die, the set of possible outcomes is the numbers 1 to 6, and the probability associated with each outcome is 1/6.
To represent distributions, we'll use a library called empiricaldist.
An "empirical" distribution is based on data, as opposed to a
theoretical distribution.
We'll use this library throughout the book. I'll introduce the basic features in this chapter and we'll see additional features later.
Probability Mass Functions
If the outcomes in a distribution are discrete, we can describe the distribution with a probability mass function, or PMF, which is a function that maps from each possible outcome to its probability.
empiricaldist provides a class called Pmf that represents a
probability mass function.
To use Pmf you can import it like this
Step2: If that doesn't work, you might have to install empiricaldist; try running
!pip install empiricaldist
in a code cell or
pip install empiricaldist
in a terminal window.
The following example makes a Pmf that represents the outcome of a
coin toss.
Step3: Pmf creates an empty Pmf with no outcomes.
Then we can add new outcomes using the bracket operator.
In this example, the two outcomes are represented with strings, and they have the same probability, 0.5.
You can also make a Pmf from a sequence of possible outcomes.
The following example uses Pmf.from_seq to make a Pmf that represents a six-sided die.
Step4: In this example, all outcomes in the sequence appear once, so they all have the same probability, $1/6$.
More generally, outcomes can appear more than once, as in the following example
Step5: The letter M appears once out of 11 characters, so its probability is $1/11$.
The letter i appears 4 times, so its probability is $4/11$.
Since the letters in a string are not outcomes of a random process, I'll use the more general term "quantities" for the letters in the Pmf.
The Pmf class inherits from a Pandas Series, so anything you can do with a Series, you can also do with a Pmf.
For example, you can use the bracket operator to look up a quantity and get the corresponding probability.
Step6: In the word "Mississippi", about 36% of the letters are "s".
However, if you ask for the probability of a quantity that's not in the distribution, you get a KeyError.
Step7: You can also call a Pmf as if it were a function, with a letter in parentheses.
Step8: If the quantity is in the distribution the results are the same.
But if it is not in the distribution, the result is 0, not an error.
Step9: With parentheses, you can also provide a sequence of quantities and get a sequence of probabilities.
Step10: The quantities in a Pmf can be strings, numbers, or any other type that can be stored in the index of a Pandas Series.
If you are familiar with Pandas, that will help you work with Pmf objects.
But I will explain what you need to know as we go along.
The Cookie Problem Revisited
In this section I'll use a Pmf to solve the cookie problem from <<_TheCookieProblem>>.
Here's the statement of the problem again
Step11: This distribution, which contains the prior probability for each hypothesis, is called (wait for it) the prior distribution.
To update the distribution based on new data (the vanilla cookie),
we multiply the priors by the likelihoods. The likelihood
of drawing a vanilla cookie from Bowl 1 is 3/4. The likelihood
for Bowl 2 is 1/2.
Step12: The result is the unnormalized posteriors; that is, they don't add up to 1.
To make them add up to 1, we can use normalize, which is a method provided by Pmf.
Step13: The return value from normalize is the total probability of the data, which is $5/8$.
posterior, which contains the posterior probability for each hypothesis, is called (wait now) the posterior distribution.
Step14: From the posterior distribution we can select the posterior probability for Bowl 1
Step15: And the answer is 0.6.
One benefit of using Pmf objects is that it is easy to do successive updates with more data.
For example, suppose you put the first cookie back (so the contents of the bowls don't change) and draw again from the same bowl.
If the second cookie is also vanilla, we can do a second update like this
Step16: Now the posterior probability for Bowl 1 is almost 70%.
But suppose we do the same thing again and get a chocolate cookie.
Here are the likelihoods for the new data
Step17: And here's the update.
Step18: Now the posterior probability for Bowl 1 is about 53%.
After two vanilla cookies and one chocolate, the posterior probabilities are close to 50/50.
101 Bowls
Next let's solve a cookie problem with 101 bowls
Step19: We can use this array to make the prior distribution
Step20: As this example shows, we can initialize a Pmf with two parameters.
The first parameter is the prior probability; the second parameter is a sequence of quantities.
In this example, the probabilities are all the same, so we only have to provide one of them; it gets "broadcast" across the hypotheses.
Since all hypotheses have the same prior probability, this distribution is uniform.
Here are the first few hypotheses and their probabilities.
Step21: The likelihood of the data is the fraction of vanilla cookies in each bowl, which we can calculate using hypos
Step22: Now we can compute the posterior distribution in the usual way
Step23: The following figure shows the prior distribution and the posterior distribution after one vanilla cookie.
Step24: The posterior probability of Bowl 0 is 0 because it contains no vanilla cookies.
The posterior probability of Bowl 100 is the highest because it contains the most vanilla cookies.
In between, the shape of the posterior distribution is a line because the likelihoods are proportional to the bowl numbers.
Now suppose we put the cookie back, draw again from the same bowl, and get another vanilla cookie.
Here's the update after the second cookie
Step25: And here's what the posterior distribution looks like.
Step26: After two vanilla cookies, the high-numbered bowls have the highest posterior probabilities because they contain the most vanilla cookies; the low-numbered bowls have the lowest probabilities.
But suppose we draw again and get a chocolate cookie.
Here's the update
Step27: And here's the posterior distribution.
Step28: Now Bowl 100 has been eliminated because it contains no chocolate cookies.
But the high-numbered bowls are still more likely than the low-numbered bowls, because we have seen more vanilla cookies than chocolate.
In fact, the peak of the posterior distribution is at Bowl 67, which corresponds to the fraction of vanilla cookies in the data we've observed, $2/3$.
The quantity with the highest posterior probability is called the MAP, which stands for "maximum a posteriori probability", where "a posteriori" is unnecessary Latin for "posterior".
To compute the MAP, we can use the Series method idxmax
Step29: Or Pmf provides a more memorable name for the same thing
Step30: As you might suspect, this example isn't really about bowls; it's about estimating proportions.
Imagine that you have one bowl of cookies.
You don't know what fraction of cookies are vanilla, but you think it is equally likely to be any fraction from 0 to 1.
If you draw three cookies and two are vanilla, what proportion of cookies in the bowl do you think are vanilla?
The posterior distribution we just computed is the answer to that question.
We'll come back to estimating proportions in the next chapter.
But first let's use a Pmf to solve the dice problem.
The Dice Problem
In the previous chapter we solved the dice problem using a Bayes table.
Here's the statement of the problem
Step31: We can make the prior distribution like this
Step32: As in the previous example, the prior probability gets broadcast across the hypotheses.
The Pmf object has two attributes
Step33: Now we're ready to do the update.
Here's the likelihood of the data for each hypothesis.
Step34: And here's the update.
Step35: The posterior probability for the 6-sided die is $4/9$.
Now suppose I roll the same die again and get a 7.
Here are the likelihoods
Step36: The likelihood for the 6-sided die is 0 because it is not possible to get a 7 on a 6-sided die.
The other two likelihoods are the same as in the previous update.
Here's the update
Step38: After rolling a 1 and a 7, the posterior probability of the 8-sided die is about 69%.
Updating Dice
The following function is a more general version of the update in the previous section
Step39: The first parameter is a Pmf that represents the possible dice and their probabilities.
The second parameter is the outcome of rolling a die.
The first line selects quantities from the Pmf which represent the hypotheses.
Since the hypotheses are integers, we can use them to compute the likelihoods.
In general, if there are n sides on the die, the probability of any possible outcome is 1/n.
However, we have to check for impossible outcomes!
If the outcome exceeds the hypothetical number of sides on the die, the probability of that outcome is 0.
impossible is a Boolean Series that is True for each impossible outcome.
I use it as an index into likelihood to set the corresponding probabilities to 0.
Finally, I multiply pmf by the likelihoods and normalize.
Here's how we can use this function to compute the updates in the previous section.
I start with a fresh copy of the prior distribution
Step40: And use update_dice to do the updates.
Step41: The result is the same. We will see a version of this function in the next chapter.
Summary
This chapter introduces the empiricaldist module, which provides Pmf, which we use to represent a set of hypotheses and their probabilities.
empiricaldist is based on Pandas; the Pmf class inherits from the Pandas Series class and provides additional features specific to probability mass functions.
We'll use Pmf and other classes from empiricaldist throughout the book because they simplify the code and make it more readable.
But we could do the same things directly with Pandas.
We use a Pmf to solve the cookie problem and the dice problem, which we saw in the previous chapter.
With a Pmf it is easy to perform sequential updates with multiple pieces of data.
We also solved a more general version of the cookie problem, with 101 bowls rather than two.
Then we computed the MAP, which is the quantity with the highest posterior probability.
In the next chapter, I'll introduce the Euro problem, and we will use the binomial distribution.
And, at last, we will make the leap from using Bayes's Theorem to doing Bayesian statistics.
But first you might want to work on the exercises.
Exercises
Exercise
Step42: Exercise
Step43: Exercise
Step44: Exercise | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Distributions
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
from empiricaldist import Pmf
Explanation: In the previous chapter we used Bayes's Theorem to solve a cookie problem; then we solved it again using a Bayes table.
In this chapter, at the risk of testing your patience, we will solve it one more time using a Pmf object, which represents a "probability mass function".
I'll explain what that means, and why it is useful for Bayesian statistics.
We'll use Pmf objects to solve some more challenging problems and take one more step toward Bayesian statistics.
But we'll start with distributions.
Distributions
In statistics a distribution is a set of possible outcomes and their corresponding probabilities.
For example, if you toss a coin, there are two possible outcomes with
approximately equal probability.
If you roll a six-sided die, the set of possible outcomes is the numbers 1 to 6, and the probability associated with each outcome is 1/6.
To represent distributions, we'll use a library called empiricaldist.
An "empirical" distribution is based on data, as opposed to a
theoretical distribution.
We'll use this library throughout the book. I'll introduce the basic features in this chapter and we'll see additional features later.
Probability Mass Functions
If the outcomes in a distribution are discrete, we can describe the distribution with a probability mass function, or PMF, which is a function that maps from each possible outcome to its probability.
empiricaldist provides a class called Pmf that represents a
probability mass function.
To use Pmf you can import it like this:
End of explanation
coin = Pmf()
coin['heads'] = 1/2
coin['tails'] = 1/2
coin
Explanation: If that doesn't work, you might have to install empiricaldist; try running
!pip install empiricaldist
in a code cell or
pip install empiricaldist
in a terminal window.
The following example makes a Pmf that represents the outcome of a
coin toss.
End of explanation
die = Pmf.from_seq([1,2,3,4,5,6])
die
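# Hedged aside (not in the book's text): Pmf also provides summary statistics,
# e.g. the mean of a fair six-sided die is 3.5.
die.mean()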
Explanation: Pmf creates an empty Pmf with no outcomes.
Then we can add new outcomes using the bracket operator.
In this example, the two outcomes are represented with strings, and they have the same probability, 0.5.
You can also make a Pmf from a sequence of possible outcomes.
The following example uses Pmf.from_seq to make a Pmf that represents a six-sided die.
End of explanation
letters = Pmf.from_seq(list('Mississippi'))
letters
Explanation: In this example, all outcomes in the sequence appear once, so they all have the same probability, $1/6$.
More generally, outcomes can appear more than once, as in the following example:
End of explanation
letters['s']
Explanation: The letter M appears once out of 11 characters, so its probability is $1/11$.
The letter i appears 4 times, so its probability is $4/11$.
Since the letters in a string are not outcomes of a random process, I'll use the more general term "quantities" for the letters in the Pmf.
The Pmf class inherits from a Pandas Series, so anything you can do with a Series, you can also do with a Pmf.
For example, you can use the bracket operator to look up a quantity and get the corresponding probability.
End of explanation
try:
letters['t']
except KeyError as e:
print(type(e))
Explanation: In the word "Mississippi", about 36% of the letters are "s".
However, if you ask for the probability of a quantity that's not in the distribution, you get a KeyError.
End of explanation
letters('s')
Explanation: You can also call a Pmf as if it were a function, with a letter in parentheses.
End of explanation
letters('t')
Explanation: If the quantity is in the distribution the results are the same.
But if it is not in the distribution, the result is 0, not an error.
End of explanation
die([1,4,7])
Explanation: With parentheses, you can also provide a sequence of quantities and get a sequence of probabilities.
End of explanation
prior = Pmf.from_seq(['Bowl 1', 'Bowl 2'])
prior
Explanation: The quantities in a Pmf can be strings, numbers, or any other type that can be stored in the index of a Pandas Series.
If you are familiar with Pandas, that will help you work with Pmf objects.
But I will explain what you need to know as we go along.
The Cookie Problem Revisited
In this section I'll use a Pmf to solve the cookie problem from <<_TheCookieProblem>>.
Here's the statement of the problem again:
Suppose there are two bowls of cookies.
Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies.
Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.
Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?
Here's a Pmf that represents the two hypotheses and their prior probabilities:
End of explanation
likelihood_vanilla = [0.75, 0.5]
posterior = prior * likelihood_vanilla
posterior
Explanation: This distribution, which contains the prior probability for each hypothesis, is called (wait for it) the prior distribution.
To update the distribution based on new data (the vanilla cookie),
we multiply the priors by the likelihoods. The likelihood
of drawing a vanilla cookie from Bowl 1 is 3/4. The likelihood
for Bowl 2 is 1/2.
End of explanation
posterior.normalize()
Explanation: The result is the unnormalized posteriors; that is, they don't add up to 1.
To make them add up to 1, we can use normalize, which is a method provided by Pmf.
End of explanation
posterior
Explanation: The return value from normalize is the total probability of the data, which is $5/8$.
posterior, which contains the posterior probability for each hypothesis, is called (wait now) the posterior distribution.
End of explanation
posterior('Bowl 1')
Explanation: From the posterior distribution we can select the posterior probability for Bowl 1:
End of explanation
posterior *= likelihood_vanilla
posterior.normalize()
posterior
Explanation: And the answer is 0.6.
One benefit of using Pmf objects is that it is easy to do successive updates with more data.
For example, suppose you put the first cookie back (so the contents of the bowls don't change) and draw again from the same bowl.
If the second cookie is also vanilla, we can do a second update like this:
End of explanation
likelihood_chocolate = [0.25, 0.5]
Explanation: Now the posterior probability for Bowl 1 is almost 70%.
But suppose we do the same thing again and get a chocolate cookie.
Here are the likelihoods for the new data:
End of explanation
posterior *= likelihood_chocolate
posterior.normalize()
posterior
Explanation: And here's the update.
End of explanation
import numpy as np
hypos = np.arange(101)
Explanation: Now the posterior probability for Bowl 1 is about 53%.
After two vanilla cookies and one chocolate, the posterior probabilities are close to 50/50.
101 Bowls
Next let's solve a cookie problem with 101 bowls:
Bowl 0 contains 0% vanilla cookies,
Bowl 1 contains 1% vanilla cookies,
Bowl 2 contains 2% vanilla cookies,
and so on, up to
Bowl 99 contains 99% vanilla cookies, and
Bowl 100 contains all vanilla cookies.
As in the previous version, there are only two kinds of cookies, vanilla and chocolate. So Bowl 0 is all chocolate cookies, Bowl 1 is 99% chocolate, and so on.
Suppose we choose a bowl at random, choose a cookie at random, and it turns out to be vanilla. What is the probability that the cookie came from Bowl $x$, for each value of $x$?
To solve this problem, I'll use np.arange to make an array that represents 101 hypotheses, numbered from 0 to 100.
End of explanation
prior = Pmf(1, hypos)
prior.normalize()
Explanation: We can use this array to make the prior distribution:
End of explanation
prior.head()
Explanation: As this example shows, we can initialize a Pmf with two parameters.
The first parameter is the prior probability; the second parameter is a sequence of quantities.
In this example, the probabilities are all the same, so we only have to provide one of them; it gets "broadcast" across the hypotheses.
Since all hypotheses have the same prior probability, this distribution is uniform.
Here are the first few hypotheses and their probabilities.
End of explanation
likelihood_vanilla = hypos/100
likelihood_vanilla[:5]
Explanation: The likelihood of the data is the fraction of vanilla cookies in each bowl, which we can calculate using hypos:
End of explanation
posterior1 = prior * likelihood_vanilla
posterior1.normalize()
posterior1.head()
Explanation: Now we can compute the posterior distribution in the usual way:
End of explanation
from utils import decorate
def decorate_bowls(title):
decorate(xlabel='Bowl #',
ylabel='PMF',
title=title)
prior.plot(label='prior', color='C5')
posterior1.plot(label='posterior', color='C4')
decorate_bowls('Posterior after one vanilla cookie')
Explanation: The following figure shows the prior distribution and the posterior distribution after one vanilla cookie.
End of explanation
posterior2 = posterior1 * likelihood_vanilla
posterior2.normalize()
Explanation: The posterior probability of Bowl 0 is 0 because it contains no vanilla cookies.
The posterior probability of Bowl 100 is the highest because it contains the most vanilla cookies.
In between, the shape of the posterior distribution is a line because the likelihoods are proportional to the bowl numbers.
Now suppose we put the cookie back, draw again from the same bowl, and get another vanilla cookie.
Here's the update after the second cookie:
End of explanation
posterior2.plot(label='posterior', color='C4')
decorate_bowls('Posterior after two vanilla cookies')
Explanation: And here's what the posterior distribution looks like.
End of explanation
likelihood_chocolate = 1 - hypos/100
posterior3 = posterior2 * likelihood_chocolate
posterior3.normalize()
Explanation: After two vanilla cookies, the high-numbered bowls have the highest posterior probabilities because they contain the most vanilla cookies; the low-numbered bowls have the lowest probabilities.
But suppose we draw again and get a chocolate cookie.
Here's the update:
End of explanation
posterior3.plot(label='posterior', color='C4')
decorate_bowls('Posterior after 2 vanilla, 1 chocolate')
Explanation: And here's the posterior distribution.
End of explanation
posterior3.idxmax()
Explanation: Now Bowl 100 has been eliminated because it contains no chocolate cookies.
But the high-numbered bowls are still more likely than the low-numbered bowls, because we have seen more vanilla cookies than chocolate.
In fact, the peak of the posterior distribution is at Bowl 67, which corresponds to the fraction of vanilla cookies in the data we've observed, $2/3$.
The quantity with the highest posterior probability is called the MAP, which stands for "maximum a posteriori probability", where "a posteriori" is unnecessary Latin for "posterior".
To compute the MAP, we can use the Series method idxmax:
End of explanation
posterior3.max_prob()
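# Hedged aside (not in the original): for comparison, the posterior mean is a bit
# lower than the MAP because the posterior is skewed toward the low-numbered bowls.
posterior3.mean()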
Explanation: Or Pmf provides a more memorable name for the same thing:
End of explanation
hypos = [6, 8, 12]
Explanation: As you might suspect, this example isn't really about bowls; it's about estimating proportions.
Imagine that you have one bowl of cookies.
You don't know what fraction of cookies are vanilla, but you think it is equally likely to be any fraction from 0 to 1.
If you draw three cookies and two are vanilla, what proportion of cookies in the bowl do you think are vanilla?
The posterior distribution we just computed is the answer to that question.
We'll come back to estimating proportions in the next chapter.
But first let's use a Pmf to solve the dice problem.
The Dice Problem
In the previous chapter we solved the dice problem using a Bayes table.
Here's the statement of the problem:
Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die.
I choose one of the dice at random, roll it, and report that the outcome is a 1.
What is the probability that I chose the 6-sided die?
Let's solve it using a Pmf.
I'll use integers to represent the hypotheses:
End of explanation
prior = Pmf(1/3, hypos)
prior
Explanation: We can make the prior distribution like this:
End of explanation
prior.qs
prior.ps
Explanation: As in the previous example, the prior probability gets broadcast across the hypotheses.
The Pmf object has two attributes:
qs contains the quantities in the distribution;
ps contains the corresponding probabilities.
End of explanation
likelihood1 = 1/6, 1/8, 1/12
Explanation: Now we're ready to do the update.
Here's the likelihood of the data for each hypothesis.
End of explanation
posterior = prior * likelihood1
posterior.normalize()
posterior
Explanation: And here's the update.
End of explanation
likelihood2 = 0, 1/8, 1/12
Explanation: The posterior probability for the 6-sided die is $4/9$.
Now suppose I roll the same die again and get a 7.
Here are the likelihoods:
End of explanation
posterior *= likelihood2
posterior.normalize()
posterior
Explanation: The likelihood for the 6-sided die is 0 because it is not possible to get a 7 on a 6-sided die.
The other two likelihoods are the same as in the previous update.
Here's the update:
End of explanation
def update_dice(pmf, data):
    """Update pmf based on new data."""
hypos = pmf.qs
likelihood = 1 / hypos
impossible = (data > hypos)
likelihood[impossible] = 0
pmf *= likelihood
pmf.normalize()
Explanation: After rolling a 1 and a 7, the posterior probability of the 8-sided die is about 69%.
Updating Dice
The following function is a more general version of the update in the previous section:
End of explanation
pmf = prior.copy()
pmf
Explanation: The first parameter is a Pmf that represents the possible dice and their probabilities.
The second parameter is the outcome of rolling a die.
The first line selects quantities from the Pmf which represent the hypotheses.
Since the hypotheses are integers, we can use them to compute the likelihoods.
In general, if there are n sides on the die, the probability of any possible outcome is 1/n.
However, we have to check for impossible outcomes!
If the outcome exceeds the hypothetical number of sides on the die, the probability of that outcome is 0.
impossible is a Boolean Series that is True for each impossible outcome.
I use it as an index into likelihood to set the corresponding probabilities to 0.
Finally, I multiply pmf by the likelihoods and normalize.
Here's how we can use this function to compute the updates in the previous section.
I start with a fresh copy of the prior distribution:
End of explanation
update_dice(pmf, 1)
update_dice(pmf, 7)
pmf
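# Hedged aside: the most probable die after observing a 1 and a 7.
pmf.max_prob()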
Explanation: And use update_dice to do the updates.
End of explanation
# Solution goes here
Explanation: The result is the same. We will see a version of this function in the next chapter.
Summary
This chapter introduces the empiricaldist module, which provides Pmf, which we use to represent a set of hypotheses and their probabilities.
empiricaldist is based on Pandas; the Pmf class inherits from the Pandas Series class and provides additional features specific to probability mass functions.
We'll use Pmf and other classes from empiricaldist throughout the book because they simplify the code and make it more readable.
But we could do the same things directly with Pandas.
We use a Pmf to solve the cookie problem and the dice problem, which we saw in the previous chapter.
With a Pmf it is easy to perform sequential updates with multiple pieces of data.
We also solved a more general version of the cookie problem, with 101 bowls rather than two.
Then we computed the MAP, which is the quantity with the highest posterior probability.
In the next chapter, I'll introduce the Euro problem, and we will use the binomial distribution.
And, at last, we will make the leap from using Bayes's Theorem to doing Bayesian statistics.
But first you might want to work on the exercises.
Exercises
Exercise: Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die.
I choose one of the dice at random, roll it four times, and get 1, 3, 5, and 7.
What is the probability that I chose the 8-sided die?
You can use the update_dice function or do the update yourself.
End of explanation
# Solution goes here
Explanation: Exercise: In the previous version of the dice problem, the prior probabilities are the same because the box contains one of each die.
But suppose the box contains 1 die that is 4-sided, 2 dice that are 6-sided, 3 dice that are 8-sided, 4 dice that are 12-sided, and 5 dice that are 20-sided.
I choose a die, roll it, and get a 7.
What is the probability that I chose an 8-sided die?
Hint: To make the prior distribution, call Pmf with two parameters.
End of explanation
# Solution goes here
# Solution goes here
Explanation: Exercise: Suppose I have two sock drawers.
One contains equal numbers of black and white socks.
The other contains equal numbers of red, green, and blue socks.
Suppose I choose a drawer at random, choose two socks at random, and I tell you that I got a matching pair.
What is the probability that the socks are white?
For simplicity, let's assume that there are so many socks in both drawers that removing one sock makes a negligible change to the proportions.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: Here's a problem from Bayesian Data Analysis:
Elvis Presley had a twin brother (who died at birth). What is the probability that Elvis was an identical twin?
Hint: In 1935, about 2/3 of twins were fraternal and 1/3 were identical.
End of explanation |
3,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ARDC Training
Step1: Browse the available Data Cubes
Step2: Pick a product
Use the platform and product names from the previous block to select a Data Cube.
Step3: Display Latitude-Longitude and Time Bounds of the Data Cube
Step4: Visualize Data Cube Region
Step5: Pick a smaller analysis region and display that region
Try to keep your region to less than 0.2-deg x 0.2-deg for rapid processing. You can click on the map above to find the Lat-Lon coordinates of any location. You will want to identify a region with an inland water body and some vegetation. Pick a time window of several years.
Step6: Load the dataset and the required spectral bands or other parameters
After loading, you will view the Xarray dataset. Notice the dimensions represent the number of pixels in your latitude and longitude dimension as well as the number of time slices (time) in your time series.
Step7: Preparing the data
We will filter out the clouds and the water using the Landsat pixel_qa information. Next, we will calculate the values of NDVI (vegetation index) and TSM (water quality).
Step8: Combine everything into one XARRAY for further analysis
Step9: Define a path for a transect
A transect is just a line that will run across our region of interest. Use the display map above to find the end points of your desired line. If you click on the map it will give you precise Lat-Lon positions for a point.
Start with a line across a mix of water and land
Step10: Plot the transect line
Step11: Find the nearest pixels along the transect path
Step12: Groundwork for Transect (2-D) and Hovmöller (3-D) Plots
Step13: Mask Clouds
Step14: Select an acquisition date and then plot a 2D transect without clouds
Step15: Select one of the XARRAY parameters for analysis
Step16: Create a 2D Transect plot of the "band" for one date
Step17: Create a 2D Transect plot of NDVI for one date
Step18: Create a 3D Hovmoller plot of NDVI for the entire time series
Step19: Create a 2D Transect plot of water existence for one date
Step20: Create a 3D Hovmoller plot of water extent for the entire time series
Step21: Create a 2D Transect plot of water quality (TSM) for one date
Step22: Create a 3D Hovmoller plot of water quality (TSM) for the entire time series | Python Code:
import xarray as xr
import numpy as np
import datacube
import utils.data_cube_utilities.data_access_api as dc_api
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
api = dc_api.DataAccessApi()
dc = api.dc
Explanation: ARDC Training: Python Notebooks
Task-E: This notebook will demonstrate 2D transect analyses and 3D Hovmoller plots. We will run these for NDVI (land) and TSM (water quality) to show the spatial and temporal variation of data along a line (transect) for a given time slice and for the entire time series.
Import the Datacube Configuration
End of explanation
list_of_products = dc.list_products()
netCDF_products = list_of_products[list_of_products['format'] == 'NetCDF']
netCDF_products
Explanation: Browse the available Data Cubes
End of explanation
# Change the data platform and data cube here
platform = 'LANDSAT_7'
product = 'ls7_usgs_sr_scene'
collection = 'c1'
level = 'l2'
Explanation: Pick a product
Use the platform and product names from the previous block to select a Data Cube.
End of explanation
from utils.data_cube_utilities.dc_time import _n64_to_datetime, dt_to_str
extents = api.get_full_dataset_extent(platform = platform, product = product, measurements=[])
latitude_extents = (min(extents['latitude'].values),max(extents['latitude'].values))
longitude_extents = (min(extents['longitude'].values),max(extents['longitude'].values))
time_extents = (min(extents['time'].values),max(extents['time'].values))
print("Latitude Extents:", latitude_extents)
print("Longitude Extents:", longitude_extents)
print("Time Extents:", list(map(dt_to_str, map(_n64_to_datetime, time_extents))))
Explanation: Display Latitude-Longitude and Time Bounds of the Data Cube
End of explanation
## The code below renders a map that can be used to orient yourself with the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = latitude_extents, longitude = longitude_extents)
Explanation: Visualize Data Cube Region
End of explanation
## Vietnam - Central Lam Dong Province ##
# longitude_extents = (107.0, 107.2)
# latitude_extents = (11.7, 12.0)
## Vietnam Ho Tri An Lake
# longitude_extents = (107.0, 107.2)
# latitude_extents = (11.1, 11.3)
## Senegal - Delta du Saloum
latitude_extents = (13.55, 14.12)
longitude_extents = (-16.80, -16.38)
time_extents = ('2005-01-01', '2005-12-31')
display_map(latitude = latitude_extents, longitude = longitude_extents)
Explanation: Pick a smaller analysis region and display that region
Try to keep your region to less than 0.2-deg x 0.2-deg for rapid processing. You can click on the map above to find the Lat-Lon coordinates of any location. You will want to identify a region with an inland water body and some vegetation. Pick a time window of several years.
End of explanation
landsat_dataset = dc.load(latitude = latitude_extents,
longitude = longitude_extents,
platform = platform,
time = time_extents,
product = product,
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'pixel_qa'])
landsat_dataset
#view the dimensions and sample content from the cube
Explanation: Load the dataset and the required spectral bands or other parameters
After loading, you will view the Xarray dataset. Notice the dimensions represent the number of pixels in your latitude and longitude dimension as well as the number of time slices (time) in your time series.
End of explanation
from utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask
plt_col_lvl_params = dict(platform=platform, collection=collection, level=level)
clear_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['clear'], **plt_col_lvl_params)
water_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['water'], **plt_col_lvl_params)
shadow_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['cld_shd'], **plt_col_lvl_params)
cloud_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['cloud'], **plt_col_lvl_params)
clean_xarray = (clear_xarray | water_xarray).rename("clean_mask")
def NDVI(dataset):
return ((dataset.nir - dataset.red)/(dataset.nir + dataset.red)).rename("NDVI")
ndvi_xarray = NDVI(landsat_dataset) # Vegetation Index
from utils.data_cube_utilities.dc_water_quality import tsm
tsm_xarray = tsm(landsat_dataset, clean_mask = water_xarray.values.astype(bool) ).tsm
Explanation: Preparing the data
We will filter out the clouds and the water using the Landsat pixel_qa information. Next, we will calculate the values of NDVI (vegetation index) and TSM (water quality).
End of explanation
combined_dataset = xr.merge([landsat_dataset,
clean_xarray,
clear_xarray,
water_xarray,
shadow_xarray,
cloud_xarray,
ndvi_xarray,
tsm_xarray])
# Copy original crs to merged dataset
combined_dataset = combined_dataset.assign_attrs(landsat_dataset.attrs)
Explanation: Combine everything into one XARRAY for further analysis
End of explanation
# Water and Land Mixed Examples
mid_lon = np.mean(longitude_extents)
mid_lat = np.mean(latitude_extents)
# North-South Path
start = (latitude_extents[0], mid_lon)
end = (latitude_extents[1], mid_lon)
# East-West Path
# start = (mid_lat, longitude_extents[0])
# end = (mid_lat, longitude_extents[1])
# East-West Path for Lake Ho Tri An
# start = ( 11.25, 107.02 )
# end = ( 11.25, 107.18 )
Explanation: Define a path for a transect
A transect is just a line that will run across our region of interest. Use the display map above to find the end points of your desired line. If you click on the map it will give you precise Lat-Lon positions for a point.
Start with a line across a mix of water and land
End of explanation
import folium
import numpy as np
from folium.features import CustomIcon
def plot_a_path(points , zoom = 15):
xs,ys = zip(*points)
map_center_point = (np.mean(xs), np.mean(ys))
the_map = folium.Map(location=[map_center_point[0], map_center_point[1]], zoom_start = zoom, tiles='http://mt1.google.com/vt/lyrs=y&z={z}&x={x}&y={y}', attr = "Google Attribution")
path = folium.PolyLine(locations=points, weight=5, color = 'orange')
the_map.add_child(path)
start = ( xs[0] ,ys[0] )
end = ( xs[-1],ys[-1])
return the_map
plot_a_path([start,end])
Explanation: Plot the transect line
End of explanation
from utils.data_cube_utilities.transect import line_scan
import numpy as np
def get_index_at(coords, ds):
'''Returns an integer index pair.'''
lat = coords[0]
lon = coords[1]
nearest_lat = ds.sel(latitude = lat, method = 'nearest').latitude.values
nearest_lon = ds.sel(longitude = lon, method = 'nearest').longitude.values
lat_index = np.where(ds.latitude.values == nearest_lat)[0]
lon_index = np.where(ds.longitude.values == nearest_lon)[0]
return (int(lat_index), int(lon_index))
def create_pixel_trail(start, end, ds):
a = get_index_at(start, ds)
b = get_index_at(end, ds)
indices = line_scan.line_scan(a, b)
pixels = [ ds.isel(latitude = x, longitude = y) for x, y in indices]
return pixels
list_of_pixels_along_segment = create_pixel_trail(start, end, landsat_dataset)
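# Hedged aside (not in the original notebook): how many pixels the transect passes through.
len(list_of_pixels_along_segment)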
Explanation: Find the nearest pixels along the transect path
End of explanation
import xarray
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from datetime import datetime
import time
def plot_list_of_pixels(list_of_pixels, band_name, y = None):
start = (
"{0:.2f}".format(float(list_of_pixels[0].latitude.values )),
"{0:.2f}".format(float(list_of_pixels[0].longitude.values))
)
end = (
"{0:.2f}".format(float(list_of_pixels[-1].latitude.values)),
"{0:.2f}".format(float(list_of_pixels[-1].longitude.values))
)
def reformat_n64(t):
return time.strftime("%Y.%m.%d", time.gmtime(t.astype(int)/1000000000))
def pixel_to_array(pixel):
return(pixel.values)
def figure_ratio(x,y, fixed_width = 10):
width = fixed_width
height = y * (fixed_width / x)
return (width, height)
pixel_array = np.transpose([pixel_to_array(pix) for pix in list_of_pixels])
#If the data has one acquisition, then plot transect (2-D), else Hovmöller (3-D)
if y.size == 1:
plt.figure(figsize = (15,5))
plt.scatter(np.arange(pixel_array.size), pixel_array)
plt.title("Transect (2-D) \n Acquisition date: {}".format(reformat_n64(y)))
plt.xlabel("Pixels along the transect \n {} - {} \n ".format(start,end))
plt.ylabel(band_name)
else:
m = FuncFormatter(lambda x :x )
figure = plt.figure(figsize = figure_ratio(len(list_of_pixels),
len(list_of_pixels[0].values),
fixed_width = 15))
number_of_y_ticks = 5
ax = plt.gca()
cax = ax.imshow(pixel_array, interpolation='none')
figure.colorbar(cax,fraction=0.110, pad=0.04)
ax.set_title("Hovmöller (3-D) \n Acquisition range: {} - {} \n ".format(reformat_n64(y[0]),reformat_n64(y[-1])))
plt.xlabel("Pixels along the transect \n {} - {} \n ".format(start,end))
ax.get_yaxis().set_major_formatter( FuncFormatter(lambda x, p: reformat_n64(list_of_pixels[0].time.values[int(x)]) if int(x) < len(list_of_pixels[0].time) else ""))
plt.ylabel("Time")
plt.show()
def transect_plot(start,
end,
da):
if type(da) is not xarray.DataArray and (type(da) is xarray.Dataset) :
raise Exception('You should be passing in a data-array, not a Dataset')
pixels = create_pixel_trail(start, end,da)
dates = da.time.values
lats = [x.latitude.values for x in pixels]
lons = [x.longitude.values for x in pixels]
plot_list_of_pixels(pixels, da.name, y = dates)
pixels = create_pixel_trail(start, end, landsat_dataset)
t = 2
subset = list( map(lambda x: x.isel(time = t), pixels))
Explanation: Groundwork for Transect (2-D) and Hovmöller (3-D) Plots
End of explanation
from utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask
clean_mask = landsat_qa_clean_mask(landsat_dataset, platform=platform,
collection=collection, level=level)
cloudless_dataset = landsat_dataset.where(clean_mask)
Explanation: Mask Clouds
End of explanation
# select an acquisition number from the start (t=0) to "time" using the array limits above
acquisition_number = 10
#If plotted will create the 2-D transect
cloudless_dataset_for_acq_no = cloudless_dataset.isel(time = acquisition_number)
#If Plotted will create the 3-D Hovmoller plot for a portion of the time series (min to max)
min_acq = 1
max_acq = 4
cloudless_dataset_from_1_to_acq_no = cloudless_dataset.isel(time = slice(min_acq, max_acq))
Explanation: Select an acquisition date and then plot a 2D transect without clouds
End of explanation
band = 'green'
Explanation: Select one of the XARRAY parameters for analysis
End of explanation
transect_plot(start, end, cloudless_dataset_for_acq_no[band])
Explanation: Create a 2D Transect plot of the "band" for one date
End of explanation
transect_plot(start, end, NDVI(cloudless_dataset_for_acq_no))
Explanation: Create a 2D Transect plot of NDVI for one date
End of explanation
transect_plot(start, end, NDVI(cloudless_dataset))
Explanation: Create a 3D Hovmoller plot of NDVI for the entire time series
End of explanation
transect_plot(start, end, water_xarray.isel(time = acquisition_number))
Explanation: Create a 2D Transect plot of water existence for one date
End of explanation
transect_plot(start, end, water_xarray)
Explanation: Create a 3D Hovmoller plot of water extent for the entire time series
End of explanation
transect_plot(start, end, tsm_xarray.isel(time = acquisition_number))
Explanation: Create a 2D Transect plot of water quality (TSM) for one date
End of explanation
transect_plot(start, end, tsm_xarray)
Explanation: Create a 3D Hovmoller plot of water quality (TSM) for one date
End of explanation |
3,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
Step1: The Dice problem
Suppose I have a box of dice that contains a 4-sided die, a 6-sided
die, an 8-sided die, a 12-sided die, and a 20-sided die.
Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?
The Dice class inherits Update and provides Likelihood
Step2: Here's what the update looks like
Step3: And here's what it looks like after more data
Step4: The train problem
The Train problem has the same likelihood as the Dice problem.
Step5: But there are many more hypotheses
Step6: Here's what the posterior looks like
Step7: And here's how we can compute the posterior mean
Step8: Or we can just use the method
Step10: Sensitivity to the prior
Here's a function that solves the train problem for different priors and data
Step11: Let's run it with the same dataset and several uniform priors
Step12: The results are quite sensitive to the prior, even with several observations.
Power law prior
Now let's try it with a power law prior.
Step13: Here's what a power law prior looks like, compared to a uniform prior
Step14: Now let's see what the posteriors look like after observing one train.
Step15: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
Step16: Credible intervals
To compute credible intervals, we can use the Percentile method on the posterior.
Step17: If you have to compute more than a few percentiles, it is more efficient to compute a CDF.
Also, a CDF can be a better way to visualize distributions.
Step18: Cdf also provides Percentile
Step19: Exercises
Exercise
Step24: Exercise | Python Code:
from __future__ import print_function, division
% matplotlib inline
import thinkplot
from thinkbayes2 import Hist, Pmf, Suite, Cdf
Explanation: Think Bayes: Chapter 3
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
class Dice(Suite):
def Likelihood(self, data, hypo):
if hypo < data:
return 0
else:
return 1/hypo
Explanation: The Dice problem
Suppose I have a box of dice that contains a 4-sided die, a 6-sided
die, an 8-sided die, a 12-sided die, and a 20-sided die.
Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?
The Dice class inherits Update and provides Likelihood
End of explanation
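# A minimal sketch of what Suite.Update is assumed to do under the hood for this problem:
# multiply each prior probability by the likelihood of the data, then normalize.
priors = {h: 1.0 / 5 for h in [4, 6, 8, 12, 20]}
unnormalized = {h: p * (0 if h < 6 else 1.0 / h) for h, p in priors.items()}
total = sum(unnormalized.values())
posterior_sketch = {h: v / total for h, v in unnormalized.items()}
# posterior_sketch should match the Dice suite after Update(6) in the next cell.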
suite = Dice([4, 6, 8, 12, 20])
suite.Update(6)
suite.Print()
Explanation: Here's what the update looks like:
End of explanation
for roll in [6, 8, 7, 7, 5, 4]:
suite.Update(roll)
suite.Print()
Explanation: And here's what it looks like after more data:
End of explanation
class Train(Suite):
def Likelihood(self, data, hypo):
if hypo < data:
return 0
else:
return 1/hypo
Explanation: The train problem
The Train problem has the same likelihood as the Dice problem.
End of explanation
hypos = range(1, 1001)
suite = Train(hypos)
suite.Update(60)
Explanation: But there are many more hypotheses
End of explanation
thinkplot.Pdf(suite)
Explanation: Here's what the posterior looks like
End of explanation
def Mean(suite):
total = 0
for hypo, prob in suite.Items():
total += hypo * prob
return total
Mean(suite)
Explanation: And here's how we can compute the posterior mean
End of explanation
suite.Mean()
Explanation: Or we can just use the method
End of explanation
def MakePosterior(high, dataset, constructor=Train):
    """Solves the train problem.

    high: int maximum number of trains
    dataset: sequence of observed train numbers
    constructor: function used to construct the Train object

    returns: Train object representing the posterior suite
    """
hypos = range(1, high+1)
suite = constructor(hypos)
for data in dataset:
suite.Update(data)
return suite
Explanation: Sensitivity to the prior
Here's a function that solves the train problem for different priors and data
End of explanation
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
suite = MakePosterior(high, dataset)
print(high, suite.Mean())
Explanation: Let's run it with the same dataset and several uniform priors
End of explanation
class Train2(Train):
def __init__(self, hypos, alpha=1.0):
Pmf.__init__(self)
for hypo in hypos:
self[hypo] = hypo**(-alpha)
self.Normalize()
Explanation: The results are quite sensitive to the prior, even with several observations.
Power law prior
Now let's try it with a power law prior.
End of explanation
high = 100
hypos = range(1, high+1)
suite1 = Train(hypos)
suite2 = Train2(hypos)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
Explanation: Here's what a power law prior looks like, compared to a uniform prior
End of explanation
dataset = [60]
high = 1000
thinkplot.PrePlot(num=2)
constructors = [Train, Train2]
labels = ['uniform', 'power law']
for constructor, label in zip(constructors, labels):
suite = MakePosterior(high, dataset, constructor)
suite.label = label
thinkplot.Pmf(suite)
thinkplot.Config(xlabel='Number of trains',
ylabel='Probability')
Explanation: Now let's see what the posteriors look like after observing one train.
End of explanation
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
suite = MakePosterior(high, dataset, Train2)
print(high, suite.Mean())
Explanation: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
End of explanation
hypos = range(1, 1001)
suite = Train(hypos)
suite.Update(60)
suite.Percentile(5), suite.Percentile(95)
Explanation: Credible intervals
To compute credible intervals, we can use the Percentile method on the posterior.
End of explanation
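# Sketch, for intuition: a percentile can be read off the posterior by accumulating
# probability mass over the sorted hypotheses until the requested fraction is reached.
def percentile_sketch(pmf, percentage):
    total = 0
    for hypo, prob in sorted(pmf.Items()):
        total += prob
        if total >= percentage / 100.0:
            return hypo
percentile_sketch(suite, 50)  # should agree (up to discreteness) with suite.Percentile(50)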
cdf = Cdf(suite)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Number of trains',
ylabel='Cumulative Probability',
legend=False)
Explanation: If you have to compute more than a few percentiles, it is more efficient to compute a CDF.
Also, a CDF can be a better way to visualize distributions.
End of explanation
cdf.Percentile(5), cdf.Percentile(95)
Explanation: Cdf also provides Percentile
End of explanation
# Solution
# Suppose Company A has N trains and all other companies have M.
# The chance that we would observe one of Company A's trains is $N/(N+M)$.
# Given that we observe one of Company A's trains, the chance that we
# observe number 60 is $1/N$ for $N \ge 60$.
# The product of these probabilities is $1/(N+M)$, which is just the
# probability of observing any given train.
# If N<<M, this converges to a constant, which means that all value of $N$
# have the same likelihood, so we learn nothing about how many trains
# Company A has.
# If N>>M, this converges to $1/N$, which is what we saw in the previous
# solution.
# More generally, if M is unknown, we would need a prior distribution for
# M, then we can do a two-dimensional update, and then extract the posterior
# distribution for N.
# We'll see how to do that soon.
Explanation: Exercises
Exercise: To write a likelihood function for the locomotive problem, we had
to answer this question: "If the railroad has N locomotives, what
is the probability that we see number 60?"
The answer depends on what sampling process we use when we observe the
locomotive. In this chapter, I resolved the ambiguity by specifying
that there is only one train-operating company (or only one that we
care about).
But suppose instead that there are many companies with different
numbers of trains. And suppose that you are equally likely to see any
train operated by any company.
In that case, the likelihood function is different because you
are more likely to see a train operated by a large company.
As an exercise, implement the likelihood function for this variation
of the locomotive problem, and compare the results.
End of explanation
# Solution
from scipy.special import binom
class Hyrax(Suite):
    """Represents hypotheses about how many hyraxes there are."""
def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: total population (N)
        data: # tagged (K), # caught (n), # of caught who were tagged (k)
        """
N = hypo
K, n, k = data
if hypo < K + (n - k):
return 0
like = binom(N-K, n-k) / binom(N, n)
return like
# Solution
hypos = range(1, 1000)
suite = Hyrax(hypos)
data = 10, 10, 2
suite.Update(data)
# Solution
thinkplot.Pdf(suite)
thinkplot.Config(xlabel='Number of hyraxes', ylabel='PMF', legend=False)
# Solution
print('Posterior mean', suite.Mean())
print('Maximum a posteriori estimate', suite.MaximumLikelihood())
print('90% credible interval', suite.CredibleInterval(90))
# Solution
from scipy import stats
class Hyrax2(Suite):
    """Represents hypotheses about how many hyraxes there are."""
def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: total population (N)
        data: # tagged (K), # caught (n), # of caught who were tagged (k)
        """
N = hypo
K, n, k = data
if hypo < K + (n - k):
return 0
like = stats.hypergeom.pmf(k, N, K, n)
return like
# Solution
hypos = range(1, 1000)
suite = Hyrax2(hypos)
data = 10, 10, 2
suite.Update(data)
# Solution
print('Posterior mean', suite.Mean())
print('Maximum a posteriori estimate', suite.MaximumLikelihood())
print('90% credible interval', suite.CredibleInterval(90))
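# Illustrative cross-check: the combinatorial likelihood used in Hyrax is proportional to
# the hypergeometric PMF used in Hyrax2 (the constant binom(K, k) cancels when normalizing),
# so both suites yield the same posterior. Shown here for an example population N = 100.
N_check = 100
print(binom(10, 2) * binom(N_check - 10, 10 - 2) / binom(N_check, 10),
      stats.hypergeom.pmf(2, N_check, 10, 10))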
Explanation: Exercise: Suppose I capture and tag 10 rock hyraxes. Some time later, I capture another 10 hyraxes and find that two of them are already tagged. How many hyraxes are there in this environment?
As always with problems like this, we have to make some modeling assumptions.
1) For simplicity, you can assume that the environment is reasonably isolated, so the number of hyraxes does not change between observations.
2) And you can assume that each hyrax is equally likely to be captured during each phase of the experiment, regardless of whether it has been tagged. In reality, it is possible that tagged animals would avoid traps in the future, or possible that the same behavior that got them caught the first time makes them more likely to be caught again. But let's start simple.
I suggest the following notation:
N: total population of hyraxes
K: number of hyraxes tagged in the first round
n: number of hyraxes caught in the second round
k: number of hyraxes in the second round that had been tagged
So N is the hypothesis and (K, n, k) make up the data. The probability of the data, given the hypothesis, is the probability of finding k tagged hyraxes out of n if (in the population) K out of N are tagged.
If you are familiar with the hypergeometric distribution, you can use the hypergeometric PMF to compute the likelihood function. Otherwise, you can figure it out using combinatorics.
End of explanation |
3,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
http
Step1: The definite integral
$$\int_a^b f(x) dx = \lim_{n\to\infty} \sum_{i=1}^{n} f(\hat x_i) \Delta x_i$$
Step2: The indefinite (cumulative) integral
$$\int_a^x f(y) dy = \lim_{n\to\infty} \sum_{i=1}^{n} f(\hat y_i) \Delta y_i$$
Step3: Pochodne funkcji wielu zmiennych | Python Code:
import numpy as np
x = np.linspace(1,8,5)
x.shape
y = np.sin(x)
y.shape
for i in range(y.shape[0]-1):
print( (y[i+1]-y[i]),(y[i+1]-y[i])/(x[i+1]-x[i]))
y[1:]-y[:-1]
y[1:]
(y[1:]-y[:-1])/(x[1:]-x[:-1])
np.diff(y)
np.diff(x)
np.roll(y,-1)
y
np.gradient(y)
import sympy
X = sympy.Symbol('X')
expr = (sympy.sin(X**2+1*sympy.cos(sympy.exp(X)))).diff(X)
expr
f = sympy.lambdify(X,expr,"numpy")
f( np.array([1,2,3]))
import ipywidgets as widgets
from ipywidgets import interact
widgets.IntSlider?
@interact(x=widgets.IntSlider(1,2,10,1))
def g(x=1):
print(x)
Explanation: http://www.scipy-lectures.org/
End of explanation
import numpy as np
N = 10
x = np.linspace( 0,np.pi*1.23, N)
f = np.sin(x)
x,f
np.diff(x)
np.sum(f[:-1]*np.diff(x))
w = np.ones_like(x)
h = np.diff(x)[0]
w[-1] = 0
h*np.sum(w*f)
w[0] = 0.5
w[-1] = 0.5
h*np.sum(w*f)
import scipy.integrate
scipy.integrate.trapz(f, x)  # e.g. the trapezoidal rule, for comparison with the weighted sum above
Explanation: The definite integral
$$\int_a^b f(x) dx = \lim_{n\to\infty} \sum_{i=1}^{n} f(\hat x_i) \Delta x_i$$
End of explanation
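# Cross-check (illustrative): compare the hand-rolled sums above with adaptive quadrature
# and with the exact value of the integral of sin over [0, 1.23*pi], i.e. 1 - cos(1.23*pi).
from scipy.integrate import quad
approx, err = quad(np.sin, 0, np.pi * 1.23)
print(approx, 1 - np.cos(np.pi * 1.23))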
np.cumsum(f)*h
np.sum(f)*h
f.shape,np.cumsum(f).shape
Explanation: The indefinite (cumulative) integral
$$\int_a^x f(y) dy = \lim_{n\to\infty} \sum_{i=1}^{n} f(\hat y_i) \Delta y_i$$
End of explanation
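# Sketch of the cumulative ("running") integral with trapezoid increments instead of the
# left-rectangle cumsum above; purely illustrative.
increments = 0.5 * (f[:-1] + f[1:]) * np.diff(x)
F_cum = np.concatenate(([0.0], np.cumsum(increments)))  # F_cum[i] ~ integral from x[0] to x[i]
F_cum[-1]  # should be close to the definite integral computed above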
x = np.linspace(0,2,50)
y = np.linspace(0,2,50)
X,Y = np.meshgrid(x,y,indexing='xy')
X
F = np.sin(X**2 + Y)
F[1,2],X[1,2],Y[1,2]
%matplotlib inline
import matplotlib.pyplot as plt
plt.contour(X,Y,F)
plt.imshow(F,origin='lower')
np.diff(F,axis=1).shape
np.diff(F,2,axis=0).shape
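# Sketch: numerical partial derivatives of F(x, y) = sin(x**2 + y) with np.gradient.
# With indexing='xy', axis 0 of F runs over y and axis 1 over x, so passing the spacings
# in that order returns (dF/dy, dF/dx).
dx = x[1] - x[0]
dy = y[1] - y[0]
dFdy, dFdx = np.gradient(F, dy, dx)
# compare with the analytic derivative dF/dx = 2*x*cos(x**2 + y) at an interior grid point
print(dFdx[25, 25], 2 * X[25, 25] * np.cos(X[25, 25]**2 + Y[25, 25]))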
Explanation: Partial derivatives of functions of several variables
End of explanation |
3,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HEP Benchmark Queries Q1 to Q5 - CERN SWAN Version
This follows the IRIS-HEP benchmark
and the article Evaluating Query Languages and Systems for High-Energy Physics Data
and provides implementations of the benchmark tasks using Apache Spark.
The workload and data
Step1: Benchmark task
Step2: Benchmark task
Step3: Benchmark task
Step4: Benchmark task
Step6: Benchmark task | Python Code:
# Start the Spark Session
# When Using Spark on CERN SWAN, run this cell to get the Spark Session
# Note: when running SWAN for this, do not select to connect to a CERN Spark cluster
# If you want to use a cluster anyway, please copy the data to a cluster filesystem first
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName("HEP benchmark")
.master("local[*]")
.config("spark.driver.memory", "4g")
.config("spark.sql.orc.enableNestedColumnVectorizedReader", "true")
.getOrCreate()
)
# Read data for the benchmark tasks
# Further details of the available datasets at
# https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
# this works from SWAN and CERN machines with eos mounted
path = "/eos/project/s/sparkdltrigger/public/"
input_data = "Run2012B_SingleMu_sample.orc"
# use this if you downloaded the full dataset
# input_data = "Run2012B_SingleMu.orc"
df_events = spark.read.orc(path + input_data)
df_events.printSchema()
print(f"Number of events: {df_events.count()}")
Explanation: HEP Benchmark Queries Q1 to Q5 - CERN SWAN Version
This follows the IRIS-HEP benchmark
and the article Evaluating Query Languages and Systems for High-Energy Physics Data
and provides implementations of the benchmark tasks using Apache Spark.
The workload and data:
- Benchmark jobs are implemented follwing IRIS-HEP benchmark
- The input data is a series of events from CMS opendata
- The job output is typically a histogram
- See also https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
Author and contact: [email protected]
February, 2022
End of explanation
# Compute the histogram for MET_pt
# The Spark function "width_bucket" is used to generate the histogram bucket number
# a groupBy operation with count is used to fill the histogram
# The result is a histogram with bins value and counts foreach bin (N_events)
min_val = 0
max_val = 100
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
df_events
.selectExpr(f"width_bucket(MET_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
Explanation: Benchmark task: Q1
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_T$ (missing transverse energy) of all events.
End of explanation
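# Alternative sketch: Spark RDDs expose a histogram() helper, handy as a quick sanity
# check of the width_bucket/groupBy approach used above (assumes MET_pt is non-null).
edges, counts = (df_events.select("MET_pt")
                 .rdd.map(lambda row: float(row[0]))
                 .histogram(100))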
# Jet_pt contains arrays of jet measurements
df_events.select("Jet_pt").show(5,False)
# Use the explode function to extract array data into DataFrame rows
df_events_jet_pt = df_events.selectExpr("explode(Jet_pt) as Jet_pt")
df_events_jet_pt.printSchema()
df_events_jet_pt.show(10, False)
# Compute the histogram for Jet_pt
# The Spark function "width_bucket" is used to generate the histogram bucket number
# a groupBy operation with count is used to fill the histogram
# The result is a histogram with bins value and counts foreach bin (N_events)
min_val = 15
max_val = 60
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
df_events_jet_pt
.selectExpr(f"width_bucket(Jet_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$p_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $p_T$ ")
plt.show()
Explanation: Benchmark task: Q2
Plot the $𝑝_𝑇$ (transverse momentum) of all jets in all events
End of explanation
# Take Jet arrays for pt and eta and transform them to rows with explode()
df1 = df_events.selectExpr("explode(arrays_zip(Jet_pt, Jet_eta)) as Jet")
df1.printSchema()
df1.show(10, False)
# Apply a filter on Jet_eta
q3 = df1.select("Jet.Jet_pt").filter("abs(Jet.Jet_eta) < 1")
q3.show(10,False)
# Compute the histogram for Jet_pt
# The Spark function "width_bucket" is used to generate the histogram bucket number
# a groupBy operation with count is used to fill the histogram
# The result is a histogram with bins value and counts foreach bin (N_events)
min_val = 15
max_val = 60
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
q3
.selectExpr(f"width_bucket(Jet_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$p_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $p_T$ ")
plt.show()
Explanation: Benchmark task: Q3
Plot the $𝑝_𝑇$ of jets with |𝜂| < 1 (𝜂 is the jet pseudorapidity).
End of explanation
# This will use MET_pt and Jet_pt
df_events.select("MET_pt","Jet_pt").show(10,False)
# The filter is pushed inside the Jet_pt arrays
# This uses Spark's higher-order functions for array processing
q4 = df_events.select("MET_pt").where("cardinality(filter(Jet_pt, x -> x > 40)) > 1")
q4.show(5,False)
# compute the histogram for MET_pt
min_val = 0
max_val = 100
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
q4
.selectExpr(f"width_bucket(MET_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
Explanation: Benchmark task: Q4
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_𝑇$ of the events that have at least two jets with
$𝑝_𝑇$ > 40 GeV (gigaelectronvolt).
End of explanation
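# Equivalent formulation (sketch): `size` counts the jets surviving the pT cut, making the
# "at least two jets" requirement explicit; `cardinality(...) > 1` above is the same test.
q4_alt = df_events.select("MET_pt").where("size(filter(Jet_pt, x -> x > 40)) >= 2")
q4_alt.count()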
# filter the events
# select only events with 2 muons
# the 2 muons must have opposite charge
df_muons = df_events.filter("nMuon == 2").filter("Muon_charge[0] != Muon_charge[1]")
# Formula for dimuon mass in pt, eta, phi, m coordinates
# see also http://edu.itp.phys.ethz.ch/hs10/ppp1/2010_11_02.pdf
# and https://en.wikipedia.org/wiki/Invariant_mass
df_with_dimuonmass = df_muons.selectExpr("MET_pt",
    """sqrt(2 * Muon_pt[0] * Muon_pt[1] *
            ( cosh(Muon_eta[0] - Muon_eta[1]) - cos(Muon_phi[0] - Muon_phi[1]) )
       ) as Dimuon_mass""")
# apply a filter on the dimuon mass
Q5 = df_with_dimuonmass.filter("Dimuon_mass between 60 and 120")
# compute the histogram for MET_pt
min_val = 0
max_val = 100
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
Q5
.selectExpr(f"width_bucket(MET_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding MET_pt value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
spark.stop()
Explanation: Benchmark task: Q5
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_T$ of events that have an opposite-charge muon
pair with an invariant mass between 60 GeV and 120 GeV.
End of explanation |
3,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing it to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
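# Illustrative sketch (not the graded solution): the fully vectorized L2 distance matrix
# can be built from the expansion ||t - r||^2 = ||t||^2 - 2 t.r + ||r||^2 with broadcasting.
test_sq = np.sum(X_test ** 2, axis=1).reshape(-1, 1)   # (num_test, 1)
train_sq = np.sum(X_train ** 2, axis=1)                # (num_train,)
cross = X_test.dot(X_train.T)                          # (num_test, num_train)
dists_sketch = np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0))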
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: A row is bright when the test image's overall colors/brightness are unlike most of the training images; a column is bright when a training image's overall colors/brightness are unlike most of the test images.
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:
End of explanation
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.split(X_train,num_folds)
y_train_folds = np.split(y_train,num_folds)
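# Note (assumption): np.split requires num_folds to divide the data evenly; the hinted
# np.array_split also handles uneven splits, e.g.
# X_train_folds = np.array_split(X_train, num_folds)
# y_train_folds = np.array_split(y_train, num_folds)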
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
for i in range(num_folds):
X_train_data_fold = np.empty(shape=(0,X_train.shape[1]))
y_train_data_fold = np.array([])
for j in range(num_folds):
if j!=i:
X_train_data_fold = np.vstack((X_train_data_fold,X_train_folds[j]))
y_train_data_fold = np.hstack((y_train_data_fold,y_train_folds[j]))
classifier = KNearestNeighbor()
classifier.train(X_train_data_fold, y_train_data_fold)
dists = classifier.compute_distances_no_loops(X_train_folds[i])
y_test_pred = classifier.predict_labels(dists, k)
num_correct = np.sum(y_test_pred == y_train_folds[i])
        accuracy = float(num_correct) / len(y_train_folds[i])  # divide by the fold size, not num_test
if k not in k_to_accuracies:
k_to_accuracies[k] = []
k_to_accuracies[k].append(accuracy)
print ("done with ",k)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 5
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
3,577 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
making new class prediction for a classification problem
| Python Code::
from keras.models import Sequential
from keras.layers import Dense
from sklearn.datasets import make_blobs
from sklearn.preprocessing import MinMaxScaler
from numpy import array
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
scalar = MinMaxScaler()
scalar.fit(X)
X = scalar.transform(X)
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, epochs=500, verbose=0)
Xnew = array([[0.89337759, 0.65864154]])
ynew = model.predict_classes(Xnew)
|
3,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are speficied from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
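As an aside, this is a multi-valued property (cardinality 1.N). A filled-in cell might look roughly as follows; note that both the particular nutrient list and the assumption that repeated DOC.set_value calls accumulate the selections are illustrative guesses, to be checked against the model being documented and the ES-DOC notebook conventions.
# Illustrative only -- hypothetical nutrient list; repeated DOC.set_value calls
# are assumed (not verified here) to record multiple selections.
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
DOC.set_value("Nitrogen (N)")
DOC.set_value("Phosphorous (P)")
DOC.set_value("Iron (Fe)")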
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
3,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecting Changes in Sentinel-1 Imagery (Part 3)
Author
Step1: Datasets and Python modules
One dataset will be used in the tutorial
Step4: This cell carries over the chi square cumulative distribution function and the determinant of a Sentinel-1 image from Part 2.
Step6: And to make use of interactive graphics, we import the folium package
Step7: Part 3. Multitemporal change detection
Continuing from Part 2, in which we discussed bitemporal change detection with Sentinel-1 images, we turn our attention to the multitemporal case. To get started, we obviously need ...
A time series
Here is a fairly interesting one
Step8: The image collection below covers the months of September, 2019 through January, 2020 at 6-day intervals
Step10: It will turn out to be convenient to work with a list rather than a collection, so we'll convert the collection to a list and, while we're at it, clip the images to our AOI
Step11: Here is an RGB composite of the VV bands for three images in early November, after conversion to decibels. Note that some changes, especially those due to flooding, already show up in this representation as colored pixels.
Step12: Now we have a series of 26 SAR images and, for whatever reason, would like to know where and when changes have taken place. A first reaction might be
Step14: Actually things are a bit worse. The bitemporal tests are manifestly not independent because consecutive tests have one image in common. The best one can say in this situation is
$$
\alpha_T \le (k-1)\alpha, \tag{3.2}
$$
or $\alpha_T \le 25\%$ for $k=26$ and $\alpha=0.01$ . If we wish to set a false positive rate of at most, say, 1% for the entire series, then each bitemporal test must have a significance level of $\alpha=0.0004$ and a correspondingly large false negative rate $\beta$. In other words many significant changes may be missed.
How to proceed? Perhaps by being a bit less ambitious at first and asking the simpler question
Step15: Let's see if this test statistic does indeed follow the chi square distribution. First we define a small polygon aoi_sub over the Thorne Moors (on the eastern side of the AOI) for which we hope there are few significant changes.
Step16: Here is a comparison for pixels in aoi_sub with the chi square distribution with $k-1$ degrees of freedom. We choose the first 10 images in the series ($k=10$) because we expect fewer changes in September/October than over the complete sequence $k=24$, which extends into January.
Step17: It appears that Wilks' Theorem is again a fairly good approximation. So why not generate a change map for the full series? The good news is that we now have the overall false positive probability $\alpha$ under control. Here we set it to $\alpha=0.01$.
Step18: So plenty of changes, but hard to interpret considering the time span. Although we can see where changes took place, we know neither when they occurred nor their multiplicity. Also there is a matter that we have glossed over up until now, and that is ...
A question of scale
The number of looks plays an important role in all of the formulae that we have discussed so far, and for the Sentinel-1 ground range detected imagery we first used $m=5$ and now the ENL $=4.4$. When we display a change map interactively, the zoom factor determines the image pyramid level at which the GEE servers perform the required calculations and pass the result to the folium map client. If the calculations are not at the nominal scale of 10m then the number of looks is effectively larger than the ENL due to the averaging involved in constructing higher pyramid levels. The effect can be seen in the output cell above
Step20: You will notice in the output cell above that the calculation at nominal scale (the blue pixels) now takes considerably longer to complete. Also some red pixels are not completely covered by blue ones. Those changes are a spurious result of the falsified number of looks. Nevertheless for quick previewing purposes we might prefer to do without the reprojection.
A sequential omnibus test
Recalling the last remark at the end of Part 2, let's now guess the omnibus LRT for the dual polarization case. From Eq. (3.5), replacing $s_i \to|c_i|$, $\ \sum s_i \to |\sum c_i|\ $ and $k^k \to k^{2k}$, we get
$$
Q_k = \left[k^{2k}{\prod_i |c_i|\over |\sum_i c_i|^k}\right]^m. \tag{3.7}
$$
This is in fact a special case of a more general omnibus test statistic
$$
Q_k = \left[k^{pk}{\prod_i |c_i|\over |\sum_i c_i|^k}\right]^m
$$
which holds for $p\times p$ polarimetric covariance matrix images, for example for the full dual pol matrix Eq. (1.5) or for full $3\times 3$ quad pol matrices ($p=3$), but also for diagonal $2\times 2$ and $3\times 3$ matrices.
Which brings us to the heart of this Tutorial. We will now decompose Eq. (3.7) into a product of independent likelihood ratio tests which will enable us to determine when changes occurred at each pixel location. Then we'll code a complete multitemporal change detection algorithm on the GEE Python API.
Single polarization
Rather than make a formal derivation, we will illustrate the decomposition on a series of $k=5$ single polarization (VV) measurements. The omnibus test Eq. (3.5) for any change over the series from $t_1$ to $t_5$ is
$$
Q_5 = \left[ 5^5 {s_1s_2s_3s_4s_5\over (s_1+s_2+s_3+s_4+s_5)^5}\right]^m.
$$
If we accept the null hypothesis $a_1=a_2=a_3=a_4=a_5$ we're done and can move on to the next pixel (figuratively of course, since this stuff is all done in parallel). But suppose we have rejected the null hypothesis, i.e., there was a least one significant change. In order to find it (or them), we begin by testing the first of the four intervals. That's just the bitemporal test from Part 2, but let's call it $R_2$ rather than $Q_2$,
$$
R_2 = \left[ 2^2 {s_1s_2\over (s_1+s_2)^2}\right]^m.
$$
Suppose we conclude no change, that is, $a_1=a_2$. Now we don't do just another bitemporal test on the second interval. Instead we test the hypothesis
$$
\begin{align}
H_0
Step27: The off-diagonal elements are mostly small. The not-so-small values can be attributed to sampling error or to the presence of some change pixels in the samples.
Dual polarization and an algorithm
With our substitution trick, we can now write down the sequential test for the dual polarization (bivariate) image time series. From Eq. (3.8) we get
$$
Q_k = \prod_{j=2}^k R_j , \quad R_j = \left[{j^{2j}\over (j-1)^{2(j-1)}}{|c_1+\dots +c_{j-1}|^{j-1}|c_j|\over |c_1+\dots +c_j|^j}\right]^m,\quad j = 2\dots k. \tag{3.9}
$$
And of course we have again to use Wilks' Theorem to get the P values, so we work with
$$
-2\log{R_j} = -2m\Big[2(j\log{j}-(j-1)\log(j-1)+(j-1)\log\Big|\sum_{i=1}^{j-1}c_i \Big|+\log|c_j|-j\log\Big|\sum_{i=1}^j c_i\Big|\ \Big] \tag{3.10a}
$$
and
$$
-2\log Q_k = \sum_{j=2}^k -2\log R_j. \tag{3.10b}
$$
The statistic $-2\log R_j$ is approximately chi square distributed with two degrees of freedom. Similarly $-2\log Q_k$ is approximately chi square distributed with $2(k-1)$ degrees of freedom. Readers should satisfy themselves that these numbers are indeed the correct, taking into account that each measurement $c_i$ has two free parameters $|S^a_{vv}|^2$ and $|S^b_{vh}|^2$, see Eq. (2.13).
Now for the algorithm
Step30: Filtering the P values
|Table 3.2 | | | | | | |
|----------|-------|-------|-------|-------|-------|--------|
|$i\ $ / $j$| | 1 | 2 | 3 | 4 | |
| 1 | | $P_2$ | $P_3$ | $P_4$ | $P_5$ | $P_{Q5}$ |
| 2 | | | $P_2$ | $P_3$ | $P_4$ | $P_{Q4}$ |
| 3 | | | | $P_2$ | $P_3$ | $P_{Q3}$ |
| 4 | | | | | $P_2$ | $P_{Q2}$ |
The pre-calculated P values in pv_arr (shown schematically in Table 3.2 for $k=5$) are then scanned in nested iterations over indices $i$ and $j$ to determine the following thematic change maps
Step32: The following function ties the two steps together
Step33: And now we run the algorithm and display the color-coded change maps
Step35: Post-processing
Step37: We only have to modify the change_maps function to include the change direction in the bmap image
Step38: Because of the long delays when the zoom level is changed, it is a lot more convenient to export the change maps to GEE Assets and then examine them, either here in Colab or in the Code Editor. This also means the maps will be shown at the correct scale, irrespective of the zoom level. Here I export all of the change maps as a single image.
Step39: The asset cmaps is shared so we can all access it | Python Code:
import ee
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
Explanation: Detecting Changes in Sentinel-1 Imagery (Part 3)
Author: mortcanty
Run me first
Run the following cell to initialize the API. The output will contain instructions on how to grant this notebook access to Earth Engine using your account.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import norm, gamma, f, chi2
import IPython.display as disp
%matplotlib inline
Explanation: Datasets and Python modules
One dataset will be used in the tutorial:
COPERNICUS/S1_GRD_FLOAT
Sentinel-1 ground range detected images
The following cell imports some python modules which we will be using as we go along and enables inline graphics.
End of explanation
def chi2cdf(chi2, df):
Calculates Chi square cumulative distribution function for
df degrees of freedom using the built-in incomplete gamma
function gammainc().
return ee.Image(chi2.divide(2)).gammainc(ee.Number(df).divide(2))
def det(im):
Calculates determinant of 2x2 diagonal covariance matrix.
return im.expression('b(0)*b(1)')
Explanation: This cell carries over the chi square cumulative distribution function and the determinant of a Sentinel-1 image from Part 2.
End of explanation
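As a side note, for quick scalar sanity checks the same cumulative distribution is available from scipy.stats, imported above; this is only an illustration and plays no role in the image calculations, which use the Earth Engine version defined in chi2cdf.
# Scalar check: the 1% point of the chi-square distribution with 2 degrees of freedom.
print(1 - chi2.cdf(9.21, 2))  # approximately 0.01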
import folium
def add_ee_layer(self, ee_image_object, vis_params, name):
Adds Earth Engine layers to a folium map.
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles = map_id_dict['tile_fetcher'].url_format,
attr = 'Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
name = name,
overlay = True,
control = True).add_to(self)
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
Explanation: And to make use of interactive graphics, we import the folium package:
End of explanation
geoJSON = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
-1.2998199462890625,
53.48028242228504
],
[
-0.841827392578125,
53.48028242228504
],
[
-0.841827392578125,
53.6958933974518
],
[
-1.2998199462890625,
53.6958933974518
],
[
-1.2998199462890625,
53.48028242228504
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi = ee.Geometry.Polygon(coords)
Explanation: Part 3. Multitemporal change detection
Continuing from Part 2, in which we discussed bitemporal change detection with Sentinel-1 images, we turn our attention to the multitemporal case. To get started, we obviously need ...
A time series
Here is a fairly interesting one: a region in South Yorkshire, England where, in November 2019, extensive flooding occurred along the River Don just north of the city of Doncaster.
End of explanation
im_coll = (ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')
.filterBounds(aoi)
.filterDate(ee.Date('2019-09-01'),ee.Date('2020-01-31'))
.filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'))
.filter(ee.Filter.eq('relativeOrbitNumber_start', 154))
.map(lambda img: img.set('date', ee.Date(img.date()).format('YYYYMMdd')))
.sort('date'))
timestamplist = (im_coll.aggregate_array('date')
.map(lambda d: ee.String('T').cat(ee.String(d)))
.getInfo())
timestamplist
Explanation: The image collection below covers the months of September, 2019 through January, 2020 at 6-day intervals:
End of explanation
def clip_img(img):
Clips a list of images.
return ee.Image(img).clip(aoi)
im_list = im_coll.toList(im_coll.size())
im_list = ee.List(im_list.map(clip_img))
im_list.length().getInfo()
Explanation: It will turn out to be convenient to work with a list rather than a collection, so we'll convert the collection to a list and, while we're at it, clip the images to our AOI:
End of explanation
def selectvv(current):
return ee.Image(current).select('VV')
vv_list = im_list.map(selectvv)
location = aoi.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(location=location, zoom_start=11)
rgb_images = (ee.Image.rgb(vv_list.get(10), vv_list.get(11), vv_list.get(12))
.log10().multiply(10))
mp.add_ee_layer(rgb_images, {'min': -20,'max': 0}, 'rgb composite')
mp.add_child(folium.LayerControl())
Explanation: Here is an RGB composite of the VV bands for three images in early November, after conversion to decibels. Note that some changes, especially those due to flooding, already show up in this representation as colored pixels.
End of explanation
alpha = 0.01
1-(1-alpha)**25
Explanation: Now we have a series of 26 SAR images and, for whatever reason, would like to know where and when changes have taken place. A first reaction might be:
What's the problem? Just apply the bitemporal method we developed in Part 2 to each of the 25 time intervals.
Well, one problem is the rate of false positives. If the bitemporal tests are statistically independent, then the probability of not getting a false positive over a series of length $k$ is the product of not getting one in each of the $k-1$ intervals, i.e., $(1-\alpha)^{k-1}$ and the overall first kind error probability $\alpha_T$ is its complement:
$$
\alpha_T = 1-(1-\alpha)^{k-1}. \tag{3.1}
$$
For our case, even with a small value of $\alpha=0.01$, this gives a whopping 22.2% false positive rate:
End of explanation
def omnibus(im_list, m = 4.4):
Calculates the omnibus test statistic, monovariate case.
def log(current):
return ee.Image(current).log()
im_list = ee.List(im_list)
k = im_list.length()
klogk = k.multiply(k.log())
klogk = ee.Image.constant(klogk)
sumlogs = ee.ImageCollection(im_list.map(log)).reduce(ee.Reducer.sum())
logsum = ee.ImageCollection(im_list).reduce(ee.Reducer.sum()).log()
return klogk.add(sumlogs).subtract(logsum.multiply(k)).multiply(-2*m)
Explanation: Actually things are a bit worse. The bitemporal tests are manifestly not independent because consecutive tests have one image in common. The best one can say in this situation is
$$
\alpha_T \le (k-1)\alpha, \tag{3.2}
$$
or $\alpha_T \le 25\%$ for $k=26$ and $\alpha=0.01$ . If we wish to set a false positive rate of at most, say, 1% for the entire series, then each bitemporal test must have a significance level of $\alpha=0.0004$ and a correspondingly large false negative rate $\beta$. In other words many significant changes may be missed.
How to proceed? Perhaps by being a bit less ambitious at first and asking the simpler question: Were there any changes at all over the interval? If the answer is affirmative, we can worry about how many there were and when they occurred later. Let's formulate this question as ...
An omnibus test for change
We'll start again with the easier single polarization case. For the series of VV intensity images acquired at times $t_1, t_2,\dots t_k$, our null hypothesis is that, at a given pixel position, there has been no change in the signal strengths $a_i=\langle|S^{a_i}_{vv}|^2\rangle$ over the entire period, i.e.,
$$
H_0:\quad a_1 = a_2 = \dots = a_k = a.
$$
The alternative hypothesis is that there was at least one change (and possibly many) over the interval. For the more mathematically inclined this can be written succinctly as
$$
H_1:\quad \exists\ i,j :\ a_i \ne a_j,
$$
which says: there exist indices $i, j$ for which $a_i$ is not equal to $a_j$.
Again, the likelihood functions are products of gamma distributions:
$$
L_1(a_1,\dots,a_k) =\prod_{i=1}^k p(s_i\mid a_i) = {1\over\Gamma(m)^k}\left[\prod_i{a_i\over m}\right]^{-m}\left[\prod_i s_i\right]^{m-1}\exp(-m\sum_i{s_i\over a_i}) \tag{3.3}
$$
$$
L_0(a) = \prod_{i=1}^k p(s_i\mid a) = {1\over\Gamma(m)^k} \left[{a\over m}\right]^{-mk}\left[\prod_i s_i\right]^{m-1}\exp(-{m\over a}\sum_i s_i) \tag{3.4}
$$
and $L_1$ is maximized for $\hat a_i = s_i,\ i=1\dots k,$ while $L_0$ is maximized for $\hat a = {1\over k}\sum_i s_i$. So with a bit of simple algebra our likelihood ratio test statistic is
$$
Q_k = {L_0(\hat a)\over L_1(\hat a_1,\dots,\hat a_k)} = \left[k^k{\prod_i s_i\over (\sum_i s_i)^k}\right]^m \tag{3.5}
$$
and is called an omnibus test statistic. Note that, for $k=2$, we get the bitemporal LRT given by Eq. (2.10).
We can't expect to find an analytical expression for the probability distribution of this LRT statistic, so we will again invoke Wilks' Theorem and work with
$$
-2 \log{Q_k} = -2m\big[k\log{k}+\sum_i\log{s_i}-k\log{\sum_i s_i}\big] \tag{3.6}
$$
According to Wilks, it should be approximately chi square distributed with $k-1$ degrees of freedom under $H_0$. (Why?)
The input cell below evaluates the test statistic Eq. (3.6) for a list of single polarization images. We prefer from now on to use as default the equivalent number of looks 4.4 that we discussed at the end of Part 1 rather than the actual number of looks $m=5$, in the hope of getting a better agreement.
End of explanation
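Before moving on, a quick numerical check of the per-test significance level quoted above (this small cell is an illustrative aside, not part of the original processing chain):
# Bound (3.2): to keep the overall false positive rate of the k-1 = 25 dependent
# bitemporal tests at or below 1%, each individual test must be run at
k = 26
alpha_T = 0.01
print(alpha_T/(k - 1))  # 0.0004, the per-test significance level mentioned above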
geoJSON = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
-0.9207916259765625,
53.63649628489509
],
[
-0.9225082397460938,
53.62550271303527
],
[
-0.8892059326171875,
53.61022911107819
],
[
-0.8737564086914062,
53.627538775780984
],
[
-0.9207916259765625,
53.63649628489509
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi_sub = ee.Geometry.Polygon(coords)
location = aoi.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(location=location, zoom_start=11)
mp.add_ee_layer(rgb_images.clip(aoi_sub), {'min': -20, 'max': 0}, 'aoi_sub rgb composite')
mp.add_child(folium.LayerControl())
Explanation: Let's see if this test statistic does indeed follow the chi square distribution. First we define a small polygon aoi_sub over the Thorne Moors (on the eastern side of the AOI) for which we hope there are few significant changes.
End of explanation
k = 10
hist = (omnibus(vv_list.slice(0,k))
.reduceRegion(ee.Reducer.fixedHistogram(0, 40, 200), geometry=aoi_sub, scale=10)
.get('constant')
.getInfo())
a = np.array(hist)
x = a[:,0]
y = a[:,1]/np.sum(a[:,1])
plt.plot(x, y, '.', label='data')
plt.plot(x, chi2.pdf(x, k-1)/5, '-r', label='chi square')
plt.legend()
plt.grid()
plt.show()
Explanation: Here is a comparison for pixels in aoi_sub with the chi square distribution with $k-1$ degrees of freedom. We choose the first 10 images in the series ($k=10$) because we expect fewer changes in September/October than over the complete sequence $k=24$, which extends into January.
End of explanation
# The change map for alpha = 0.01.
k = 26; alpha = 0.01
p_value = ee.Image.constant(1).subtract(chi2cdf(omnibus(vv_list), k-1))
c_map = p_value.multiply(0).where(p_value.lt(alpha), 1)
# Make the no-change pixels transparent.
c_map = c_map.updateMask(c_map.gt(0))
# Overlay onto the folium map.
location = aoi.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(location=location, zoom_start=11)
mp.add_ee_layer(c_map, {'min': 0,'max': 1, 'palette': ['black', 'red']}, 'change map')
mp.add_child(folium.LayerControl())
Explanation: It appears that Wilks' Theorem is again a fairly good approximation. So why not generate a change map for the full series? The good news is that we now have the overall false positive probability $\alpha$ under control. Here we set it to $\alpha=0.01$.
End of explanation
c_map_10m = c_map.reproject(c_map.projection().crs(), scale=10)
mp = folium.Map(location=location, zoom_start=11)
mp.add_ee_layer(c_map, {'min': 0, 'max': 1, 'palette': ['black', 'red']}, 'Change map')
mp.add_ee_layer(c_map_10m, {'min': 0, 'max': 1, 'palette': ['black', 'blue']}, 'Change map (10m)')
mp.add_child(folium.LayerControl())
Explanation: So plenty of changes, but hard to interpret considering the time span. Although we can see where changes took place, we know neither when they occurred nor their multiplicity. Also there is a matter that we have glossed over up until now, and that is ...
A question of scale
The number of looks plays an important role in all of the formulae that we have discussed so far, and for the Sentinel-1 ground range detected imagery we first used $m=5$ and now the ENL $=4.4$. When we display a change map interactively, the zoom factor determines the image pyramid level at which the GEE servers perform the required calculations and pass the result to the folium map client. If the calculations are not at the nominal scale of 10m then the number of looks is effectively larger than the ENL due to the averaging involved in constructing higher pyramid levels. The effect can be seen in the output cell above: the number of change pixels seems to decrease when we zoom out. There is no problem when we export our results to GEE assets, to Google Drive or to Cloud storage, since we can simply choose the correct nominal scale for export.
In order to see the changes correctly at all zoom levels, we can force GEE to work at the nominal scale by reprojecting before displaying on the map (use with caution):
End of explanation
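To get a rough feeling for the size of this effect, here is a back-of-the-envelope calculation (an illustrative aside; it assumes mean pyramiding with 2x2 aggregation per level and statistically independent pixels, so it somewhat overstates the true effective number of looks):
# Approximate effective number of looks (ENL) at successive pyramid levels,
# assuming each level averages 2x2 pixels of the level below.
enl = 4.4
for level in range(5):
    print('level %i: scale ~%i m, ENL ~%.0f' % (level, 10*2**level, enl*4**level))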
def sample_vv_imgs(j):
Samples the test statistics Rj in the region aoi_sub.
j = ee.Number(j)
# Get the factors in the expression for Rj.
sj = vv_list.get(j.subtract(1))
jfact = j.pow(j).divide(j.subtract(1).pow(j.subtract(1)))
sumj = ee.ImageCollection(vv_list.slice(0, j)).reduce(ee.Reducer.sum())
sumjm1 = ee.ImageCollection(vv_list.slice(0, j.subtract(1))).reduce(ee.Reducer.sum())
# Put them together.
Rj = sumjm1.pow(j.subtract(1)).multiply(sj).multiply(jfact).divide(sumj.pow(j)).pow(5)
# Sample Rj.
sample = (Rj.sample(region=aoi_sub, scale=10, numPixels=1000, seed=123)
.aggregate_array('VV_sum'))
return sample
# Sample the first few list indices.
samples = ee.List.sequence(2, 8).map(sample_vv_imgs)
# Calculate and display the correlation matrix.
np.set_printoptions(precision=2, suppress=True)
print(np.corrcoef(samples.getInfo()))
Explanation: You will notice in the output cell above that the calculation at nominal scale (the blue pixels) now takes considerably longer to complete. Also some red pixels are not completely covered by blue ones. Those changes are a spurious result of the distorted effective number of looks at the coarser pyramid levels. Nevertheless for quick previewing purposes we might prefer to do without the reprojection.
A sequential omnibus test
Recalling the last remark at the end of Part 2, let's now guess the omnibus LRT for the dual polarization case. From Eq. (3.5), replacing $s_i \to|c_i|$, $\ \sum s_i \to |\sum c_i|\ $ and $k^k \to k^{2k}$, we get
$$
Q_k = \left[k^{2k}{\prod_i |c_i|\over |\sum_i c_i|^k}\right]^m. \tag{3.7}
$$
This is in fact a special case of a more general omnibus test statistic
$$
Q_k = \left[k^{pk}{\prod_i |c_i|\over |\sum_i c_i|^k}\right]^m
$$
which holds for $p\times p$ polarimetric covariance matrix images, for example for the full dual pol matrix Eq. (1.5) or for full $3\times 3$ quad pol matrices ($p=3$), but also for diagonal $2\times 2$ and $3\times 3$ matrices.
Which brings us to the heart of this Tutorial. We will now decompose Eq. (3.7) into a product of independent likelihood ratio tests which will enable us to determine when changes occurred at each pixel location. Then we'll code a complete multitemporal change detection algorithm on the GEE Python API.
Single polarization
Rather than make a formal derivation, we will illustrate the decomposition on a series of $k=5$ single polarization (VV) measurements. The omnibus test Eq. (3.5) for any change over the series from $t_1$ to $t_5$ is
$$
Q_5 = \left[ 5^5 {s_1s_2s_3s_4s_5\over (s_1+s_2+s_3+s_4+s_5)^5}\right]^m.
$$
If we accept the null hypothesis $a_1=a_2=a_3=a_4=a_5$ we're done and can move on to the next pixel (figuratively of course, since this stuff is all done in parallel). But suppose we have rejected the null hypothesis, i.e., there was a least one significant change. In order to find it (or them), we begin by testing the first of the four intervals. That's just the bitemporal test from Part 2, but let's call it $R_2$ rather than $Q_2$,
$$
R_2 = \left[ 2^2 {s_1s_2\over (s_1+s_2)^2}\right]^m.
$$
Suppose we conclude no change, that is, $a_1=a_2$. Now we don't do just another bitemporal test on the second interval. Instead we test the hypothesis
$$
\begin{align}
H_0:\ & a_1=a_2= a_3\ (=a)\cr
{\rm against}\quad H_1:\ &a_1=a_2\ (=a) \ne a_3.
\end{align}
$$
So the alternative hypothesis is: There was no change in the first interval and there was a change in the second interval. The LRT is easy to derive, but let's go through it anyway.
$$
\begin{align}
{\rm From\ Eq.}\ (3.4):\ &L_0(a) = {1\over\Gamma(m)^3} \left[{a\over m}\right]^{-3m}\left[s_1s_2s_3\right]^{m-1}\exp(-{m\over a}(s_1+s_2+s_3)) \cr
&\hat a = {1\over 3}(s_1+s_2+s_3) \cr
=>\ &L_0(\hat a) = {1\over\Gamma(m)^3} \left[{s_1+s_2+s_3\over 3m}\right]^{-3m}\left[s_1s_2s_3\right]^{m-1} \exp(-3m) \cr
{\rm From\ Eq.}\ (3.3):\ &L_1(a_1,a_2,a_3) = {1\over\Gamma(m)^3}\left[a_1a_2a_3\over m\right]^{-m}[s_1s_2s_3]^{m-1}\exp(-m(s_1/a_1+s_2/a_2+s_3/a_3))\cr
&\hat a_1 = \hat a_2 = {1\over 2}(s_1+s_2),\quad \hat a_3 = s_3 \cr
=>\ &L_1(\hat a_1,\hat a_2, \hat a_3) = {1\over\Gamma(m)^3}\left[(s_1+s_2)^2s_3\over 2^2m \right]^{-m}[s_1s_2s_3]^{m-1}\exp(-3m)
\end{align}
$$
And, taking the ratio $L_0/L_1$of the maximum likelihoods,
$$
R_3 = \left[{3^3\over 2^2}{(s_1+s_2)^2s_3\over (s_1+s_2+s_3)^3}\right]^m.
$$
Not too hard to guess that, if we accept $H_0$ again, we go on to test
$$
\begin{align}
H_0:\ a_1=a_2=a_3=a_4\ (=a)\cr
{\rm against}\quad H_1:\ a_1=a_2=a_3\ (=a) \ne a_4.
\end{align}
$$
with LRT statistic
$$
R_4 = \left[{4^4\over 3^3}{(s_1+s_2+s_3)^3s_4\over (s_1+s_2+s_3+s_4)^4}\right]^m,
$$
and so on to $R_5$ and the end of the time series.
Now for the cool part (try it out yourself):
$$
R_2\times R_3\times R_4 \times R_5 = Q_5.
$$
So, generalizing to a series of length $k$:
The omnibus test statistic $Q_k$ may be factored into the product of LRT's $R_j$ which test for homogeneity in the measured reflectance signal up to and including time $t_j$, assuming homogeneity up to time $t_{j-1}$:
$$
Q_k = \prod_{j=2}^k R_j, \quad R_j = \left[{j^j\over (j-1)^{j-1}}{(s_1+\dots +s_{j-1})^{j-1}s_j\over (s_1+\dots +s_j)^j}\right]^m,\quad j = 2\dots k. \tag{3.8}
$$
Moreover the test statistics $R_j$ are stochastically independent under $H_0$.
This can be shown analytically, see Conradsen et al. (2016) or P. 405 in my textbook, but we'll show it here empirically by sampling the test statistics $R_j$ in the region aoi_sub and examining the correlation matrix.
End of explanation
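The factorization can also be checked numerically with ordinary numpy arrays (an illustrative aside, independent of the Earth Engine code): drawing five gamma-distributed 'measurements', the product of the $R_j$ reproduces $Q_5$ up to floating point error.
import numpy as np
rng = np.random.default_rng(0)
m = 4.4
s = rng.gamma(shape=m, scale=1/m, size=5)   # five simulated intensity measurements
k = len(s)
Qk = (k**k*np.prod(s)/np.sum(s)**k)**m
R = [(j**j/(j - 1)**(j - 1)*np.sum(s[:j - 1])**(j - 1)*s[j - 1]/np.sum(s[:j])**j)**m
     for j in range(2, k + 1)]
print(np.prod(R), Qk)                       # the two values agree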
def log_det_sum(im_list, j):
Returns log of determinant of the sum of the first j images in im_list.
im_list = ee.List(im_list)
sumj = ee.ImageCollection(im_list.slice(0, j)).reduce(ee.Reducer.sum())
return ee.Image(det(sumj)).log()
def log_det(im_list, j):
Returns log of the determinant of the jth image in im_list.
im = ee.Image(ee.List(im_list).get(j.subtract(1)))
return ee.Image(det(im)).log()
def pval(im_list, j, m=4.4):
Calculates -2logRj for im_list and returns P value and -2logRj.
im_list = ee.List(im_list)
j = ee.Number(j)
m2logRj = (log_det_sum(im_list, j.subtract(1))
.multiply(j.subtract(1))
.add(log_det(im_list, j))
.add(ee.Number(2).multiply(j).multiply(j.log()))
.subtract(ee.Number(2).multiply(j.subtract(1))
.multiply(j.subtract(1).log()))
.subtract(log_det_sum(im_list,j).multiply(j))
.multiply(-2).multiply(m))
pv = ee.Image.constant(1).subtract(chi2cdf(m2logRj, 2))
return (pv, m2logRj)
def p_values(im_list):
Pre-calculates the P-value array for a list of images.
im_list = ee.List(im_list)
k = im_list.length()
def ells_map(ell):
Arranges calculation of pval for combinations of k and j.
ell = ee.Number(ell)
# Slice the series from k-l+1 to k (image indices start from 0).
im_list_ell = im_list.slice(k.subtract(ell), k)
def js_map(j):
Applies pval calculation for combinations of k and j.
j = ee.Number(j)
pv1, m2logRj1 = pval(im_list_ell, j)
return ee.Feature(None, {'pv': pv1, 'm2logRj': m2logRj1})
# Map over j=2,3,...,l.
js = ee.List.sequence(2, ell)
pv_m2logRj = ee.FeatureCollection(js.map(js_map))
# Calculate m2logQl from collection of m2logRj images.
m2logQl = ee.ImageCollection(pv_m2logRj.aggregate_array('m2logRj')).sum()
pvQl = ee.Image.constant(1).subtract(chi2cdf(m2logQl, ell.subtract(1).multiply(2)))
pvs = ee.List(pv_m2logRj.aggregate_array('pv')).add(pvQl)
return pvs
# Map over l = k to 2.
ells = ee.List.sequence(k, 2, -1)
pv_arr = ells.map(ells_map)
# Return the P value array ell = k,...,2, j = 2,...,l.
return pv_arr
Explanation: The off-diagonal elements are mostly small. The not-so-small values can be attributed to sampling error or to the presence of some change pixels in the samples.
Dual polarization and an algorithm
With our substitution trick, we can now write down the sequential test for the dual polarization (bivariate) image time series. From Eq. (3.8) we get
$$
Q_k = \prod_{j=2}^k R_j , \quad R_j = \left[{j^{2j}\over (j-1)^{2(j-1)}}{|c_1+\dots +c_{j-1}|^{j-1}|c_j|\over |c_1+\dots +c_j|^j}\right]^m,\quad j = 2\dots k. \tag{3.9}
$$
And of course we have again to use Wilks' Theorem to get the P values, so we work with
$$
-2\log{R_j} = -2m\Big[2\big(j\log{j}-(j-1)\log(j-1)\big)+(j-1)\log\Big|\sum_{i=1}^{j-1}c_i \Big|+\log|c_j|-j\log\Big|\sum_{i=1}^j c_i\Big|\ \Big] \tag{3.10a}
$$
and
$$
-2\log Q_k = \sum_{j=2}^k -2\log R_j. \tag{3.10b}
$$
The statistic $-2\log R_j$ is approximately chi square distributed with two degrees of freedom. Similarly $-2\log Q_k$ is approximately chi square distributed with $2(k-1)$ degrees of freedom. Readers should satisfy themselves that these numbers are indeed correct, taking into account that each measurement $c_i$ has two free parameters $|S^a_{vv}|^2$ and $|S^b_{vh}|^2$, see Eq. (2.13).
Now for the algorithm:
The sequential omnibus change detection algorithm
With a time series of $k$ SAR images $(c_1,c_2,\dots,c_k)$,
Set $\ell = k$.
Set $s = (c_{k-\ell+1}, \dots c_k)$.
Perform the omnibus test $Q_\ell$ for any changes change over $s$.
If no significant changes are found, stop.
Successively test series $s$ with $R_2, R_3, \dots$ until the first significant change is met for $R_j$.
Set $\ell = k-j+1$ and go to 2.
|Table 3.1 | | | | | | |
|----------|-------|-------|-------|-------|-------|--------|
| $\ell$ | $c_1$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | |
| 5 | | $R^5_2$ | $R^5_3$ | $R^5_4$ | $R^5_5$ | $Q_5$ |
| 4 | | | $R^4_2$ | $R^4_3$ | $R^4_4$ | $Q_4$ |
| 3 | | | | $R^3_2$ | $R^3_3$ | $Q_3$ |
| 2 | | | | | $R^2_2$ | $Q_2$ |
Thus if a change is found, the series is truncated up to the point of change and the testing procedure is repeated for the rest of the series. Take for example a series of $k=5$ images. (See Table 3.1 where, to avoid ambiguity, we add superscript $\ell$ to each $R_j$ test). Suppose there is one change in the second interval only. Then the test sequence is (the asterisk means $H_0$ is rejected)
$$
Q^*_5 \to R^5_2 \to R^{5*}_3 \to Q_3.
$$
If there are changes in the second and last intervals,
$$
Q^*_5 \to R^5_2 \to R^{5*}_3 \to Q^*_3 \to R^3_2 \to R^{3*}_3,
$$
and if there are significant changes in all four intervals,
$$
Q^*_5 \to R^{5*}_2 \to Q^*_4 \to R^{4*}_2 \to Q^*_3 \to R^{3*}_2 \to Q^*_2.
$$
The approach taken in the coding of this algorithm is to pre-calculate P values for all of the $Q_\ell / R_j$ tests and then, in a second pass, to filter them to determine the points of change.
Pre-calculating the P value array
The following code cell performs map operations on the indices $\ell$ and $j$, returning an array of P values for all possible LRT statistics. For example again for $k=5$, the code calculates the P values for each $R_j$ entry in Table 3.1 as a list of lists. Before calculating each row, the time series $c_1, c_2,c_3,c_4, c_5$ is sliced from $k-\ell+1$ to $k$. The last entry in each row is simply the product of the other entries, $Q_\ell =\prod_{j=2}^\ell R_j.$
The program actually operates on the logarithms of the test statistics, Equations (3.10).
End of explanation
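Before looking at the implementation, the branching logic of steps 1 to 6 can be sketched in pure Python (an aside added for clarity: the 'tests' here are driven by an invented set of true change intervals rather than by real statistics, and the GEE code below does not work this way, since it pre-calculates all P values and filters them in a second pass):
def sequential_scan(k, changed_intervals):
    # changed_intervals: indices (1 ... k-1) of the intervals that actually changed
    detected = []
    start = 1                                  # current series begins at image c_start
    while True:
        ell = k - start + 1                    # length of the remaining series
        # omnibus test Q_ell: stop if no change is left in the remaining series
        if ell < 2 or not any(start <= i <= start + ell - 2 for i in changed_intervals):
            return detected
        # scan R_2, R_3, ... until the first significant change is met
        for j in range(2, ell + 1):
            interval = start + j - 2           # interval between images start+j-2 and start+j-1
            if interval in changed_intervals:
                detected.append(interval)
                start += j - 1                 # truncate the series at the change point
                break

print(sequential_scan(5, {2}))           # [2]
print(sequential_scan(5, {2, 4}))        # [2, 4]
print(sequential_scan(5, {1, 2, 3, 4}))  # [1, 2, 3, 4]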
def filter_j(current, prev):
Calculates change maps; iterates over j indices of pv_arr.
pv = ee.Image(current)
prev = ee.Dictionary(prev)
pvQ = ee.Image(prev.get('pvQ'))
i = ee.Number(prev.get('i'))
cmap = ee.Image(prev.get('cmap'))
smap = ee.Image(prev.get('smap'))
fmap = ee.Image(prev.get('fmap'))
bmap = ee.Image(prev.get('bmap'))
alpha = ee.Image(prev.get('alpha'))
j = ee.Number(prev.get('j'))
cmapj = cmap.multiply(0).add(i.add(j).subtract(1))
# Check Rj? Ql? Row i?
tst = pv.lt(alpha).And(pvQ.lt(alpha)).And(cmap.eq(i.subtract(1)))
# Then update cmap...
cmap = cmap.where(tst, cmapj)
# ...and fmap...
fmap = fmap.where(tst, fmap.add(1))
# ...and smap only if in first row.
smap = ee.Algorithms.If(i.eq(1), smap.where(tst, cmapj), smap)
# Create bmap band and add it to bmap image.
idx = i.add(j).subtract(2)
tmp = bmap.select(idx)
bname = bmap.bandNames().get(idx)
tmp = tmp.where(tst, 1)
tmp = tmp.rename([bname])
bmap = bmap.addBands(tmp, [bname], True)
return ee.Dictionary({'i': i, 'j': j.add(1), 'alpha': alpha, 'pvQ': pvQ,
'cmap': cmap, 'smap': smap, 'fmap': fmap, 'bmap':bmap})
def filter_i(current, prev):
Arranges calculation of change maps; iterates over row-indices of pv_arr.
current = ee.List(current)
pvs = current.slice(0, -1 )
pvQ = ee.Image(current.get(-1))
prev = ee.Dictionary(prev)
i = ee.Number(prev.get('i'))
alpha = ee.Image(prev.get('alpha'))
median = prev.get('median')
# Filter Ql p value if desired.
pvQ = ee.Algorithms.If(median, pvQ.focalMedian(2.5), pvQ)
cmap = prev.get('cmap')
smap = prev.get('smap')
fmap = prev.get('fmap')
bmap = prev.get('bmap')
first = ee.Dictionary({'i': i, 'j': 1, 'alpha': alpha ,'pvQ': pvQ,
'cmap': cmap, 'smap': smap, 'fmap': fmap, 'bmap': bmap})
result = ee.Dictionary(ee.List(pvs).iterate(filter_j, first))
return ee.Dictionary({'i': i.add(1), 'alpha': alpha, 'median': median,
'cmap': result.get('cmap'), 'smap': result.get('smap'),
'fmap': result.get('fmap'), 'bmap': result.get('bmap')})
Explanation: Filtering the P values
|Table 3.2 | | | | | | |
|----------|-------|-------|-------|-------|-------|--------|
|$i\ $ / $j$| | 1 | 2 | 3 | 4 | |
| 1 | | $P_2$ | $P_3$ | $P_4$ | $P_5$ | $P_{Q5}$ |
| 2 | | | $P_2$ | $P_3$ | $P_4$ | $P_{Q4}$ |
| 3 | | | | $P_2$ | $P_3$ | $P_{Q3}$ |
| 4 | | | | | $P_2$ | $P_{Q2}$ |
The pre-calculated P values in pv_arr (shown schematically in Table 3.2 for $k=5$) are then scanned in nested iterations over indices $i$ and $j$ to determine the following thematic change maps:
cmap: the interval of the most recent change, one band, byte values $\in [0,k-1]$,
smap: the interval of the first change, one band, byte values $\in [0,k-1]$,
fmap: the number of changes, one band, byte values $\in [0,k-1]$,
bmap: the changes in each interval, $\ k-1$ bands, byte values $\in [0,1]$).
A boolean variable median is included in the code. Its purpose is to reduce the salt-and-pepper effect in no-change regions, which is at least partly a consequence of the uniform distribution of the P values under $H_0$ (see the section A note on P values in Part 2). If median is True, the P values for each $Q_\ell$ statistic are passed through a $5\times 5$ median filter before being compared with the significance threshold. This is not statistically kosher but probably justifiable if one is only interested in large homogeneous changes, for example flood inundations or deforestation.
Here is the code:
End of explanation
def change_maps(im_list, median=False, alpha=0.01):
Calculates thematic change maps.
k = im_list.length()
# Pre-calculate the P value array.
pv_arr = ee.List(p_values(im_list))
# Filter P values for change maps.
cmap = ee.Image(im_list.get(0)).select(0).multiply(0)
bmap = ee.Image.constant(ee.List.repeat(0, k.subtract(1))).add(cmap)
alpha = ee.Image.constant(alpha)
first = ee.Dictionary({'i': 1, 'alpha': alpha, 'median': median,
'cmap': cmap, 'smap': cmap, 'fmap': cmap, 'bmap': bmap})
return ee.Dictionary(pv_arr.iterate(filter_i, first))
Explanation: The following function ties the two steps together:
End of explanation
result = change_maps(im_list, median=True, alpha=0.05)
# Extract the change maps and display.
cmap = ee.Image(result.get('cmap'))
smap = ee.Image(result.get('smap'))
fmap = ee.Image(result.get('fmap'))
location = aoi.centroid().coordinates().getInfo()[::-1]
palette = ['black', 'blue', 'cyan', 'yellow', 'red']
mp = folium.Map(location=location, zoom_start=11)
mp.add_ee_layer(cmap, {'min': 0, 'max': 25, 'palette': palette}, 'cmap')
mp.add_ee_layer(smap, {'min': 0, 'max': 25, 'palette': palette}, 'smap')
mp.add_ee_layer(fmap, {'min': 0, 'max': 25, 'palette': palette}, 'fmap')
mp.add_child(folium.LayerControl())
Explanation: And now we run the algorithm and display the color-coded change maps: cmap, smap (blue early, red late) and fmap (blue few, red many):
End of explanation
def dmap_iter(current, prev):
Reclassifies values in directional change maps.
prev = ee.Dictionary(prev)
j = ee.Number(prev.get('j'))
image = ee.Image(current)
avimg = ee.Image(prev.get('avimg'))
diff = image.subtract(avimg)
# Get positive/negative definiteness.
posd = ee.Image(diff.select(0).gt(0).And(det(diff).gt(0)))
negd = ee.Image(diff.select(0).lt(0).And(det(diff).gt(0)))
bmap = ee.Image(prev.get('bmap'))
bmapj = bmap.select(j)
dmap = ee.Image.constant(ee.List.sequence(1, 3))
bmapj = bmapj.where(bmapj, dmap.select(2))
bmapj = bmapj.where(bmapj.And(posd), dmap.select(0))
bmapj = bmapj.where(bmapj.And(negd), dmap.select(1))
bmap = bmap.addBands(bmapj, overwrite=True)
# Update avimg with provisional means.
i = ee.Image(prev.get('i')).add(1)
avimg = avimg.add(image.subtract(avimg).divide(i))
# Reset avimg to current image and set i=1 if change occurred.
avimg = avimg.where(bmapj, image)
i = i.where(bmapj, 1)
return ee.Dictionary({'avimg': avimg, 'bmap': bmap, 'j': j.add(1), 'i': i})
Explanation: Post-processing: The Loewner order
The above change maps are still difficult to interpret. But what about bmap, the map of changes detected in each interval? Before we look at them it makes sense to include the direction of change, i.e., the Loewner order, see Part 2. In the event of significant change at time $j$, we can simply determine the positive or negative definiteness (or indefiniteness) of the difference between consecutive covariance matrix pixels
$$
c_j-c_{j-1},\quad j = 2,\dots,k,
$$
to get the change direction. But we can do better. Instead of subtracting the value for the preceding image, $c_{j-1}$, we can subtract the average over all values up to and including time $j-1$ for which no change has been signalled. For example for $k=5$, suppose there are significant changes in the first and fourth (last) interval. Then to get their directions we examine the differences
$$
c_2-c_1\quad{\rm and}\quad c_5 - (c_2+c_3+c_4)/3.
$$
The running averages can be conveniently determined with the so-called provisional means algorithm. The average $\bar c_i$ of the first $i$ images is calculated recursively as
$$
\begin{align}
\bar c_i &= \bar c_{i-1} + (c_i - \bar c_{i-1})/i \cr
\bar c_1 &= c_1.
\end{align}
$$
The function dmap_iter below is iterated over the bands of bmap, replacing the values for changed pixels with
1 for positive definite differences,
2 for negative definite differences,
3 for indefinite differences.
End of explanation
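The provisional means recursion itself is easy to verify on ordinary numbers (an illustrative aside, independent of the Earth Engine code):
import numpy as np
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
mean = x[0]                                  # c_bar_1 = c_1
for i, value in enumerate(x[1:], start=2):
    mean += (value - mean)/i                 # c_bar_i = c_bar_(i-1) + (c_i - c_bar_(i-1))/i
print(mean, x.mean())                        # both print 2.8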
def change_maps(im_list, median=False, alpha=0.01):
Calculates thematic change maps.
k = im_list.length()
# Pre-calculate the P value array.
pv_arr = ee.List(p_values(im_list))
# Filter P values for change maps.
cmap = ee.Image(im_list.get(0)).select(0).multiply(0)
bmap = ee.Image.constant(ee.List.repeat(0,k.subtract(1))).add(cmap)
alpha = ee.Image.constant(alpha)
first = ee.Dictionary({'i': 1, 'alpha': alpha, 'median': median,
'cmap': cmap, 'smap': cmap, 'fmap': cmap, 'bmap': bmap})
result = ee.Dictionary(pv_arr.iterate(filter_i, first))
# Post-process bmap for change direction.
bmap = ee.Image(result.get('bmap'))
avimg = ee.Image(im_list.get(0))
j = ee.Number(0)
i = ee.Image.constant(1)
first = ee.Dictionary({'avimg': avimg, 'bmap': bmap, 'j': j, 'i': i})
dmap = ee.Dictionary(im_list.slice(1).iterate(dmap_iter, first)).get('bmap')
return ee.Dictionary(result.set('bmap', dmap))
Explanation: We only have to modify the change_maps function to include the change direction in the bmap image:
End of explanation
# Run the algorithm with median filter and at 1% significance.
result = ee.Dictionary(change_maps(im_list, median=True, alpha=0.01))
# Extract the change maps and export to assets.
cmap = ee.Image(result.get('cmap'))
smap = ee.Image(result.get('smap'))
fmap = ee.Image(result.get('fmap'))
bmap = ee.Image(result.get('bmap'))
cmaps = ee.Image.cat(cmap, smap, fmap, bmap).rename(['cmap', 'smap', 'fmap']+timestamplist[1:])
# EDIT THE ASSET PATH TO POINT TO YOUR ACCOUNT.
assetId = 'users/YOUR_USER_NAME/cmaps'
assexport = ee.batch.Export.image.toAsset(cmaps,
description='assetExportTask',
assetId=assetId, scale=10, maxPixels=1e9)
# UNCOMMENT THIS TO EXPORT THE MAP TO YOUR ACCOUNT.
#assexport.start()
Explanation: Because of the long delays when the zoom level is changed, it is a lot more convenient to export the change maps to GEE Assets and then examine them, either here in Colab or in the Code Editor. This also means the maps will be shown at the correct scale, irrespective of the zoom level. Here I export all of the change maps as a single image.
End of explanation
cmaps = ee.Image('projects/earthengine-community/tutorials/detecting-changes-in-sentinel-1-imagery-pt-3/cmaps')
cmaps = cmaps.updateMask(cmaps.gt(0))
location = aoi.centroid().coordinates().getInfo()[::-1]
palette = ['black', 'red', 'cyan', 'yellow']
mp = folium.Map(location=location, zoom_start=13)
mp.add_ee_layer(cmaps.select('T20191107'), {'min': 0,'max': 3, 'palette': palette}, 'T20191107')
mp.add_ee_layer(cmaps.select('T20191113'), {'min': 0,'max': 3, 'palette': palette}, 'T20191113')
mp.add_ee_layer(cmaps.select('T20191119'), {'min': 0,'max': 3, 'palette': palette}, 'T20191119')
mp.add_ee_layer(cmaps.select('T20191125'), {'min': 0,'max': 3, 'palette': palette}, 'T20191125')
mp.add_ee_layer(cmaps.select('T20191201'), {'min': 0,'max': 3, 'palette': palette}, 'T20191201')
mp.add_ee_layer(cmaps.select('T20191207'), {'min': 0,'max': 3, 'palette': palette}, 'T20191207')
mp.add_child(folium.LayerControl())
Explanation: The asset cmaps is shared so we can all access it:
End of explanation |
3,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
print(testX[0])
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
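As a quick hypothetical illustration (not part of the notebook itself), a single label can be turned into a one-hot vector with NumPy:
import numpy as np
np.eye(10)[4]  # array([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.])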
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
show_digit(1)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None,784])
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
net = tflearn.fully_connected(net,10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, and it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation |
3,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example data
Step1: Dot (.) column expression
Create a column expression that will return the original column values. | Python Code:
mtcars = spark.read.csv('../../../data/mtcars.csv', inferSchema=True, header=True)
mtcars = mtcars.withColumnRenamed('_c0', 'model')
mtcars.show(5)
Explanation: Example data
End of explanation
mpg_col_exp = mtcars.mpg
mpg_col_exp
mtcars.select(mpg_col_exp).show(5)
Explanation: Dot (.) column expression
Create a column expression that will return the original column values.
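A column expression can also carry a transformation before it is selected — an illustrative example only (the kml alias is made up here, converting miles per gallon to kilometres per litre):
mtcars.select((mtcars.mpg * 0.425144).alias('kml')).show(5)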
End of explanation |
3,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive mapping
Alongside static plots, geopandas can create interactive maps based on the folium library.
Creating maps for interactive exploration mirrors the API of static plots in an explore() method of a GeoSeries or GeoDataFrame.
Loading some example data
Step1: The simplest option is to use GeoDataFrame.explore()
Step2: Interactive plotting offers largely the same customisation as static one plus some features on top of that. Check the code below which plots a customised choropleth map. You can use "BoroName" column with NY boroughs names as an input of the choropleth, show (only) its name in the tooltip on hover but show all values on click. You can also pass custom background tiles (either a name supported by folium, a name recognized by xyzservices.providers.query_name(), XYZ URL or xyzservices.TileProvider object), specify colormap (all supported by matplotlib) and specify black outline.
Step3: The explore() method returns a folium.Map object, which can also be passed directly (as you do with ax in plot()). You can then use folium functionality directly on the resulting map. In the example below, you can plot two GeoDataFrames on the same map and add layer control using folium. You can also add additional tiles allowing you to change the background directly in the map. | Python Code:
import geopandas
nybb = geopandas.read_file(geopandas.datasets.get_path('nybb'))
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
cities = geopandas.read_file(geopandas.datasets.get_path('naturalearth_cities'))
Explanation: Interactive mapping
Alongside static plots, geopandas can create interactive maps based on the folium library.
Creating maps for interactive exploration mirrors the API of static plots in an explore() method of a GeoSeries or GeoDataFrame.
Loading some example data:
End of explanation
nybb.explore()
Explanation: The simplest option is to use GeoDataFrame.explore():
End of explanation
nybb.explore(
column="BoroName", # make choropleth based on "BoroName" column
tooltip="BoroName", # show "BoroName" value in tooltip (on hover)
popup=True, # show all values in popup (on click)
tiles="CartoDB positron", # use "CartoDB positron" tiles
cmap="Set1", # use "Set1" matplotlib colormap
style_kwds=dict(color="black") # use black outline
)
Explanation: Interactive plotting offers largely the same customisation as static one plus some features on top of that. Check the code below which plots a customised choropleth map. You can use "BoroName" column with NY boroughs names as an input of the choropleth, show (only) its name in the tooltip on hover but show all values on click. You can also pass custom background tiles (either a name supported by folium, a name recognized by xyzservices.providers.query_name(), XYZ URL or xyzservices.TileProvider object), specify colormap (all supported by matplotlib) and specify black outline.
End of explanation
import folium
m = world.explore(
column="pop_est", # make choropleth based on "pop_est" column
scheme="naturalbreaks", # use mapclassify's natural breaks scheme
legend=True, # show legend
k=10, # use 10 bins
legend_kwds=dict(colorbar=False), # do not use colorbar
name="countries" # name of the layer in the map
)
cities.explore(
m=m, # pass the map object
color="red", # use red color on all points
marker_kwds=dict(radius=10, fill=True), # make marker radius 10px with fill
tooltip="name", # show "name" column in the tooltip
tooltip_kwds=dict(labels=False), # do not show column label in the tooltip
name="cities" # name of the layer in the map
)
folium.TileLayer('Stamen Toner', control=True).add_to(m) # use folium to add alternative tiles
folium.LayerControl().add_to(m) # use folium to add layer control
m # show map
Explanation: The explore() method returns a folium.Map object, which can also be passed directly (as you do with ax in plot()). You can then use folium functionality directly on the resulting map. In the example below, you can plot two GeoDataFrames on the same map and add layer control using folium. You can also add additional tiles allowing you to change the background directly in the map.
End of explanation |
3,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating datasets for 2D
We begin by reading the csv file into a data frame. This makes it easier to create the datasets we need.
Step1: Then we want to filter the data set.
We do this by only taking the rows with the category PROSTITUTION as well as removing some rows with invalid Y coordinate.
Step2: To reduce the amount of data we need to load on the page, we only extract the columns that we need.
In this case it is the district, longitude and latitude.
If this file were written to the disk at this point, the size would be around 700KB (i.e. very small).
Step3: Then we define a function that we use to calculate the clusters, as well as centroids.
Step4: Now we calculate the K-means clusterings for k = 2..6.
Step5: Here is a preview of our data, now enriched with K values.
Step6: Write our result
Lastly we write our result to the disk, so that we can use it on our page.
Step7: Below, the centroids are printed | Python Code:
import pandas as pd
from sklearn import cluster  # provides KMeans, used below
data_path = '../../SFPD_Incidents_-_from_1_January_2003.csv'
data = pd.read_csv(data_path)
Explanation: Creating datasets for 2D
We begin by reading the csv file into a data frame. This makes it easier to create the datasets we need.
End of explanation
mask = (data.Category == 'PROSTITUTION') & (data.Y != 90)
filterByCat = data[mask]
Explanation: Then we want to filter the data set.
We do this by only taking the rows with the category PROSTITUTION as well as removing some rows with invalid Y coordinate.
End of explanation
reducted = filterByCat[['PdDistrict','X','Y']]
Explanation: To reduce the amount of data we need to load on the page, we only extract the columns that we need.
In this case it is the district, longitude and latitude.
If this file were written to the disk at this point, the size would be around 700KB (i.e. very small).
End of explanation
X = data.loc[mask][['X','Y']]
centers = {}
def knn(k):
md = cluster.KMeans(n_clusters=k).fit(X)
return md.predict(X),md.cluster_centers_
Explanation: Then we define a function that we use to calculate the clusters, as well as centroids.
End of explanation
for i in range(2,7):
reducted['K'+str(i)], centers[i] = knn(i)
centers
Explanation: Now we calculate the K-means clusterings for k = 2..6.
End of explanation
reducted.head()
Explanation: Here is a preview of our data, now enriched with K values.
End of explanation
reducted.to_csv('week_8_vis_1.csv', sep=',')
Explanation: Write our result
Lastly we write our result to the disk, so that we can use it on our page.
End of explanation
centers
Explanation: Below, the centroids are printed
End of explanation |
3,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with models in FedJAX
In this chapter, we will learn about fedjax.Model. This notebook assumes you already have finished the "Datasets" chapter. We first overview centralized training and evaluation with fedjax.Model and then describe how to add new neural architectures and specify additional evaluation metrics.
Step1: Centralized training & evaluation with fedjax.Model
Most federated learning algorithms are built upon common components from standard centralized learning. fedjax.Model holds these common components. In centralized learning, we are mostly concerned with two tasks
Step2: Random initialization, the JAX way
To start training, we need some randomly initialized parameters. In JAX, pseudo random number generation works slightly differently. For now, it is sufficient to know we call jax.random.PRNGKey() to seed the random number generator. JAX has a detailed introduction on this topic, if you are interested.
To create the initial model parameters, we simply call fedjax.Model.init() with a PRNGKey.
Step3: Here are our initial model parameters. With the same PRNGKey, we will always get the same random initialization. There are 2 parameters in our model, the weights w, and the bias b. They are organized into a FlatMapping, but in general any PyTree can be used to store model parameters.
Step4: Evaluating model parameters
Before we start training, let's first see how our initial parameters fare on the train and test sets. Unsurprisingly, they do not do very well. We evaluate using the fedjax.evaluate_model() which takes in model, parameters, and datasets which are batched. As noted in the dataset tutorial, we batch using
fedjax.padded_batch_federated_data() for efficiency. fedjax.padded_batch_federated_data() is very similar to fedjax.ClientDataset.padded_batch() but operates over the entire federated dataset.
Step5: How does our model know what evaluation metrics to report? It is simply specified in the eval_metrics field. We will discuss evaluation metrics in more detail later.
Step6: Since fedjax.evaluate_model() simply takes a stream of batches, we can also use it to evaluate multiple clients.
Step7: The training objective
To train our model, we need two things
Step8: Note that the output is per example predictions and has shape (8, 62), where 8 is the batch size and 62 is the number of classes. Alternatively, we can use model_per_example_loss() to get a function that gives us the same result. model_per_example_loss() is a convenience function that does exactly what we just did.
Step9: The training objective is a scalar, so why does train_loss() return a vector of per-example loss values? First of all, the training objective in most cases is just the average of the per-example loss values, so arriving at the final training objective isn't hard. Moreover, in certain algorithms, we not only use the train loss over a single batch of examples for a stochastic training step, but also need to estimate the average train loss over an entire (client) dataset. Having the per-example loss values there is instrumental in obtaining the correct estimate when the batch sizes may vary.
Step10: Optimizers
With the training objective at hand, we just need an optimizer to find some good model parameters that minimize it.
There are many optimizer implementations in JAX out there, but FedJAX doesn't force one choice over any other. Instead, FedJAX provides a simple fedjax.optimizers.Optimizer interface so a new optimizer implementation can be wrapped. For convenience, FedJAX provides some common optimizers wrapped from optax.
Step11: An optimizer is simply a pair of two functions
Step12: Instead of using jax.grad() directly, FedJAX also provides a convenient fedjax.model_grad() which computes the gradient of a model with respect to the averaged fedjax.model_per_example_loss().
Step13: Let's wrap everything into a single JIT compiled function and train a few more steps, and evaluate again.
Step14: Building a custom model
fedjax.Model was designed with customization in mind. We have already seen how to switch to a different training loss. In this section, we will discuss how the rest of a fedjax.Model can be customized.
Training loss
Because train_loss() is separate from apply_for_train(), it is easy to switch to a different loss function.
Step15: Evaluation metrics
We have already seen that the eval_metrics field of a fedjax.Model tells the model what metrics to evaluate. eval_metrics is a mapping from metric names to fedjax.metrics.Metric objects. A fedjax.metrics.Metric object tells us how to calculate a metric's value from multiple batches of examples. Like fedjax.Model, a fedjax.metrics.Metric is stateless.
To customize the metrics to evaluate on, or what names to give to each, simply specify a different mapping.
Step16: There are already some concrete Metrics in fedjax.metrics. It is also easy to implement a new one. You can read more about how to implement a Metric in its own introduction.
The bit of fedjax.Model that is directly relevant to evaluation is apply_for_eval(). The relation between apply_for_eval() and an evaluation metric is similar to that between apply_for_train() and train_loss()
Step17: What apply_for_eval() needs to produce really just depends on what evaluation fedjax.metrics.Metrics will be used. In our case, we are using fedjax.metrics.Accuracy, and fedjax.metrics.CrossEntropyLoss. They are similar in their requirements on the inputs
Step18: Neural network architectures
We have now covered all five parts of a fedjax.Model, namely init(), apply_for_train(), apply_for_eval(), train_loss(), and eval_metrics. train_loss() and eval_metrics are easy to customize since they are mostly agnostic to the actual neural network architecture of the model. init(), apply_for_train(), and apply_for_eval() on the other hand, are closely related.
In principle, as long as these three functions meet the interface we have seen so far, they can be used to build a custom model. Let's try to build a multi-layer perceptron model with a hand-written cross entropy loss.
Step19: While writing custom neural network architectures from scratch is possible, most of the time, it is much more convenient to use a neural network library such as Haiku or jax.experimental.stax. The two functions fedjax.create_model_from_haiku and fedjax.create_model_from_stax can convert a neural network expressed in the respective framework into a fedjax.Model. Let's build a convolutional network using jax.experimental.stax this time. | Python Code:
# Uncomment these to install fedjax.
# !pip install fedjax
# !pip install --upgrade git+https://github.com/google/fedjax.git
import itertools
import jax
import jax.numpy as jnp
from jax.experimental import stax
import fedjax
Explanation: Working with models in FedJAX
In this chapter, we will learn about fedjax.Model. This notebook assumes you already have finished the "Datasets" chapter. We first overview centralized training and evaluation with fedjax.Model and then describe how to add new neural architectures and specify additional evaluation metrics.
End of explanation
# Load train/test splits of the EMNIST dataset.
train, test = fedjax.datasets.emnist.load_data()
# As a start, let's simply use a logistic regression model.
model = fedjax.models.emnist.create_logistic_model()
Explanation: Centralized training & evaluation with fedjax.Model
Most federated learning algorithms are built upon common components from standard centralized learning. fedjax.Model holds these common components. In centralized learning, we are mostly concerned with two tasks:
Training: We want to optimize our model parameters on the training dataset.
Evaluation: We want to know the values of evaluation metrics (e.g. accuracy) of the current model parameters on a test dataset.
Let's first see how we can carry out these two tasks on the EMNIST dataset with fedjax.Model.
End of explanation
params_rng = jax.random.PRNGKey(0)
params = model.init(params_rng)
Explanation: Random initialization, the JAX way
To start training, we need some randomly initialized parameters. In JAX, pseudo random number generation works slightly differently. For now, it is sufficient to know we call jax.random.PRNGKey() to seed the random number generator. JAX has a detailed introduction on this topic, if you are interested.
To create the initial model parameters, we simply call fedjax.Model.init() with a PRNGKey.
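For reference, a key can also be split to derive independent sub-keys — a generic JAX illustration, not something the rest of this chapter depends on:
rng = jax.random.PRNGKey(0)
rng, init_rng = jax.random.split(rng)  # fresh sub-key, e.g. for parameter initialization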
End of explanation
params
Explanation: Here are our initial model parameters. With the same PRNGKey, we will always get the same random initialization. There are 2 parameters in our model, the weights w, and the bias b. They are organized into a FlatMapping, but in general any PyTree can be used to store model parameters.
End of explanation
# We select first 16 batches using itertools.islice.
batched_test_data = list(itertools.islice(
fedjax.padded_batch_federated_data(test, batch_size=128), 16))
batched_train_data = list(itertools.islice(
fedjax.padded_batch_federated_data(train, batch_size=128), 16))
print('eval_test', fedjax.evaluate_model(model, params, batched_test_data))
print('eval_train', fedjax.evaluate_model(model, params, batched_train_data))
Explanation: Evaluating model parameters
Before we start training, let's first see how our initial parameters fare on the train and test sets. Unsurprisingly, they do not do very well. We evaluate using the fedjax.evaluate_model() which takes in model, parameters, and datasets which are batched. As noted in the dataset tutorial, we batch using
fedjax.padded_batch_federated_data() for efficiency. fedjax.padded_batch_federated_data() is very similar to fedjax.ClientDataset.padded_batch() but operates over the entire federated dataset.
End of explanation
model.eval_metrics
Explanation: How does our model know what evaluation metrics to report? It is simply specified in the eval_metrics field. We will discuss evaluation metrics in more detail later.
End of explanation
for client_id, dataset in itertools.islice(test.clients(), 4):
print(
client_id,
fedjax.evaluate_model(model, params,
dataset.padded_batch(batch_size=128)))
Explanation: Since fedjax.evaluate_model() simply takes a stream of batches, we can also use it to evaluate multiple clients.
End of explanation
# train_batches is an infinite stream of shuffled batches of examples.
def train_batches():
return fedjax.shuffle_repeat_batch_federated_data(
train,
batch_size=8,
client_buffer_size=16,
example_buffer_size=1024,
seed=0)
# We obtain the first batch by using the `next` function.
example = next(train_batches())
output = model.apply_for_train(params, example, None)
per_example_loss = model.train_loss(example, output)
output.shape, per_example_loss
Explanation: The training objective
To train our model, we need two things: the objective function to minimize and an optimizer.
fedjax.Model contains two functions that can be used to arrive at the training objective:
apply_for_train(params, batch_example, rng) takes the current model parameters, a batch of examples, and a PRNGKey, and returns some output.
train_loss(batch_example, train_output) translates the output of apply_for_train() into a vector of per-example loss values.
In our example model, apply_for_train() produces a score for each class and train_loss() is simply the cross entropy loss. apply_for_train() in this case does not make use of a PRNGKey, so we can pass None instead for convenience. A different apply_for_train() might actually make use of the PRNGKey, for tasks such as dropout.
End of explanation
per_example_loss_fn = fedjax.model_per_example_loss(model)
per_example_loss_fn(params, example, None)
Explanation: Note that the output is per example predictions and has shape (8, 62), where 8 is the batch size and 62 is the number of classes. Alternatively, we can use model_per_example_loss() to get a function that gives us the same result. model_per_example_loss() is a convenience function that does exactly what we just did.
End of explanation
def train_objective(params, example):
return jnp.mean(per_example_loss_fn(params, example, None))
train_objective(params, example)
Explanation: The training objective is a scalar, so why does train_loss() return a vector of per-example loss values? First of all, the training objective in most cases is just the average of the per-example loss values, so arriving at the final training objective isn't hard. Moreover, in certain algorithms, we not only use the train loss over a single batch of examples for a stochastic training step, but also need to estimate the average train loss over an entire (client) dataset. Having the per-example loss values there is instrumental in obtaining the correct estimate when the batch sizes may vary.
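As a small illustration of that point, here is a hypothetical helper (not part of fedjax) that averages correctly over batches of unequal size by summing the per-example losses:
def dataset_average_loss(params, batches):
    # Sum per-example losses and count examples, so a smaller final batch
    # is not over-weighted (padding/masking is ignored for simplicity).
    total, count = 0.0, 0
    for batch in batches:
        losses = per_example_loss_fn(params, batch, None)
        total += jnp.sum(losses)
        count += losses.shape[0]
    return total / count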
End of explanation
optimizer = fedjax.optimizers.adam(1e-3)
Explanation: Optimizers
With the training objective at hand, we just need an optimizer to find some good model parameters that minimize it.
There are many optimizer implementations in JAX out there, but FedJAX doesn't force one choice over any other. Instead, FedJAX provides a simple fedjax.optimizers.Optimizer interface so a new optimizer implementation can be wrapped. For convenience, FedJAX provides some common optimizers wrapped from optax.
End of explanation
opt_state = optimizer.init(params)
grads = jax.grad(train_objective)(params, example)
opt_state, params = optimizer.apply(grads, opt_state, params)
train_objective(params, example)
Explanation: An optimizer is simply a pair of two functions:
init(params) returns the initial optimizer state, such as initial values for accumulators of gradients.
apply(grads, opt_state, params) applies the gradients to update the current optimizer state and model parameters.
Instead of modifying opt_state or params, apply() returns a new pair of optimizer state and model parameters. In JAX, it is common to express computations in this stateless/mutation free style, often referred to as functional programming, or pure functions. The pureness of functions is crucial to many features in JAX, so it is always good practice to write functions that do not modify its inputs. You have probably also noticed that all the functions of fedjax.Model we have seen so far do not modify the model object itself (for example, init() returns model parameters instead of setting some attribute of model; apply_for_train() takes model parameters as an input argument, instead of getting it from model). FedJAX does this to keep all functions pure.
However, in the top level training loop, it is fine to mutate states since we are not in a function that may be transformed by JAX. Let's run our first training step, which resulted in a slight decrease in objective on the same batch of examples.
To obtain the gradients, we use jax.grad() which returns the gradient function. More details about jax.grad() can be found from the JAX documentation.
End of explanation
model_grads = fedjax.model_grad(model)(params, example, None)
opt_state, params = optimizer.apply(model_grads, opt_state, params)
train_objective(params, example)
Explanation: Instead of using jax.grad() directly, FedJAX also provides a convenient fedjax.model_grad() which computes the gradient of a model with respect to the averaged fedjax.model_per_example_loss().
End of explanation
@jax.jit
def train_step(example, opt_state, params):
grads = jax.grad(train_objective)(params, example)
return optimizer.apply(grads, opt_state, params)
for example in itertools.islice(train_batches(), 5000):
opt_state, params = train_step(example, opt_state, params)
print('eval_test', fedjax.evaluate_model(model, params, batched_test_data))
print('eval_train', fedjax.evaluate_model(model, params, batched_train_data))
Explanation: Let's wrap everything into a single JIT compiled function and train a few more steps, and evaluate again.
End of explanation
def hinge_loss(example, output):
label = example['y']
num_classes = output.shape[-1]
mask = jax.nn.one_hot(label, num_classes)
label_score = jnp.sum(output * mask, axis=-1)
best_score = jnp.max(output + 1 - mask, axis=-1)
return best_score - label_score
hinge_model = model.replace(train_loss=hinge_loss)
fedjax.model_per_example_loss(hinge_model)(params, example, None)
Explanation: Building a custom model
fedjax.Model was designed with customization in mind. We have already seen how to switch to a different training loss. In this section, we will discuss how the rest of a fedjax.Model can be customized.
Training loss
Because train_loss() is separate from apply_for_train(), it is easy to switch to a different loss function.
End of explanation
only_accuracy = model.replace(
eval_metrics={'accuracy': fedjax.metrics.Accuracy()})
fedjax.evaluate_model(only_accuracy, params, batched_test_data)
Explanation: Evaluation metrics
We have already seen that the eval_metrics field of a fedjax.Model tells the model what metrics to evaluate. eval_metrics is a mapping from metric names to fedjax.metrics.Metric objects. A fedjax.metrics.Metric object tells us how to calculate a metric's value from multiple batches of examples. Like fedjax.Model, a fedjax.metrics.Metric is stateless.
To customize the metrics to evaluate on, or what names to give to each, simply specify a different mapping.
End of explanation
jnp.all(
model.apply_for_train(params, example, None) == model.apply_for_eval(
params, example))
Explanation: There are already some concrete Metrics in fedjax.metrics. It is also easy to implement a new one. You can read more about how to implement a Metric in its own introduction.
The bit of fedjax.Model that is directly relevant to evaluation is apply_for_eval(). The relation between apply_for_eval() and an evaluation metric is similar to that between apply_for_train() and train_loss(): apply_for_eval(params, example) takes the model parameters and a batch of examples (notice there is no randomness in evaluation so we don't need a PRNGKey), and produces some prediction that evaluation metrics can consume. In our example, the outputs from apply_for_eval() and apply_for_train() are identical, but they don't have to be.
End of explanation
fedjax.metrics.Accuracy()
Explanation: What apply_for_eval() needs to produce really just depends on what evaluation fedjax.metrics.Metrics will be used. In our case, we are using fedjax.metrics.Accuracy, and fedjax.metrics.CrossEntropyLoss. They are similar in their requirements on the inputs:
They both need to know the true label from the example, using a target_key that defaults to "y".
They both need to know the predicted scores from apply_for_eval(), customizable as pred_key. If pred_key is None, apply_for_eval() should return just a vector of per-class scores; otherwise pred_key can be a string key, and apply_for_eval() should return a mapping (e.g. dict) that maps the key to a vector of per-class scores.
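For instance, spelling those defaults out explicitly (assuming the keyword names match the description above) is equivalent to the bare call shown earlier:
fedjax.metrics.Accuracy(target_key='y', pred_key=None)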
End of explanation
def cross_entropy_loss(example, output):
label = example['y']
num_classes = output.shape[-1]
mask = jax.nn.one_hot(label, num_classes)
return -jnp.sum(jax.nn.log_softmax(output) * mask, axis=-1)
def mlp_model(num_input_units, num_units, num_classes):
def mlp_init(rng):
w0_rng, w1_rng = jax.random.split(rng)
w0 = jax.random.uniform(w0_rng, [num_input_units, num_units])
b0 = jnp.zeros([num_units])
w1 = jax.random.uniform(w1_rng, [num_units, num_classes])
b1 = jnp.zeros([num_classes])
return w0, b0, w1, b1
def mlp_apply(params, batch, rng=None):
w0, b0, w1, b1 = params
x = batch['x']
batch_size = x.shape[0]
h = jax.nn.relu(x.reshape([batch_size, -1]) @ w0 + b0)
return h @ w1 + b1
return fedjax.Model(
init=mlp_init,
apply_for_train=mlp_apply,
apply_for_eval=mlp_apply,
train_loss=cross_entropy_loss,
eval_metrics={'accuracy': fedjax.metrics.Accuracy()})
# There are 28*28 input pixels, and 62 classes in EMNIST.
mlp = mlp_model(28 * 28, 128, 62)
@jax.jit
def mlp_train_step(example, opt_state, params):
@jax.grad
def grad_fn(params, example):
return jnp.mean(fedjax.model_per_example_loss(mlp)(params, example, None))
grads = grad_fn(params, example)
return optimizer.apply(grads, opt_state, params)
params = mlp.init(jax.random.PRNGKey(0))
opt_state = optimizer.init(params)
print('eval_test before training:',
fedjax.evaluate_model(mlp, params, batched_test_data))
for example in itertools.islice(train_batches(), 5000):
opt_state, params = mlp_train_step(example, opt_state, params)
print('eval_test after training:',
fedjax.evaluate_model(mlp, params, batched_test_data))
Explanation: Neural network architectures
We have now covered all five parts of a fedjax.Model, namely init(), apply_for_train(), apply_for_eval(), train_loss(), and eval_metrics. train_loss() and eval_metrics are easy to customize since they are mostly agnostic to the actual neural network architecture of the model. init(), apply_for_train(), and apply_for_eval() on the other hand, are closely related.
In principle, as long as these three functions meet the interface we have seen so far, they can be used to build a custom model. Let's try to build a multi-layer perceptron model with a hand-written cross entropy loss.
End of explanation
def stax_cnn_model(input_shape, num_classes):
stax_init, stax_apply = stax.serial(
stax.Conv(
out_chan=64, filter_shape=(3, 3), strides=(1, 1), padding='SAME'),
stax.Relu,
stax.Flatten,
stax.Dense(256),
stax.Relu,
stax.Dense(num_classes),
)
return fedjax.create_model_from_stax(
stax_init=stax_init,
stax_apply=stax_apply,
sample_shape=input_shape,
train_loss=cross_entropy_loss,
eval_metrics={'accuracy': fedjax.metrics.Accuracy()})
stax_cnn = stax_cnn_model([-1, 28, 28, 1], 62)
@jax.jit
def stax_cnn_train_step(example, opt_state, params):
@jax.grad
def grad_fn(params, example):
return jnp.mean(
fedjax.model_per_example_loss(stax_cnn)(params, example, None))
grads = grad_fn(params, example)
return optimizer.apply(grads, opt_state, params)
params = stax_cnn.init(jax.random.PRNGKey(0))
opt_state = optimizer.init(params)
print('eval_test before training:',
fedjax.evaluate_model(stax_cnn, params, batched_test_data))
for example in itertools.islice(train_batches(), 1000):
opt_state, params = stax_cnn_train_step(example, opt_state, params)
print('eval_test after training:',
fedjax.evaluate_model(stax_cnn, params, batched_test_data))
Explanation: While writing custom neural network architectures from scratch is possible, most of the time, it is much more convenient to use a neural network library such as Haiku or jax.experimental.stax. The two functions fedjax.create_model_from_haiku and fedjax.create_model_from_stax can convert a neural network expressed in the respective framework into a fedjax.Model. Let's build a convolutional network using jax.experimental.stax this time.
End of explanation |
3,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 3
Step1: Import libraries
Step2: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment
Step3: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step4: Create the embedding lookup model
You use the EmbeddingLookup class to create the item embedding lookup model. The EmbeddingLookup class inherits from tf.keras.Model, and is implemented in the
lookup_creator.py
module.
The EmbeddingLookup class works as follows
Step5: Create the model and export the SavedModel file
Call the export_saved_model method, which uses the EmbeddingLookup class to create the model and then exports the resulting SavedModel file
Step6: Inspect the exported SavedModel using the saved_model_cli command line tool
Step7: Test the SavedModel file
Test the SavedModel by loading it and then calling it with input item IDs | Python Code:
!pip install -q -U pip
!pip install -q tensorflow==2.2.0
!pip install -q -U google-auth google-api-python-client google-api-core
Explanation: Part 3: Create a model to serve the item embedding data
This notebook is the third of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to wrap the item embeddings data in a Keras model that can act as an item-embedding lookup, then export the model as a SavedModel.
Before starting this notebook, you must run the 02_export_bqml_mf_embeddings notebook to process the item embeddings data and export it to Cloud Storage.
After completing this notebook, run the 04_build_embeddings_scann notebook to create an approximate nearest neighbor index for the item embeddings.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
End of explanation
import os
import tensorflow as tf
import numpy as np
print(f'Tensorflow version: {tf.__version__}')
Explanation: Import libraries
End of explanation
PROJECT_ID = 'yourProject' # Change to your project.
BUCKET = 'yourBucketName' # Change to the bucket you created.
EMBEDDING_FILES_PATH = f'gs://{BUCKET}/bqml/item_embeddings/embeddings-*'
MODEL_OUTPUT_DIR = f'gs://{BUCKET}/bqml/embedding_lookup_model'
!gcloud config set project $PROJECT_ID
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
if tf.io.gfile.exists(MODEL_OUTPUT_DIR):
print("Removing {} contents...".format(MODEL_OUTPUT_DIR))
tf.io.gfile.rmtree(MODEL_OUTPUT_DIR)
Explanation: Create the embedding lookup model
You use the EmbeddingLookup class to create the item embedding lookup model. The EmbeddingLookup class inherits from tf.keras.Model, and is implemented in the
lookup_creator.py
module.
The EmbeddingLookup class works as follows:
Accepts the embedding_files_prefix variable in the class constructor. This variable points to the Cloud Storage location of the CSV files containing the item embedding data.
Reads and parses the item embedding CSV files.
Populates the vocabulary and embeddings class variables. vocabulary is an array of item IDs, while embeddings is a Numpy array with the shape (number of embeddings, embedding dimensions).
Appends the oov_embedding variable to the embeddings variable. The oov_embedding variable value is all zeros, and it represents the out of vocabulary (OOV) embedding vector. The oov_embedding variable is used when an invalid ("out of vocabulary", or OOV) item ID is submitted, in which case an embedding vector of zeros is returned.
Writes the vocabulary value to a file, one array element per line, so it can be used as a model asset by the SavedModel.
Uses token_to_idx, a tf.lookup.StaticHashTable object, to map the
item ID to the index of the embedding vector in the embeddings Numpy array.
Accepts a list of strings with the __call__ method of the model. Each string represents the item ID(s) for which the embeddings are to be retrieved. If the input list contains N strings, then N embedding vectors are returned.
Note that each string in the input list may contain one or more space-separated item IDs. If multiple item IDs are present, the embedding vectors of these item IDs are retrieved and combined (by averaging) into a single embedding vector. This makes it possible to fetch an embedding vector representing a set of items (like a playlist) rather than just a single item.
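A minimal sketch of that lookup-and-average step (hypothetical names and shapes; the actual implementation lives in lookup_creator.py):
def lookup_and_average(query, embeddings, token_to_idx):
    # query: one string such as '2114402 2120788' (space-separated item IDs).
    item_ids = tf.strings.split([query]).values
    indices = token_to_idx.lookup(item_ids)   # unknown IDs map to the OOV (all-zeros) row
    vectors = tf.gather(embeddings, indices)  # shape: (number of items, embedding dimensions)
    return tf.reduce_mean(vectors, axis=0)    # a set of items is combined by averaging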
Clear the model export directory
End of explanation
from embeddings_lookup import lookup_creator
lookup_creator.export_saved_model(EMBEDDING_FILES_PATH, MODEL_OUTPUT_DIR)
Explanation: Create the model and export the SavedModel file
Call the export_saved_model method, which uses the EmbeddingLookup class to create the model and then exports the resulting SavedModel file:
End of explanation
!saved_model_cli show --dir {MODEL_OUTPUT_DIR} --tag_set serve --signature_def serving_default
Explanation: Inspect the exported SavedModel using the saved_model_cli command line tool:
End of explanation
loaded_model = tf.saved_model.load(MODEL_OUTPUT_DIR)
input_items = ['2114406', '2114402 2120788', 'abc123']
output = loaded_model(input_items)
print(f'Embeddings retrieved: {output.shape}')
for idx, embedding in enumerate(output):
print(f'{input_items[idx]}: {embedding[:5]}')
Explanation: Test the SavedModel file
Test the SavedModel by loading it and then calling it with input item IDs:
End of explanation |
3,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2D Histograms in physt
Step1: Multidimensional binning
In most cases, binning methods that apply to 1D histograms can also be used in higher dimensions. In such cases, each parameter can be either scalar (applies to all dimensions) or a list/tuple with independent values for each dimension. This also applies to range, which has to be a list/tuple of tuples.
Step2: Plotting
2D
Step3: Large histograms as images
Plotting histograms in this way gets problematic with more than roughly 50x50 bins. There is an alternative, though, partially inspired by the datashader project - plot the histogram as bitmap, which works very fast even for very large histograms.
Note
Step4: See that the output is equivalent to map without lines.
Transformation
Sometimes, the value range is too big to show details. Therefore, it may be of some use to transform the values by a function, e.g. logarithm.
Step5: 3D
By this, we mean 3D bar plots of 2D histograms (not a visual representation of 3D histograms).
Step6: Projections
Step7: Adaptive 2D histograms
Step8: N-dimensional histograms
Although it is not easy to visualize them, it is possible to create histograms of any dimension that behave similarly to 2D ones. Warning
Step9: Support for pandas DataFrames (without pandas dependency ;-)) | Python Code:
# Necessary import evil
import physt
from physt import h1, h2, histogramdd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
# Some data
x = np.random.normal(100, 1, 1000)
y = np.random.normal(10, 10, 1000)
# Create a simple histogram
histogram = h2(x, y, [8, 4], name="Some histogram", axis_names=["x", "y"])
histogram
# Frequencies are a 2D-array
histogram.frequencies
Explanation: 2D Histograms in physt
End of explanation
histogram = h2(x, y, "fixed_width", bin_width=[2, 10], name="Fixed-width bins", axis_names=["x", "y"])
histogram.plot();
histogram.numpy_bins
histogram = h2(x, y, "quantile", bin_count=[3, 4], name="Quantile bins", axis_names=["x", "y"])
histogram.plot(cmap_min=0);
histogram.numpy_bins
histogram = h2(x, y, "human", bin_count=5, name="Human-friendly bins", axis_names=["x", "y"])
histogram.plot();
histogram.numpy_bins
Explanation: Multidimensional binning
In most cases, binning methods that apply to 1D histograms can also be used in higher dimensions. In such cases, each parameter can be either scalar (applies to all dimensions) or a list/tuple with independent values for each dimension. This also applies to range, which has to be a list/tuple of tuples.
End of explanation
# Default is workable
ax = histogram.plot()
# Custom colormap, no colorbar
import matplotlib.cm as cm
fig, ax = plt.subplots()
ax = histogram.plot(ax=ax, cmap=cm.copper, show_colorbar=False, grid_color=cm.copper(0.5))
ax.set_title("Custom colormap");
# Use a named colormap + limit it to a range of values
import matplotlib.cm as cm
fig, ax = plt.subplots()
ax = histogram.plot(ax=ax, cmap="Oranges", show_colorbar=True, cmap_min=20, cmap_max=100, show_values=True)
ax.set_title("Clipped colormap");
# Show labels (and hide zero bins), no grid(lw=0)
ax = histogram.plot(show_values=True, show_zero=False, cmap=cm.RdBu, format_value=float, lw=0)
Explanation: Plotting
2D
End of explanation
x = np.random.normal(100, 1, 1000000)
y = np.random.normal(10, 10, 1000000)
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
h2(x, y, 20, name="20 bins - map").plot("map", cmap="rainbow", lw=0, alpha=1, ax=axes[0], show_colorbar=False)
h2(x, y, 20, name="20 bins - image").plot("image", cmap="rainbow", alpha=1, ax=axes[1])
h2(x, y, 500, name="500 bins - image").plot("image", cmap="rainbow", alpha=1, ax=axes[2]);
Explanation: Large histograms as images
Plotting histograms in this way gets problematic with more than roughly 50x50 bins. There is an alternative, though, partially inspired by the datashader project - plot the histogram as bitmap, which works very fast even for very large histograms.
Note: This method does not work for histograms with irregular bins.
End of explanation
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
h2(x, y, 20, name="20 bins - map").plot("map", alpha=1, lw=0, show_zero=False, cmap="rainbow", ax=axes[0], show_colorbar=False, cmap_normalize="log")
h2(x, y, 20, name="20 bins - image").plot("image", alpha=1, ax=axes[1], cmap="rainbow", cmap_normalize="log")
h2(x, y, 500, name="500 bins - image").plot("image", alpha=1, ax=axes[2], cmap="rainbow", cmap_normalize="log");
# Composition - show histogram overlayed with "points"
fig, ax = plt.subplots(figsize=(8, 7))
h_2 = h2(x, y, 30)
h_2.plot("map", lw=0, alpha=0.9, cmap="Blues", ax=ax, cmap_normalize="log", show_zero=False)
# h2(x, y, 300).plot("image", alpha=1, cmap="Greys", ax=ax, transform=lambda x: x > 0);
# Not working currently
Explanation: See that the output is equivalent to map without lines.
Transformation
Sometimes, the value range is too big to show details. Therefore, it may be of some use to transform the values by a function, e.g. logarithm.
End of explanation
histogram.plot("bar3d", cmap="rainbow");
histogram.plot("bar3d", color="red");
Explanation: 3D
By this, we mean 3D bar plots of 2D histograms (not a visual representation of 3D histograms).
End of explanation
proj1 = histogram.projection("x", name="Projection to X")
proj1.plot(errors=True)
proj1
proj2 = histogram.projection("y", name="Projection to Y")
proj2.plot(errors=True)
proj2
Explanation: Projections
End of explanation
# Create and add two histograms with adaptive binning
height1 = np.random.normal(180, 5, 1000)
weight1 = np.random.normal(80, 2, 1000)
ad1 = h2(height1, weight1, "fixed_width", bin_width=1, adaptive=True)
ad1.plot(show_zero=False)
height2 = np.random.normal(160, 5, 1000)
weight2 = np.random.normal(70, 2, 1000)
ad2 = h2(height2, weight2, "fixed_width", bin_width=1, adaptive=True)
ad2.plot(show_zero=False)
(ad1 + ad2).plot(show_zero=False);
Explanation: Adaptive 2D histograms
End of explanation
# Create a 4D histogram
data = [np.random.rand(1000)[:, np.newaxis] for i in range(4)]
data = np.concatenate(data, axis=1)
h4 = histogramdd(data, [3, 2, 2, 3], axis_names="abcd")
h4
h4.frequencies
h4.projection("a", "d", name="4D -> 2D").plot(show_values=True, format_value=int, cmap_min="min");
h4.projection("d", name="4D -> 1D").plot("scatter", errors=True);
Explanation: N-dimensional histograms
Although it is not easy to visualize them, it is possible to create histograms of any dimension that behave similarly to 2D ones. Warning: be aware that the memory consumption can be significant.
End of explanation
# Load notorious example data set
import seaborn as sns
iris = sns.load_dataset('iris')
iris_hist = physt.h2(iris["sepal_length"], iris["sepal_width"], "human", bin_count=[12, 7], name="Iris")
iris_hist.plot(show_zero=False, cmap=cm.gray_r, show_values=True, format_value=int);
iris_hist.projection("sepal_length").plot();
Explanation: Support for pandas DataFrames (without pandas dependency ;-))
End of explanation |
3,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Head data is generated for a pumping test in a two-aquifer model. The well starts pumping at time $t=0$ with a discharge $Q=800$ m$^3$/d. The head is measured in an observation well 10 m from the pumping well. The thickness of the aquifer is 20 m. Questions
Step1: Model as semi-confined | Python Code:
# Imports assumed from the notebook's setup cell.
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
def generate_data():
# 2 layer model with some random error
ml = ModelMaq(kaq=[10, 20], z=[0, -20, -22, -42], c=[1000],
Saq=[0.0002, 0.0001], tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve()
t = np.logspace(-2, 1, 100)
h = ml.head(10, 0, t)
plt.figure()
r = 0.01 * np.random.randn(100)
n = np.zeros_like(r)
alpha = 0.8
for i in range(1, len(n)):
n[i] = 0.8 * n[i - 1] + r[i]
ho = h[0] + n
plt.plot(t, ho, '.')
data = np.zeros((len(ho), 2))
data[:, 0] = t
data[:, 1] = ho
#np.savetxt('pumpingtestdata.txt', data, fmt='%2.3f', header='time (d), head (m)')
return data
np.random.seed(11)
data = generate_data()
to = data[:, 0]
ho = data[:, 1]
def func(p, to=to, ho=ho, returnmodel=False):
k = p[0]
S = p[1]
ml = ModelMaq(kaq=k, z=[0, -20], Saq=S, tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
if returnmodel:
return ml
h = ml.head(10, 0, to)
return np.sum((h[0] - ho) ** 2)
from scipy.optimize import fmin
lsopt = fmin(func, [10, 1e-4])
print('optimal parameters:', lsopt)
print('rmse:', np.sqrt(func(lsopt) / len(ho)))
ml = func(lsopt, returnmodel=True)
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=10, pmin=0.1, pmax=1000)
cal.set_parameter(name='Saq0', initial=1e-4, pmin=1e-5, pmax=1e-3)
cal.series(name='obs1', x=10, y=0, layer=0, t=to, h=ho)
cal.fit(report=False)
print('rmse:', cal.rmse())
cal.parameters.style.set_precision(3)
Explanation: Head data is generated for a pumping test in a two-aquifer model. The well starts pumping at time $t=0$ with a discharge $Q=800$ m$^3$/d. The head is measured in an observation well 10 m from the pumping well. The thickness of the aquifer is 20 m. Questions:
Determine the optimal values of the hydraulic conductivity and specific storage coefficient of the aquifer when the aquifer is approximated as confined. Use a least squares approach and make use of the fmin function of scipy.optimize to find the optimal values. Plot the data with dots and the best-fit model in one graph. Print the optimal values of $k$ and $S_s$ to the screen as well as the root mean squared error of the residuals.
Repeat Question 1 but now approximate the aquifer as semi-confined. Plot the data with dots and the best-fit model in one graph. Print the optimal values of $k$, $S_s$ and $c$ to the screen as well as the root mean squared error of the residuals. Is the semi-confined model a better fit than the confined model?
End of explanation
def func2(p, to=to, ho=ho, returnmodel=False):
k = p[0]
S = p[1]
c = p[2]
ml = ModelMaq(kaq=k, z=[2, 0, -20], Saq=S, c=c, topboundary='semi',
tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
if returnmodel:
return ml
h = ml.head(10, 0, to)
return np.sum((h[0] - ho) ** 2)
lsopt2 = fmin(func2, [10, 1e-4, 1000])
print('optimal parameters:', lsopt2)
print('rmse:', np.sqrt(func2(lsopt2) / len(ho)))
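# Optional cross-check (not part of the original notebook): func and func2 both
# return the sum of squared residuals, so the RMSE of the confined fit (lsopt)
# and the semi-confined fit (lsopt2) can be compared side by side.
rmse_confined = np.sqrt(func(lsopt) / len(ho))
rmse_semi = np.sqrt(func2(lsopt2) / len(ho))
print('confined rmse:', rmse_confined, ' semi-confined rmse:', rmse_semi)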
ml = func2(lsopt2, returnmodel=True)
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
ml = ModelMaq(kaq=10, z=[2, 0, -20], Saq=1e-4, c=1000, topboundary='semi', tmin=0.001, tmax=100)
w = Well(ml, 0, 0, rw=0.3, tsandQ=[(0, 800)])
ml.solve(silent=True)
cal = Calibrate(ml)
cal.set_parameter(name='kaq0', initial=10)
cal.set_parameter(name='Saq0', initial=1e-4)
cal.set_parameter(name='c0', initial=1000)
cal.series(name='obs1', x=10, y=0, layer=0, t=to, h=ho)
cal.fit(report=False)
cal.parameters.style.set_precision(5)
cal.rmse(), ml.aq.kaq
plt.figure()
plt.plot(data[:, 0], data[:, 1], '.', label='observed')
hm = ml.head(10, 0, to)
plt.plot(to, hm[0], 'r', label='modeled')
plt.legend()
plt.xlabel('time (d)')
plt.ylabel('head (m)');
Explanation: Model as semi-confined
End of explanation |
3,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample Implementation of Poisson Kriging
This notebook contains an implementation example of Poisson kriging.
The data used is from ZoneA.data (for details, please refer to this link)
Step1: Step 1. Read the data
Step2: Step 2. Build Poisson Kriging model
Step3: Step 3. Semivariogram Analysis
(1). Spherical Model
$
\gamma(h) = \begin{cases}
c \cdot \left( 1.5 \cdot \left( \frac{h}{a} \right) - 0.5 \cdot \left( \frac{h}{a} \right)^3 \right) & \text{if } h \leq a \\
c & \text{otherwise}
\end{cases}
$
Step4: Extra part
To better visualize the change of MSE versus a, you may want to plot the curve of MSE vs. a
Step5: (2). Exponential Model
$
\gamma(h) = c \cdot \left( 1 - e^{- \frac{h}{a}} \right)
$
Step6: Extra part
To better visualize the change of MSE versus a, you may want to plot the curve of MSE vs. a
Step7: (3). Gaussian Model
$
\gamma(h) = c \cdot \left( 1 - e^{- (\frac{h}{a})^2 } \right)
$
Step8: Extra part
To better visualize the change of MSE versus a, you may want to plot the curve of MSE vs. a
Step9: Step 4. Make Predictions
Step10: (1). Make prediction on the original location
Step11: (2). Make predictions on other locations and make scatter plot
Step12: Step 5. Get result | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from kriging2 import Kriging
%matplotlib inline
Explanation: Sample Implementation of Poisson Kriging
This notebook contains an implementation example of Poisson kriging.
The data used is from ZoneA.data (for details, please refer to this link)
End of explanation
# read data
with open('./data/ZoneA.dat', 'r') as f:
data = f.readlines()
data = [i.strip().split() for i in data[10:] ]
data = np.array(data, dtype=float)  # the builtin float replaces the deprecated np.float
data = pd.DataFrame(data, columns=['x','y','thk','por','perm','lperm','lpermp','lpermr'])
x = data['x'].values
y = data['y'].values
z = data['por'].values
Explanation: Step 1. Read the data
End of explanation
# Build the Poisson Kriging model
model = Kriging()
model.fit(x, y, z, xmin=None, xmax=None, ymin=None, ymax=None, xsplit=100, ysplit=100)
Explanation: Step 2. Build Poisson Kriging model
End of explanation
# Plot the semivariogram curve
a_range = np.linspace(1, 5000, 1000)
fig, mse = model.semivariogram(a_range=a_range, x_range=(0, 13000), bandwidth=1000,
model='sepherical', figsize=(16, 5), verbose=True)
Explanation: Step 3. Semivariogram Analysis
(1). Spherical Model
$
\gamma(h) = \begin{cases}
c \cdot \left( 1.5 \cdot \left( \frac{h}{a} \right) - 0.5 \cdot \left( \frac{h}{a} \right)^3 \right) & \text{if } h \leq a \\
c & \text{otherwise}
\end{cases}
$
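For reference, a minimal NumPy sketch of this spherical semivariogram is given below; it is purely illustrative (the fitting itself is done inside the kriging2 Kriging class used above), with h, a and c taken to be the lag distance, range and sill.
import numpy as np

def spherical_semivariogram(h, a, c):
    # gamma(h) = c * (1.5*(h/a) - 0.5*(h/a)**3) for h <= a, and c for h > a
    h = np.asarray(h, dtype=float)
    gamma = c * (1.5 * (h / a) - 0.5 * (h / a) ** 3)
    return np.where(h <= a, gamma, c)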
End of explanation
plt.figure()
plt.plot(a_range, mse, label='MSE')
plt.xlabel('a', fontsize=12)
plt.ylabel('MSE', fontsize=12)
plt.title('MSE vs. a', fontsize=16)
plt.legend(fontsize=12)
plt.show()
Explanation: Extra part
To better visualize the change of MSE versus a, you may want to plot the curve of MSE vs. a
End of explanation
# Plot the semivariogram curve
a_range = np.linspace(1, 5000, 1000)
fig, mse = model.semivariogram(a_range=a_range, x_range=(0, 13000), bandwidth=1000,
model='exponential', figsize=(16, 5), verbose=True)
Explanation: (2). Exponential Model
$
\gamma(h) = c \cdot \left( 1 - e^{- \frac{h}{a}} \right)
$
End of explanation
plt.figure()
plt.plot(a_range, mse, label='MSE')
plt.xlabel('a', fontsize=12)
plt.ylabel('MSE', fontsize=12)
plt.title('MSE vs. a', fontsize=16)
plt.legend(fontsize=12)
plt.show()
Explanation: Extra part
To better visualize the change of MSE versus a, you may want to plot the curve of MSE vs. a
End of explanation
# Plot the semivariogram curve
a_range = np.linspace(1, 5000, 1000)
fig, mse = model.semivariogram(a_range=a_range, x_range=(0, 13000), bandwidth=1000,
model='gaussian', figsize=(16, 5), verbose=True)
Explanation: (3). Gaussian Model
$
\gamma(h) = c \cdot \left( 1 - e^{- (\frac{h}{a})^2 } \right)
$
End of explanation
plt.figure()
plt.plot(a_range, mse, label='MSE')
plt.xlabel('a', fontsize=12)
plt.ylabel('MSE', fontsize=12)
plt.title('MSE vs. a', fontsize=16)
plt.legend(fontsize=12)
plt.show()
Explanation: Extra part
To better visualize the change of MSE versus a, you may want to plot the curve of MSE vs. a
End of explanation
a_range = np.linspace(1, 5000, 1000)
model.predict(loc=None, x_range=(0, 13000), bandwidth=1000, a_range=a_range,
model='sepherical', fit=True)
Explanation: Step 4. Make Predictions
End of explanation
prediction = np.zeros(len(z))
variance = np.zeros(len(z))
for i in range(len(x)):
prediction[i], variance[i] = model.predict(loc=(x[i], y[i]), a=3819.055,
model='sepherical', fit=False)
# plot of the original value and the predictions
plt.figure(figsize=(8, 6))
plt.plot(z, 'g.', label='Actual Value')
plt.plot(prediction, 'r.', label='Estimated Value')
plt.legend(fontsize=12, loc=1)
plt.show()
# make predictions on the original points
fig, ax = plt.subplots()
img = ax.scatter(x, y, c=prediction)
ax.axis('image')
ax.set_xlim((model.xmin, model.xmax))
ax.set_ylim((model.ymin, model.ymax))
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.colorbar(img)
plt.show()
Explanation: (1). Make prediction on the original location
End of explanation
fig = model.plot2D(fitted=False)
plt.show()
fig = model.plot2D(fitted=True)
plt.show()
Explanation: (2). Make predictions on other locations and make scatter plot
End of explanation
# return distance, mu, a, c and df (data frame)
distance, mu, a, c, df = model.get_result()
df
Explanation: Step 5. Get result
End of explanation |
3,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lmec', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-LMEC
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
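# For example (hypothetical values only, not the actual document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")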
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
3,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 3
Step1: We'll train a logistic regression model of the form
$$
p(y = 1 ~|~ {\bf x}; {\bf w}) = \frac{1}{1 + \textrm{exp}[-(w_0 + w_1x_1 + w_2x_2)]}
$$
using sklearn's logistic regression classifier as follows
Step2: Q
Step3: Problem 2
Step4: Let's also store the documents in a list as follows
Step5: To be consistent with sklearn conventions, we'll encode the documents as row-vectors stored in a matrix. In this case, each row of the matrix corresponds to a document, and each column corresponds to a term in the vocabulary. For our example this gives us a matrix $M$ of shape $3 \times 6$. The $(d,t)$-entry in $M$ is then the number of times the term $t$ appears in document $d$
Q
Step6: Hopefully your code returns the matrix
$$M =
\left[
\begin{array}{ccccccc}
0 & 0 & 1 & 0 & 1 & 1 \
0 & 0 & 1 & 1 & 0 & 1 \
1 & 1 & 0 & 0 & 1 & 0 \
\end{array}
\right]$$.
Note that the entry in the (2,0) position is $1$ because the first word (angeles) appears once in the third document.
OK, let's see how we can construct the same term-frequency matrix in sklearn. We will use something called the <a href="http
Step7: The $\texttt{fit_transform}$ method actually does two things. It fits the model to the training data by building a vocabulary. It then transforms the text in $D$ into matrix form.
If we wish to see the vocabulary you can do it like so
Step8: Note that this is the same vocabulary and indexing that we defined ourselves (just in a different order). Hopefully that means we'll get the same term-frequency matrix. We can print $X$ and check
Step9: Yep, they're the same! Notice that we had to convert $X$ to a dense matrix for printing. This is because CountVectorizer actually returns a sparse matrix. This is a very good thing since most vectors in a text model will be extremely sparse, since most documents will only contain a handful of words from the vocabulary.
OK, let's see how we can use the CountVectorizer to transform the test documents into their own term-frequency matrix.
Step10: OK, now suppose that we have a query document not included in the training set that we want to vectorize.
Step11: We've already fit the CountVectorizer to the training set, so all we need to do is transform the test set documents into a term-frequency vector using the same conventions. Since we've already fit the model, we do the transformation with the $\texttt{transform}$ method
Step12: Let's print it and see what it looks like
Step13: Notice that the query document included the word $\texttt{new}$ twice, which corresponds to the entry in the $(0,2)$-position.
Q
Step14: Hopefully you got something like the following
Step15: Let's see what we get when we use sklearn. Sklearn has a vectorizer called TfidfVectorizer which is similar to CountVectorizer, but it computes tf-idf scores.
Step16: Note that these are not quite the same, because sklearn's implementation of tf-idf uses the add-one smoothing in the denominator for idf.
Okay, now let's see if we can use TFIDF analysis on real text documents!
Run the following code to use this analysis on President Obama's inauguration speech from 2009. It will output what TFIDF thinks are the most important words from each paragraph
Q
Step17: <br>
Problem 4
Step18: The current parameters are set to not remove stop words from the text so that it's a bit easier to explore.
Look at a few of the reviews stored in $\texttt{text_train}$ as well as their associated labels in $\texttt{labels_train}$. Can you figure out which label refers to a positive review and which refers to a negative review?
Step19: The first review is labeled $1$ and has the following text
Step20: The fourth review is labeled $0$ and has the following text
Step21: Hopefully it's obvious that label 1 corresponds to positive reviews and label 0 to negative reviews!
OK, the first thing we'll do is train a logistic regression classifier using the Bag-of-Words model, and see what kind of accuracy we can get. To get started, we need to vectorize the text into mathematical features that we can use. We'll use CountVectorizer to do the job. (Before starting, I'm going to reload the data and remove the stop words this time)
Step22: Q
Step23: OK, so we got an accuracy of around 81% using Bag-of-Words. Now lets do the same tests but this time with tf-idf features. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import datasets
iris = datasets.load_iris()
X_train = iris.data[iris.target != 2, :2] # first two features and
y_train = iris.target[iris.target != 2] # first two labels only
fig = plt.figure(figsize=(8,8))
mycolors = {"blue": "steelblue", "red": "#a76c6e", "green": "#6a9373"}
plt.scatter(X_train[:, 0], X_train[:, 1], s=100, alpha=0.9, c=[mycolors["red"] if yi==1 else mycolors["blue"] for yi in y_train])
plt.xlabel('sepal length', fontsize=16)
plt.ylabel('sepal width', fontsize=16);
Explanation: Lecture 3: Logistic Regression and Text Models
<img src="figs/logregwordcloud.png">
Problem 1: Logistic Regression for 2D Continuous Features
In the video lecture you saw some examples of using logistic regression to do binary classification on text data (SPAM vs HAM) and on 1D continuous data. In this problem we'll look at logistic regression for 2D continuous data. The data we'll use are <a href="https://www.math.umd.edu/~petersd/666/html/iris_with_labels.jpg">sepal</a> measurements from the ubiquitous iris dataset.
<p>
<img style="float:left; width:450px" src="https://upload.wikimedia.org/wikipedia/commons/9/9f/Iris_virginica.jpg">
</p>
The two features of our model will be the sepal length and sepal width. Execute the following cell to see a plot of the data. The blue points correspond to the sepal measurements of the Iris Setosa (left) and the red points correspond to the sepal measurements of the Iris Versicolour (right).
End of explanation
from sklearn.linear_model import LogisticRegression # import from sklearn
logreg = LogisticRegression() # initialize classifier
logreg.fit(X_train, y_train); # train on training data
Explanation: We'll train a logistic regression model of the form
$$
p(y = 1 ~|~ {\bf x}; {\bf w}) = \frac{1}{1 + \textrm{exp}[-(w_0 + w_1x_1 + w_2x_2)]}
$$
using sklearn's logistic regression classifier as follows
End of explanation
import numpy as np
import math
fig = plt.figure(figsize=(8,8))
plt.scatter(X_train[:, 0], X_train[:, 1], s=100, c=[mycolors["red"] if yi==1 else mycolors["blue"] for yi in y_train])
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
x_min, x_max = np.min(X_train[:,0])-0.1, np.max(X_train[:,0])+0.1
y_min, y_max = np.min(X_train[:,1])-0.1, np.max(X_train[:,1])+0.1
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
x1 = np.linspace(x_min, x_max, 100)
w0 = logreg.intercept_
w1 = logreg.coef_[0][0]
w2 = logreg.coef_[0][1]
x2 = ... # TODO
plt.plot(x1, x2, color="gray");
Explanation: Q: Determine the parameters ${\bf w}$ fit by the model. It might be helpful to consult the documentation for the classifier on the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html">sklearn website</a>. Hint: The classifier stores the coefficients and bias term separately.
Q: In general, what does the Logistic Regression decision boundary look like for data with two features?
Q: Modify the code below to plot the decision boundary along with the data.
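For reference, here is one way the TODO in the plotting cell above could be completed; it is just a sketch of one valid answer. The logistic regression decision boundary is the set of points where $w_0 + w_1x_1 + w_2x_2 = 0$, so we solve for $x_2$:
python
x2 = -(w0 + w1 * x1) / w2   # points where w0 + w1*x1 + w2*x2 = 0, i.e. p(y=1|x) = 0.5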
End of explanation
V = {"angeles": 0, "los": 1, "new": 2, "post": 3, "times": 4, "york": 5}
Explanation: Problem 2: The Bag-of-Words Text Model
The remainder of today's exercise will consider the problem of predicting the semantics of text. In particular, later we'll look at predicting whether movie reviews are positive or negative just based on their text.
Before we can utilize text as features in a learning model, we need a concise mathematical way to represent things like words, phrases, sentences, etc. The most common text models are based on the so-called <a href="https://en.wikipedia.org/wiki/Vector_space_model">Vector Space Model</a> (VSM) where individual words in a document are associated with entries of a vector:
$$
\textrm{"The sky is blue"} \quad \Rightarrow \quad
\left[
\begin{array}{c}
0 \
1 \
0 \
0 \
1
\end{array}
\right]
$$
The first step in creating a VSM is to define a vocabulary, $V$, of words that you will include in your model. This vocabulary can be determined by looking at all (or most) of the words in the training set, or even by including a fixed vocabulary based on the english language. A vector representation of a document like a movie review is then a vector with length $|V|$ where each entry in the vector maps uniquely to a word in the vocabulary. A vector encoding of a document would then be a vector that is nonzero in positions corresponding to words present in the document and zero everywhere else. How you fill in the nonzero entries depends on the model you're using. Two simple conventions are the Bag-of-Words model and the binary model.
In the binary model we simply set an entry of the vector to $1$ if the associate word appears at least once in the document. In the more common Bag-of-Words model we set an entry of the vector equal to the frequency with which the word appears in the document. Let's see if we can come up with a simple implementation of the Bag-of-Words model in Python, and then later we'll see how sklearn can do the heavy lifting for us.
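To make the two conventions concrete before building anything, here is a tiny hand-rolled sketch; the toy vocabulary and document below are purely illustrative.
python
import numpy as np
vocab = {"blue": 0, "is": 1, "sky": 2, "the": 3}      # illustrative vocabulary
doc = "the sky is blue blue"                          # illustrative document
counts = np.zeros(len(vocab))
for w in doc.split():
    if w in vocab:
        counts[vocab[w]] += 1
print(counts)                      # Bag-of-Words encoding: [2. 1. 1. 1.]
print((counts > 0).astype(int))    # binary encoding:       [1 1 1 1]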
Consider a training set containing three documents, specified as follows
$\texttt{Training Set}:$
$\texttt{d1}: \texttt{new york times}$
$\texttt{d2}: \texttt{new york post}$
$\texttt{d3}: \texttt{los angeles times}$
First we'll define the vocabulary based on the words in the test set. It is $V = { \texttt{angeles}, \texttt{los}, \texttt{new}, \texttt{post}, \texttt{times}, \texttt{york}}$.
We need to define an association between the particular words in the vocabulary and the specific entries in our vectors. Let's define this association in the order that we've listed them above. We can store this mapping as a Python dictionary as follows:
End of explanation
D = ["the new york times", "the new york post", "the los angeles times"]
Explanation: Let's also store the documents in a list as follows:
End of explanation
M = np.zeros((len(D),len(V)))
for ii, doc in enumerate(D):
for term in doc.split():
if(term in V): #only print if the term is in our dictionary
... #TODO
print(M)
Explanation: To be consistent with sklearn conventions, we'll encode the documents as row-vectors stored in a matrix. In this case, each row of the matrix corresponds to a document, and each column corresponds to a term in the vocabulary. For our example this gives us a matrix $M$ of shape $3 \times 6$. The $(d,t)$-entry in $M$ is then the number of times the term $t$ appears in document $d$
Q: Your first task is to write some simple Python code to construct the term-frequency matrix $M$
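For reference, one possible way to fill in the TODO in the loop above is to increment the (document, term) entry each time a vocabulary term is seen:
python
M[ii, V[term]] += 1   # count one more occurrence of this term in document ii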
End of explanation
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.feature_extraction.text import CountVectorizer # import CountVectorizer
vectorizer = CountVectorizer(stop_words = 'english') # initialize the vectorizer
X = vectorizer.fit_transform(D,) # fit to training data and transform to matrix
Explanation: Hopefully your code returns the matrix
$$M =
\left[
\begin{array}{ccccccc}
0 & 0 & 1 & 0 & 1 & 1 \
0 & 0 & 1 & 1 & 0 & 1 \
1 & 1 & 0 & 0 & 1 & 0 \
\end{array}
\right]$$.
Note that the entry in the (2,0) position is $1$ because the first vocabulary word (angeles) appears once in the third document.
OK, let's see how we can construct the same term-frequency matrix in sklearn. We will use something called the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html">CountVectorizer</a> to accomplish this. Let's see some code and then we'll explain how it functions.
To avoid common words, such as "the", we will remove any word that appears in a list of common English words from our analysis. We can do so by typing
stop_words = 'english'
in the CountVectorizer call.
End of explanation
print(vectorizer.vocabulary_)
Explanation: The $\texttt{fit_transform}$ method actually does two things. It fits the model to the training data by building a vocabulary. It then transforms the text in $D$ into matrix form.
If we wish to see the vocabulary you can do it like so
End of explanation
print(X.todense())
Explanation: Note that this is the same vocabulary and indexing that we defined ourselves (just in a different order). Hopefully that means we'll get the same term-frequency matrix. We can print $X$ and check
End of explanation
#get a sense of how different the vectors are
for f in X:
print(euclidean_distances(X[0],f))
Explanation: Yep, they're the same! Notice that we had to convert $X$ to a dense matrix for printing. This is because CountVectorizer actually returns a sparse matrix. This is a very good thing since most vectors in a text model will be extremely sparse, since most documents will only contain a handful of words from the vocabulary.
OK, let's see how we can use the CountVectorizer to transform the test documents into their own term-frequency matrix.
End of explanation
d4 = ["new york new tribune"]
Explanation: OK, now suppose that we have a query document not included in the training set that we want to vectorize.
End of explanation
x4 = vectorizer.transform(d4)
Explanation: We've already fit the CountVectorizer to the training set, so all we need to do is transform the test set documents into a term-frequency vector using the same conventions. Since we've already fit the model, we do the transformation with the $\texttt{transform}$ method:
End of explanation
print(x4.todense())
Explanation: Let's print it and see what it looks like
End of explanation
idf = np.array([np.log(3), np.log(3), np.log(3./2), np.log(3), np.log(3./2), np.log(3./2)])
Xtfidf = np.dot(X.todense(), np.diag(idf))
Explanation: Notice that the query document included the word $\texttt{new}$ twice, which corresponds to the entry in the $(0,2)$-position.
Q: What's missing from $x4$ that we might expect to see from the query document?
<br>
Problem 3: Term Frequency - Inverse Document Frequency
The Bag-of-Words model for text classification is very popular, but let's see if we can do better. Currently we're weighting every word in the corpus by its frequency. It turns out that in text classification there are often features that are not particularly useful predictors for the document class, either because they are too common or too uncommon. Stop-words are extremely common, low-information words like "a", "the", "as", etc. Removing these from documents is typically the first thing done in preparing data for document classification.
Q: Can you think of a situation where it might be useful to keep stop words in the corpus?
Other words that tend to be uninformative predictors are words that appear very very rarely. In particular, if they do not appear frequently enough in the training data then it is difficult for a classification algorithm to weight them heavily in the classification process.
In general, the words that tend to be useful predictors are the words that appear frequently, but not too frequently. Consider the following frequency graph for a corpus.
<img src="figs/feat_freq.png">
The features in column A appear too frequently to be very useful, and the features in column C appear too rarely. One first-pass method of feature selection in text classification would be to discard the words from columns A and C, and build a classifier with only features from column B.
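In sklearn, this kind of first-pass pruning can be sketched with CountVectorizer's document-frequency cutoffs; the particular thresholds below are only illustrative.
python
pruned_vectorizer = CountVectorizer(stop_words='english',
                                    max_df=0.5,  # drop terms appearing in more than half the documents (column A)
                                    min_df=5)    # drop terms appearing in fewer than 5 documents (column C)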
Another common model for identifying the useful terms in a document is the Term Frequency - Inverse Document Frequency (tf-idf) model. Here we won't throw away any terms, but we'll replace their Bag-of-Words frequency counts with tf-idf scores which we describe below.
The tf-idf score is the product of two statistics, term frequency and inverse document frequency
$$\texttt{tfidf(d,t)} = \texttt{tf(d,t)} \times \texttt{idf(t)}$$
The term frequency $\texttt{tf(d,t)}$ is a measure of the frequency with which term $t$ appears in document $d$. The inverse document frequency $\texttt{idf(t)}$ is a measure of how much information the word provides, that is, whether the term is common or rare across all documents. By multiplying the two quantities together, we obtain a representation of term $t$ in document $d$ that weighs how common the term is in the document with how common the word is in the entire corpus. You can imagine that the words that get the highest associated values are terms that appear many times in a small number of documents.
There are many ways to compute the composite terms $\texttt{tf}$ and $\texttt{idf}$. For simplicity, we'll define $\texttt{tf(d,t)}$ to be the number of times term $t$ appears in document $d$ (i.e., Bag-of-Words). We will define the inverse document frequency as follows:
$$
\texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{# documents with term }t}
= \ln ~ \frac{|D|}{|d: ~ t \in d |}
$$
Note that we could have a potential problem if a term comes up that is not in any of the training documents, resulting in a divide by zero. This might happen if you use a canned vocabulary instead of constructing one from the training documents. To guard against this, many implementations will use add-one smoothing in the denominator (this is what sklearn does).
$$
\texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{1 + # documents with term }t}
= \ln ~ \frac{|D|}{1 + |d: ~ t \in d |}
$$
Q: Compute $\texttt{idf(t)}$ (without smoothing) for each of the terms in the training documents from the previous problem
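(For instance, $\texttt{times}$ appears in two of the three training documents, so without smoothing $\texttt{idf(times)} = \ln(3/2) \approx 0.405$, while $\texttt{post}$ appears in only one document, giving $\texttt{idf(post)} = \ln(3) \approx 1.099$.)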
Q: Compute the tf-idf matrix for the training set
End of explanation
row_norms = np.array([np.linalg.norm(row) for row in Xtfidf])
X_tfidf_n = np.dot(np.diag(1./row_norms), Xtfidf)
print(X_tfidf_n)
Explanation: Hopefully you got something like the following:
$$
X_{tfidf} =
\left[
\begin{array}{ccccccccc}
0. & 0. & 0.40546511 & 0. & 0.40546511 & 0.40546511 \
0. & 0. & 0.40546511 & 1.09861229 & 0. & 0.40546511 \
1.09861229 & 1.09861229 & 0. & 0. & 0.40546511 & 0.
\end{array}
\right]
$$
The final step in any VSM method is the normalization of the vectors. This is done so that very long documents do not completely overpower the small and medium length documents.
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
Y = tfidf.fit_transform(D)
print(Y.todense())
Explanation: Let's see what we get when we use sklearn. Sklearn has a vectorizer called TfidfVectorizer which is similar to CountVectorizer, but it computes tf-idf scores.
End of explanation
#load in text
ObamaText = open("data/obama_SOU_2012.txt").readlines()
#create TFIDF matrix
X = vectorizer.fit_transform(ObamaText)
D_tot = X.shape[0]
Xtfidf = np.zeros(X.shape)
for i,col in enumerate(X.T): #loop over rows of X (i.e. paragraphs of text)
#number of lines the word appears in (no need for smoothing here)
freq = np.count_nonzero(col.todense())
#compute theidf
idf = math.log(D_tot/(freq))
#calculate the tf-idf
Xtfidf[:,i:i+1] = X[:,i].todense()*idf
#normalize Xtfidf matrix
row_norms = np.array([np.linalg.norm(row) for row in Xtfidf])
Xtfidf_norm = np.dot(np.diag(1./row_norms),Xtfidf)
#create a list from the dictionary
V_words, V_nums = vectorizer.vocabulary_.keys(), vectorizer.vocabulary_.values()
V_reverse = zip(V_nums,V_words)
V_reverse_dict = dict(V_reverse)
#loop through the paragraphs of the text and print most important word
for i,row in enumerate(Xtfidf_norm):
row_str = " "
row_str = row_str + V_reverse_dict[np.argmax(row)]
#top_words_ind = np.argsort(row)[-5:]
#for ii in top_words_ind:
# row_str = row_str + V_reverse_dict[ii] + " "
print("The top word in paragraph " + str(i) + " is " + row_str)
Explanation: Note that these are not quite the same, because sklearn's implementation of tf-idf uses the add-one smoothing in the denominator for idf.
Okay, now let's see if we can use TFIDF analysis on real text documents!
Run the following code to use this analysis on President Obama's inauguration speech from 2009. It will output what TFIDF thinks are the most important words from each paragraph.
Q: Is the analysis able to pick out the most important words correctly? Why does it sometimes pick the wrong words?
Q: You can do the same analysis for his 2012 State of the Union Speech by replacing the first line of code with "obama_SOU_2012.txt". How does the analysis do here?
Q: Find some other piece of text on your own and do the same analysis here by saving it in a .txt file and entering the name of this file in the first line of code. You can find a big source of speeches at http://www.americanrhetoric.com/newtop100speeches.htm.
End of explanation
import csv
def read_and_clean_data(fname, remove_stops=True):
with open('data/stopwords.txt', 'rt') as f:
stops = [line.rstrip('\n') for line in f]
with open(fname,'rt') as tsvin:
reader = csv.reader(tsvin, delimiter='\t')
labels = []; text = []
for ii, row in enumerate(reader):
labels.append(int(row[0]))
words = row[1].lower().split()
words = [w for w in words if not w in stops] if remove_stops else words
text.append(" ".join(words))
return text, labels
text_train, labels_train = read_and_clean_data('data/labeledTrainData.tsv', remove_stops=True)
text_test, labels_test = read_and_clean_data('data/labeledTestData.tsv', remove_stops=True)
Explanation: <br>
Problem 4: Classifying Semantics in Movie Reviews
The data for this problem was taken from the <a href="https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words">Bag of Words Meets Bag of Popcorn</a> Kaggle competition
In this problem you will use the text from movie reviews to predict whether the reviewer felt positively or negatively about the movie using Bag-of-Words and tf-idf. I've partially cleaned the data and stored it in files called $\texttt{labeledTrainData.tsv}$ and $\texttt{labeledTestData.tsv}$ in the data directory.
End of explanation
labels_train[:4]
Explanation: The current parameters are set to not remove stop words from the text so that it's a bit easier to explore.
Look at a few of the reviews stored in $\texttt{text_train}$ as well as their associated labels in $\texttt{labels_train}$. Can you figure out which label refers to a positive review and which refers to a negative review?
End of explanation
text_train[1]
Explanation: The first review is labeled $1$ and has the following text:
End of explanation
text_train[0]
Explanation: The fourth review is labeled $0$ and has the following text:
End of explanation
text_train, labels_train = read_and_clean_data('data/labeledTrainData.tsv', remove_stops=True)
text_test, labels_test = read_and_clean_data('data/labeledTestData.tsv', remove_stops=True)
cvec = CountVectorizer()
X_bw_train = cvec.fit_transform(text_train)
y_train = np.array(labels_train)
X_bw_test = cvec.transform(text_test)
y_test = np.array(labels_test)
Explanation: Hopefully it's obvious that label 1 corresponds to positive reviews and label 0 to negative reviews!
OK, the first thing we'll do is train a logistic regression classifier using the Bag-of-Words model, and see what kind of accuracy we can get. To get started, we need to vectorize the text into mathematical features that we can use. We'll use CountVectorizer to do the job. (Before starting, I'm going to reload the data and remove the stop words this time)
End of explanation
from sklearn.metrics import accuracy_score
bwLR = LogisticRegression()
bwLR.fit(X_bw_train, y_train)
pred_bwLR = bwLR.predict(X_bw_test)
print("Logistic Regression accuracy with Bag-of-Words: " + str(accuracy_score(y_test, pred_bwLR)))
Explanation: Q: How many different words are in the vocabulary?
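(One quick way to answer this, as a sketch: print(len(cvec.vocabulary_)), or equivalently X_bw_train.shape[1]; the exact count depends on the reviews and the stop-word list.)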
OK, now we'll train a logistic regression classifier on the training set, and test the accuracy on the test set. To do this we'll need to load some kind of accuracy metric from sklearn.
End of explanation
tvec = TfidfVectorizer()
X_tf_train = tvec.fit_transform(text_train)
X_tf_test = tvec.transform(text_test)
tfLR = LogisticRegression()
tfLR.fit(X_tf_train, y_train)
pred_tfLR = tfLR.predict(X_tf_test)
print("Logistic Regression accuracy with tf-idf: " + str(accuracy_score(y_test, pred_tfLR)))
Explanation: OK, so we got an accuracy of around 81% using Bag-of-Words. Now let's do the same tests, but this time with tf-idf features.
End of explanation |
3,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building your Deep Neural Network
Step2: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
<table style="width | Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x)*0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h)*0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1])*0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756]
[-0.00528172 -0.01072969]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.00865408 -0.02301539]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\
m & n & o \
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\
d & e & f \
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \
t \
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
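As a quick numeric sanity check of this broadcasting behaviour (the shapes below are purely illustrative):
python
import numpy as np
W = np.random.randn(3, 3)
X = np.random.randn(3, 4)          # 4 examples
b = np.random.randn(3, 1)          # one bias per unit, broadcast across the 4 columns
print((np.dot(W, X) + b).shape)    # (3, 4)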
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)],
parameters['b' + str(l)], activation = 'relu')
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)],
parameters['b' + str(L)], activation = 'sigmoid')
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = - np.mean(Y * np.log(AL) + (1-Y) * np.log(1-AL))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
Explanation: <table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.17007265 0.2524272 ]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{L}\right)) \tag{7}$$
End of explanation
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = np.dot(dZ, A_prev.T) / m
db = np.mean(dZ, axis = 1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{l}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
End of explanation
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache , 'sigmoid')
### END CODE HERE ###
for l in reversed(range(L - 1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache , 'relu')
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] -= grads['dW' + str(l+1)] * learning_rate
parameters["b" + str(l+1)] -= grads['db' + str(l+1)] * learning_rate
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation |
3,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NOTE
Step1: Select columns for city, airport, latitude and longitude info
- City_Airport_Latitude_Longitude_DataFrame (CALL_DF)
Step3: Create database AIRPORTS
Step5: Fill into AIRPORTS info for top 50 airports
Step7: #2 build table to hold historic weather data
Step8: #3.1 crawler to pull data from Weather Underground
Step9: fetching URLs is slow ...
to speed it up I used multiprocessing
later I realized that it's really I/O bound, so multithreading or asynchronous I/O should perform better
but I already fetched all the data in about 5 hours; I will look into asyncio/multithreading when I have finished the homework
IMPORTANT
Step12: #3.2 push data to the database
First load all the data into a pandas DataFrame
- this took ~ 30s on my 2014 MacBook Pro
- the next step took ~ 20s
Step16: #4 Correlation between weather of cities
Step17: UNCOMMENT the lines below if you want to calculate correlation on your own machine
It took ~4min on my 2014 MacBook Pro
I have already saved data files in the data folder so you can also load them directly. just go to the next cell
Step18: make density plots for correlations
Step20: #5 Examine correlation length
Step21: 1day correlation
Step22: 3 day correlation
Step23: 7 day correlation | Python Code:
import pandas as pd
top_airport_csv = 'hw_5_data/top_airports.csv'
ICAO_airport_csv = 'hw_5_data/ICAO_airports.csv'
top50_df = pd.read_csv(top_airport_csv)
icao_df = pd.read_csv(ICAO_airport_csv)
# merge two data frames to obtain info for the top 50 airports
merged_df = pd.merge(top50_df, icao_df, how='inner', left_on='ICAO', right_on='ident')
Explanation: NOTE:
PART OF PROB4 AND PROB 5 WERE FINISHED AFTER DEADLINE. I DON'T EXPECT SCORE FOR THEM, BUT I APPRECIATE FEEDBACK.
#1 Make a table with geographic info about top 50 airports
Read CSV files into pandas dataframes
End of explanation
call_df = merged_df[['City', 'Airport', 'ICAO', 'latitude_deg', 'longitude_deg']]
call_df.head(3)
Explanation: Select columns for city, airport, latitude and longitude info
- City_Airport_Latitude_Longitude_DataFrame (CALL_DF)
End of explanation
import sqlite3, os.path
# remove database in case there's already one there
!rm hw_5_data/call.db
connection = sqlite3.connect("hw_5_data/call.db")
cursor = connection.cursor()
sql_cmd = CREATE TABLE airports (id INTEGER PRIMARY KEY AUTOINCREMENT,
airport TEXT, city TEXT, icao TEXT, latitude FLOAT, longitude FLOAT)
cursor.execute(sql_cmd)
Explanation: Create database AIRPORTS
End of explanation
for row in call_df.values:
city, airport, icao, lat, lon = row
sql_cmd = INSERT INTO airports (airport, city, icao,
latitude, longitude) VALUES ("{}","{}","{}", {},{}).format(
airport, city, icao, str(lat), str(lon))
cursor.execute(sql_cmd)
connection.commit()
connection.close()
Explanation: Fill into AIRPORTS info for top 50 airports
End of explanation
# remove database in case there's already one there
!rm hw_5_data/weather.db
connection = sqlite3.connect("hw_5_data/weather.db")
cursor = connection.cursor()
sql_cmd = CREATE TABLE weather (id INTEGER PRIMARY KEY AUTOINCREMENT, date DATE,
icao TEXT, min_temp INT, max_temp INT, min_hum INT, max_hum INT, prec FLOAT)
cursor.execute(sql_cmd)
connection.commit()
connection.close()
Explanation: #2 build table to hold historic weather data
End of explanation
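One optional tweak, not part of the original homework: the correlation step further down issues one SELECT ... WHERE icao = ... per city in every pair, so an index on the icao column makes those lookups much faster. A small sketch:
# optional: index icao so the later per-city SELECTs don't scan the whole table
connection = sqlite3.connect("hw_5_data/weather.db")
connection.execute("CREATE INDEX IF NOT EXISTS idx_weather_icao ON weather (icao)")
connection.commit()
connection.close()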
from util import write2csv
# list of top 50 airports
icao_list = call_df['ICAO'].values
#-------------------------------------
# this is just a demo
#
# expect it to be slow
# a faster version is mentioned in the
# next cell
#-------------------------------------
# timerange just 1 day
tr_2010_mar = pd.date_range('20100301', '20100301')
fn = 'temp/2010_3.csv'
# only looked at the top 10 cities
# %time write2csv(tr_2010_mar, icao_list[:10], fn) # uncomment to see demo
Explanation: #3.1 crawler to pull data from Weather Underground
End of explanation
import os.path
def check_files():
tr = pd.date_range('20080101', '20161006')
ok = True
for date in tr:
filename = 'hw_5_data/weather_data/' + date.strftime('%Y') + '/'+ \
date.strftime('%Y%m%d')+'.csv'
if not os.path.isfile(filename):
print(date.strftime('%Y%m%d') + ' is missing.')
ok = False
continue
f = open(filename)
num_lines = sum(1 for line in f)
f.close()
if num_lines != 50:
ok = False
print(date.strftime('%Y%m%d') + ' may be corrupted, number of cities =/= 50.')
if ok: print('no file corruption/missing')
# this takes about 15s
check_files()
Explanation: fetching URLs is slow ...
to speed it up I used multiprocessing
later I realized that it's really I/O bound, so multithreading or asynchronous I/O should perform better
but I already fetched all the data in about 5 hours; I will look into asyncio/multithreading when I have finished the homework
IMPORTANT: please untar the data files in hw_5_data/weather_data
in bash:
cd hw_5_data/weather_data
for i in {2008..2016}
do
tar -xvf $i.tar
done
Then run check_files() to verify that all the data are downloaded (see below)
and that every daily file has 50 lines (corresponding to the 50 cities)
End of explanation
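Since the bottleneck here is network I/O rather than CPU, a thread pool is usually the simpler fix. A minimal sketch, assuming each call of util.write2csv (same signature as in the demo above) independently writes one day's CSV and that the per-year output directories already exist; left commented out because the data have already been fetched:
from concurrent.futures import ThreadPoolExecutor

def fetch_one_day(date):
    # hypothetical per-day wrapper around util.write2csv
    fn = 'hw_5_data/weather_data/{}/{}.csv'.format(date.strftime('%Y'), date.strftime('%Y%m%d'))
    write2csv(pd.date_range(date, date), icao_list, fn)

# with ThreadPoolExecutor(max_workers=16) as pool:
#     list(pool.map(fetch_one_day, pd.date_range('20080101', '20161006')))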
from util import fetch_df
tr = pd.date_range('20080101', '20161006')
all_df = pd.DataFrame()
for date in tr:
# fetch data for that day
if all_df.empty:
all_df = fetch_df(date)
else:
df = fetch_df(date)
all_df = all_df.append(df, ignore_index=True)
# interpolate data to remove NaN
all_df = all_df.fillna(method='pad').fillna(method='bfill')
connection = sqlite3.connect("hw_5_data/weather.db")
cursor = connection.cursor()
#insert data into database
for row in all_df.values:
date, icao, min_temp, max_temp, min_hum, max_hum, prec = row
date = pd.to_datetime(date).strftime('%m/%d/%Y')
sql_cmd = INSERT INTO weather (date, icao, min_temp, max_temp,
min_hum, max_hum, prec) VALUES ("{}","{}",{},{},{},{},{}).format(
date, icao, min_temp, max_temp, min_hum, max_hum, prec)
cursor.execute(sql_cmd)
connection.commit()
connection.close()
Explanation: #3.2 push data to the database
First load all the data into a pandas DataFrame
- this took ~ 30s on my 2014 MacBook Pro
- the next step took ~ 20s
End of explanation
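As a side note, the row-by-row INSERT loop above is the slow part; a sketch of an executemany-based alternative (run instead of that loop, not in addition to it, or the rows would be inserted twice):
# bulk insert of all_df into the weather table with parameter binding
conn = sqlite3.connect("hw_5_data/weather.db")
rows = []
for date, icao, min_temp, max_temp, min_hum, max_hum, prec in all_df.values:
    rows.append((pd.to_datetime(date).strftime('%m/%d/%Y'), icao,
                 float(min_temp), float(max_temp), float(min_hum), float(max_hum), float(prec)))
conn.executemany("INSERT INTO weather (date, icao, min_temp, max_temp, min_hum, max_hum, prec) "
                 "VALUES (?, ?, ?, ?, ?, ?, ?)", rows)
conn.commit()
conn.close()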
import numpy as np
import pandas as pd
import sqlite3
connection = sqlite3.connect("hw_5_data/weather.db")
cursor = connection.cursor()
def temp_prec_corr(n):
find the correlation between any two cities' daily temperature changes (and precipitation) with an N-day shift.
import os.path
# this is the file we are gonna store the correlations
filename = 'hw_5_data/corr_{}.npy'.format(n)
# skip if .npy file already exist
if os.path.isfile(filename):
return
temp_corr_arr = []
prec_corr_arr = []
for (i, city_1) in enumerate(icao_list):
for (j, city_2) in enumerate(icao_list):
sql_cmd = SELECT max_temp, min_temp, prec FROM weather WHERE icao = "{}" .format(city_1)
cursor.execute(sql_cmd)
city_1_info = np.array(cursor.fetchall())
high_temp_1 = np.array([int(temp) for temp in city_1_info[:, 0]])
low_temp_1 = np.array([int(temp) for temp in city_1_info[:, 1]])
#average temperature
temp_1 = (high_temp_1 + low_temp_1)/2
prec_1 = np.array([float(temp) for temp in city_1_info[:, 2]])
# daily change in temperature
temp_change_1 = np.roll(temp_1, 1) - temp_1
# prec_change_1 = np.roll(prec_1, 1) - prec_1
sql_cmd = SELECT max_temp, min_temp, prec FROM weather WHERE icao = "{}" .format(city_2)
cursor.execute(sql_cmd)
city_2_info = np.array(cursor.fetchall())
high_temp_2 = np.array([int(temp) for temp in city_2_info[:, 0]])
low_temp_2 = np.array([int(temp) for temp in city_2_info[:, 1]])
#average temperature
temp_2 = (high_temp_2 + low_temp_2)/2
prec_2 = np.array([float(temp) for temp in city_2_info[:, 2]])
# daily change in temperature
temp_change_2 = np.roll(temp_2, 1) - temp_2
# prec_change_2 = np.roll(prec_2, 1) - prec_2
t_corr = np.corrcoef(temp_change_1, np.roll(temp_change_2, n))[1, 0]
# p_corr = np.corrcoef(prec_change_1, np.roll(prec_change_2,n))[1, 0]
p_corr = np.corrcoef(prec_1, np.roll(prec_2,n))[1, 0]
temp_corr_arr += [t_corr]
prec_corr_arr += [p_corr]
corr_n = [temp_corr_arr, prec_corr_arr]
# save to .npy file for future use
np.save('hw_5_data/corr_{}'.format(n), corr_n)
return corr_n
Explanation: #4 Correlation between weather of cities
End of explanation
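To make the n-day shift concrete, here is what np.roll does on a short series; note that it wraps the last n values around to the front, which slightly contaminates the first n entries of the multi-year series used above (negligible over ~3,200 days, but worth knowing):
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
print(np.roll(b, 1))                        # [6. 2. 3. 4. 5.] -- the last value wraps to the front
print(np.corrcoef(a, np.roll(b, 1))[1, 0])  # Pearson correlation of a with b shifted by one step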
# !rm -rf hw_5_data/corr_*.npy
# import multiprocessing
# pool = multiprocessing.Pool(processes=3)
# %time result = pool.map(temp_prec_corr, [1, 3, 7])
# pool.close()
# pool.join()
# load data from .npy data file
arr = np.load('hw_5_data/corr_1.npy')
temp_corr_1, prec_corr_1 = arr[0, :], arr[1, :]
arr = np.load('hw_5_data/corr_3.npy')
temp_corr_3, prec_corr_3 = arr[0, :], arr[1, :]
arr = np.load('hw_5_data/corr_7.npy')
temp_corr_7, prec_corr_7 = arr[0, :], arr[1, :]
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib as mpl
%matplotlib inline
# mpl.style.use('ggplot')
mpl.rcParams['axes.titlesize'] = 20
Explanation: UNCOMMENT the lines below if you want to calculate correlation on your own machine
It took ~4 min on my 2014 MacBook Pro
I have already saved data files in the data folder, so you can also load them directly; just go to the next cell
End of explanation
fig1, (ax1, ax2) = plt.subplots(1,2, figsize=[10, 5])
# num of city
nc = 50
im1 = ax1.imshow(temp_corr_1.reshape(nc, nc), interpolation='nearest',cmap='rainbow')
ax1.set_title('temp corr 1 day')
im2 = ax2.imshow(prec_corr_1.reshape(nc, nc), interpolation='nearest',cmap='rainbow')
ax2.set_title('precip corr 1 day')
# add color bar
fig1.subplots_adjust(right=0.8)
cbar_ax = fig1.add_axes([0.85, 0.15, 0.05, 0.7])
fig1.colorbar(im1, cax=cbar_ax)
fig3, (ax1, ax2) = plt.subplots(1,2, figsize=[10, 5])
# num of city
nc = 50
im1 = ax1.imshow(temp_corr_3.reshape(nc, nc), interpolation='nearest',cmap='rainbow')
ax1.set_title('temp corr 3 day')
im2 = ax2.imshow(prec_corr_3.reshape(nc, nc), interpolation='nearest',cmap='rainbow')
ax2.set_title('precip corr 3 day')
# add color bar
fig3.subplots_adjust(right=0.8)
cbar_ax = fig3.add_axes([0.85, 0.15, 0.05, 0.7])
fig3.colorbar(im1, cax=cbar_ax)
fig7, (ax1, ax2) = plt.subplots(1,2, figsize=[10, 5])
# num of city
nc = 50
im1 = ax1.imshow(temp_corr_7.reshape(nc, nc), interpolation='nearest',cmap='rainbow')
ax1.set_title('temp corr 7 day')
im2 = ax2.imshow(prec_corr_7.reshape(nc, nc), interpolation='nearest',cmap='rainbow')
ax2.set_title('precip corr 7 day')
# add color bar
fig7.subplots_adjust(right=0.8)
cbar_ax = fig7.add_axes([0.85, 0.15, 0.05, 0.7])
fig7.colorbar(im1, cax=cbar_ax)
Explanation: make density plots for correlations
End of explanation
def get_ntop_corr(corr_pairs, ntop = 10, nc = 50):
parameter
---------
corr_pairs: correlations between all city pairs among the NC cities
for some weather variable.
ntop: number of most correlated city pairs that we want to study
nc: number of cities --> number of pairs are nc^2
return
------
pandas DataFrame containing info for
NTOP most correlated pairs of cities with
city1: 1st city in the pair
city2: 2nd city in the pair (whose weather 'is' predicted)
icao1: ICAO # for the airport near city1
icao2: ICAO # for the airport near city2
distance: distance in km between city1 and city2
corr: correlation coeff
# index of the N top correlated pairs
ntop_ind = corr_pairs.argsort()[-ntop:][::-1]
corr_arr = corr_pairs[ntop_ind]
city1_ind = ntop_ind // nc
city2_ind = ntop_ind % nc
city1_arr = call_df['City'][city1_ind].values
icao1_arr = call_df['ICAO'][city1_ind].values
city2_arr = call_df['City'][city2_ind].values
icao2_arr = call_df['ICAO'][city2_ind].values
lat1_arr = call_df['latitude_deg'][city1_ind].values
lat2_arr = call_df['latitude_deg'][city2_ind].values
lon1_arr = call_df['longitude_deg'][city1_ind].values
lon2_arr = call_df['longitude_deg'][city2_ind].values
from util import lat_lon_2_distance
from itertools import starmap
dist_arr = np.array(list(starmap(lat_lon_2_distance,
zip(lat1_arr, lon1_arr, lat2_arr, lon2_arr))))
diff_lon = np.abs(lon1_arr - lon2_arr)
# build a new dataframe
return pd.DataFrame({'city 1': city1_arr,
'icao 1': icao1_arr,
'city 2': city2_arr,
'icao 2': icao2_arr,
'distance': dist_arr,
'diff_lon': diff_lon,
'corr': corr_arr
})
Explanation: #5 Examine correlation length
End of explanation
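get_ntop_corr above relies on lat_lon_2_distance from util, which is not shown in this notebook; a plausible implementation, assuming it returns the great-circle distance in kilometres, is the standard haversine formula:
import math

def lat_lon_2_distance_sketch(lat1, lon1, lat2, lon2):
    # great-circle distance in km between two (lat, lon) points (haversine formula)
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))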
temp_1_df = get_ntop_corr(temp_corr_1)
prec_1_df = get_ntop_corr(prec_corr_1)
# most correlated cities in temperature variation
temp_1_df
# most correlated cities in precipitation variation
prec_1_df
fig1, [[ax1, ax2], [ax3, ax4]] = plt.subplots(2,2, figsize=[12,10])
ax1.scatter(temp_1_df['distance'], temp_1_df['corr'], s=50)
ax1.set_title('temp corr with 1 day diff')
ax1.set_ylabel('corr')
ax1.set_xlabel('distance')
ax2.scatter(prec_1_df['distance'], prec_1_df['corr'], s=50)
ax2.set_title('precipitation corr with 1 day diff')
ax2.set_ylabel('corr')
ax2.set_xlabel('distance')
ax3.scatter(temp_1_df['diff_lon'], temp_1_df['corr'], s=50)
ax3.set_ylabel('corr')
ax3.set_xlabel('longitude difference')
ax4.scatter(prec_1_df['diff_lon'], prec_1_df['corr'], s=50)
ax4.set_ylabel('corr')
ax4.set_xlabel('longitude difference')
Explanation: 1day correlation
End of explanation
temp_3_df = get_ntop_corr(temp_corr_3)
prec_3_df = get_ntop_corr(prec_corr_3)
# most correlated cities in temperature variation
temp_3_df
# most correlated cities in precipitation variation
prec_3_df
fig3, [[ax1, ax2], [ax3, ax4]] = plt.subplots(2,2, figsize=[12,10])
ax1.scatter(temp_3_df['distance'], temp_3_df['corr'], s=50)
ax1.set_title('temp corr with 3 day diff')
ax1.set_ylabel('corr')
ax1.set_xlabel('distance')
ax2.scatter(prec_3_df['distance'], prec_3_df['corr'], s=50)
ax2.set_title('precipitation corr with 3 day diff')
ax2.set_ylabel('corr')
ax2.set_xlabel('distance')
ax3.scatter(temp_3_df['diff_lon'], temp_3_df['corr'], s=50)
ax3.set_ylabel('corr')
ax3.set_xlabel('longitude difference')
ax4.scatter(prec_3_df['diff_lon'], prec_3_df['corr'], s=50)
ax4.set_ylabel('corr')
ax4.set_xlabel('longitude difference')
Explanation: 3 day correlation
End of explanation
temp_7_df = get_ntop_corr(temp_corr_7)
prec_7_df = get_ntop_corr(prec_corr_7)
# most correlated cities in temperature variation
temp_7_df
# most correlated cities in precipitation variation
prec_7_df
fig7, [[ax1, ax2], [ax3, ax4]] = plt.subplots(2,2, figsize=[12,10])
ax1.scatter(temp_7_df['distance'], temp_7_df['corr'], s=50)
ax1.set_title('temp corr with 7 day diff')
ax1.set_ylabel('corr')
ax1.set_xlabel('distance')
ax2.scatter(prec_7_df['distance'], prec_7_df['corr'], s=50)
ax2.set_title('precipitation corr with 7 day diff')
ax2.set_ylabel('corr')
ax2.set_xlabel('distance')
ax3.scatter(temp_7_df['diff_lon'], temp_7_df['corr'], s=50)
ax3.set_ylabel('corr')
ax3.set_xlabel('longitude difference')
ax4.scatter(prec_7_df['diff_lon'], prec_7_df['corr'], s=50)
ax4.set_ylabel('corr')
ax4.set_xlabel('longitude difference')
Explanation: 7 day correlation
End of explanation |
3,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return np.array(x/255)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
from sklearn import preprocessing
encoder = preprocessing.LabelBinarizer()
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
encoder.fit(x)
encoder.classes_ = np.array(list(range(10)))
return encoder.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
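For reference, a NumPy-only alternative to the LabelBinarizer approach above — each label simply indexes a row of a 10x10 identity matrix:
def one_hot_encode_np(x, n_classes=10):
    # rows of the identity matrix are the one-hot vectors
    return np.eye(n_classes)[np.array(x)]

# one_hot_encode_np([1, 3]) -> [[0,1,0,...,0], [0,0,0,1,0,...,0]]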
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=[None, *image_shape], name = 'x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, shape=[None, n_classes], name = 'y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
input_depth = int(x_tensor.get_shape()[3])
# the typecast to int is needed because in weights->truncated the expected input_depth is int
conv_ksize_height = conv_ksize[0]
conv_ksize_width = conv_ksize[1]
# conv_ksize is (height, width); tf.nn.conv2d takes the filter as [height, width, in_depth, out_depth]
conv_stride_height = conv_strides[0]
conv_stride_width = conv_strides[1]
# Similar attributes for the maxpool layer
pool_ksize_height = pool_ksize[0]
pool_ksize_width = pool_ksize[1]
pool_stride_height = pool_strides[0]
pool_stride_width = pool_strides[1]
weights_shape = [conv_ksize_height, conv_ksize_width, input_depth, conv_num_outputs]
truncated = tf.truncated_normal(weights_shape, mean = 0.0, stddev = 0.05, dtype = tf.float32)
weights = tf.Variable(truncated)
biases = tf.Variable(tf.zeros(conv_num_outputs))
conv_strides = [1, conv_stride_height, conv_stride_width, 1]
layer = tf.nn.conv2d(input = x_tensor, filter = weights, strides = conv_strides, padding = 'SAME')
layer = tf.nn.bias_add(layer, biases)
layer = tf.nn.relu(layer)
# non-linear activation needed
pool_shape = [1, pool_ksize_height, pool_ksize_width, 1]
pool_strides = [1, pool_stride_height, pool_stride_width, 1]
layer = tf.nn.max_pool(layer, pool_shape, pool_strides, padding = 'SAME')
return layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
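A quick shape sanity check helps when picking kernel sizes and strides: with 'SAME' padding the output spatial size is ceil(input / stride), and the channel count equals the number of filters. For the settings used later in conv_net (18 filters, conv and pool strides of (1, 1)) on a 32x32x3 CIFAR image:
import math

def same_out(size, stride):
    # spatial output size under 'SAME' padding
    return math.ceil(size / stride)

print(same_out(32, 1))   # 32 -> conv output is 32x32x18, and max pooling with stride 1 keeps 32x32x18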
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
shape = x_tensor.get_shape().as_list() # returns shape = [None, 10, 30, 6]
# shape = list(x_tensor.get_shape()) # returns shape = [Dimension(None), Dimension(10), Dimension(30), Dimension(6)]
# print('shape',shape)
batch_sz = shape[0] or -1
height = shape[1]
width = shape[2]
depth = shape[3]
return tf.reshape(x_tensor, [batch_sz, height*width*depth])
# return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def connected(x_tensor, num_outputs):
'''Support function for fully_conn and output functions below
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs
- does not apply any activation
'''
batch_sz = x_tensor.get_shape().as_list()[1]
weights = tf.Variable(tf.truncated_normal((batch_sz, num_outputs), mean=0.0, stddev=0.05))
# how to choose the stddev above? How does contrib.layer choose it? What is the default in contrib.layer?
bias = tf.Variable(tf.zeros(num_outputs))
connected_layer = tf.add(tf.matmul(x_tensor, weights), bias)
return connected_layer
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
fully_conn_layer = connected(x_tensor, num_outputs)
fully_conn_layer = tf.nn.relu(fully_conn_layer)
return fully_conn_layer
# return tf.nn.relu(tf.contrib.layers.fully_connected(x_tensor, num_outputs))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return connected(x_tensor, num_outputs)
# return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 18
conv_ksize = (4,4)
conv_strides = (1,1)
pool_ksize = (4,4)
pool_strides = (1,1)
num_outputs = 10 # for the 10 classes
# TODO: try different conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides
network = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
network = tf.nn.dropout(network, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
network = flatten(network)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
network = fully_conn(network, 384)
network = tf.nn.dropout(network, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_network = output(network, num_outputs)
# TODO: return output
return output_network
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
# no return here, this is an execution function
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: x: feature_batch: Batch of Numpy image data
: y: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
: keep_prob: 1.0 # added by me
acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
print('Acc: {} Loss: {}'.format(acc, loss))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 75
batch_size = 512
keep_probability = 0.3
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
3,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HOMO energy prediction with kernel ridge regression
In this notebook we will machine-learn the relationship between molecular structure (represented by the Coulomb matrix CM) and their HOMO energy using kernel regression (KRR).
KRR is a machine learning method that performs regression (fitting). This tutorial shows step by step how to load the data, visualize them, select the hyperparameters, train the model and validate it. We use the QM7 dataset of 7k small organic molecules. The HOMO energies of all molecules were pre-computed with first principles quantum mechanical methods (DFT) to obtain the target data that our model can be trained on. Detailed descriptions and results for a similar dataset (QM9) can be found in A. Stuke, et al. "Chemical diversity in molecular orbital energy predictions with kernel ridge regression." J. Chem. Phys. 150. 204121 (2019).
Setup
Step1: Load and visualize data
At first, we load the data. The input data x is an array that contains all 7k molecules of the QM7 dataset, represented by their Coulomb matrices. The output data y is a list that contains the corresponding (pre-computed) HOMO energies.
Step2: Print the Coulomb matrix of a random molecule in the dataset.
Step3: Visualize the Coulomb matrix of the random molecule.
Step4: Visualize the target data by plotting the distribution of HOMO energies in the dataset.
Step5: Before dividing the dataset into training and test set, we shuffle the data. Data are often stored in a certain order, and simply taking the first part for training and the second for testing would not result in a well trained model, since the training set would not represent the test data well (and vice versa).
Step6: Now, we divide the data into training and test set.
Step7: Check that the training data resemble the test data well by plotting the distribution of HOMO energies for both sets. The distributions should be centered around the same mean value and have the same shape.
Step8: Training
In the training phase we use a kernel function to measure the distance between all pairs of molecules (represented by their Coulomb matrices) in the training set. We here employ one of two kernels, the Gaussian kernel or the Laplacian kernel. The Gaussian kernel is given by
\begin{equation}
k_{Gaussian}(\boldsymbol{x},\boldsymbol{x}')=e^{-\frac{||{\boldsymbol{x}-\boldsymbol{x}'}||_2^2}{2\gamma^2}},
\end{equation}
which employs the Euclidean distance as similarity measure. The parameter $\gamma$ is defined as $\frac{1}{2\sigma^2}$, where $\sigma$ is the standard deviation of the Gaussian kernel (kernel width). The Laplacian kernel is given by
\begin{equation}
k_{Laplacian}(\boldsymbol{x},\boldsymbol{x}')=e^{-\frac{||{\boldsymbol{x}-\boldsymbol{x}'}||_1}{\gamma}},
\end{equation}
which uses the 1-norm as similarity measure. Here, $\gamma$ is defined as $\frac{1}{\sigma}$, where $\sigma$ is the kernel width of the Laplacian kernel.
In the KRR training phase with $N$ training molecules, the machine learns the relationship between the molecules (represented by their Coulomb matrix) and their corresponding (pre-computed) HOMO energies. It does so by employing a function $f(\boldsymbol{x})$ that maps a training molecule $\boldsymbol{x}$ to its reference HOMO energy
Step9: Grid search results
Print out the average validation errors and corresponding hyperparameter combinations
Step10: Next, we visualize the grid search results by plotting a heatmap.
Step11: Testing
With the best combination of hyperparameters, the model is once again trained on the entire training set (this is done automatically in scikit-learn). Then, with the best combination of hyperparameters, predictions are made on the test set to evaluate the final model. With the fitted regressions weights $\omega_i$ and the selected hyperparameter $\gamma$ (kernel width), the final model is used to predict the energies of the test molecules. The energy of a particular test molecule $\boldsymbol{x}$ is predicted by computing the weighted sum of kernel contributions $k(\boldsymbol{x}, \boldsymbol{x}_i)$ between the test molecule $\boldsymbol{x}$ and each of the $N$ molecules $\boldsymbol{x}_i$ in the training set (sum over $N$) | Python Code:
# initial imports
import numpy as np
import math, random
import matplotlib.pyplot as plt
import pandas as pd
import json
import seaborn as sns
from scipy.sparse import load_npz
from matplotlib.colors import LinearSegmentedColormap
from sklearn.model_selection import GridSearchCV
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score
Explanation: HOMO energy prediction with kernel ridge regression
In this notebook we will machine-learn the relationship between molecular structure (represented by the Coulomb matrix CM) and their HOMO energy using kernel regression (KRR).
KRR is a machine learning method that performs regression (fitting). This tutorial shows step by step how to load the data, visualize them, select the hyperparameters, train the model and validate it. We use the QM7 dataset of 7k small organic molecules. The HOMO energies of all molecules were pre-computed with first principles quantum mechanical methods (DFT) to obtain the target data that our model can be trained on. Detailed descriptions and results for a similar dataset (QM9) can be found in A. Stuke, et al. "Chemical diversity in molecular orbital energy predictions with kernel ridge regression." J. Chem. Phys. 150. 204121 (2019).
Setup
End of explanation
x = load_npz("./data/qm7/cm.npz").toarray()
y = np.genfromtxt("./data/qm7/HOMO.txt")
print("Number of molecules:", len(y))
Explanation: Load and visualize data
At first, we load the data. The input data x is an array that contains all 7k molecules of the QM7 dataset, represented by their Coulomb matrices. The output data y is a list that contains the corresponding (pre-computed) HOMO energies.
End of explanation
rand_mol = random.randint(0, len(y))
print(x[rand_mol])
Explanation: Print the Coulomb matrix of a random molecule in the dataset.
End of explanation
shape = (23, 23)
mat = x[rand_mol].reshape(shape)
plt.figure()
plt.figure(figsize = (6,6))
plt.imshow(mat, origin="upper", cmap='rainbow', vmin=-15, vmax=90, interpolation='nearest')
plt.colorbar(fraction=0.046, pad=0.04).ax.tick_params(labelsize=20)
plt.axis('off')
plt.show()
Explanation: Visualize the Coulomb matrix of the random molecule.
End of explanation
plt.hist(y, bins=20, density=False, facecolor='blue')
plt.xlabel("Energy [eV]")
plt.ylabel("Number of molecules")
plt.title("Distribution of HOMO energies")
plt.show()
## mean value of distribution
print("Mean value of HOMO energies in QM9 dataset: %0.2f eV" %np.mean(y))
Explanation: Visualize the target data by plotting the distribution of HOMO energies in the dataset.
End of explanation
## shuffle the data
c = list(zip(x, y))
random.shuffle(c)
x, y = zip(*c)
x = np.array(x)
y = np.array(y)
Explanation: Before dividing the dataset into training and test set, we shuffle the data. Data are often stored in a certain order, and simply taking the first part for training and the second for testing would not result in a well trained model, since the training set would not represent the test data well (and vice versa).
End of explanation
# decide how many samples to take from the database for training and testing
n_train = 2000
n_test = 1000
# split data in training and test
# take first n_train molecules for training
x_train = x[0:n_train]
y_train = y[0:n_train]
# take the next n_test data for testing
x_test = x[n_train:n_train + n_test]
y_test = y[n_train:n_train + n_test]
Explanation: Now, we divide the data into training and test set.
End of explanation
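The manual shuffle-and-slice in the two cells above can also be done in a single call with scikit-learn's helper; a one-line alternative for reference (it shuffles internally):
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=2000, test_size=1000, shuffle=True)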
plt.hist(y_test, bins=20, density=False, alpha=0.5, facecolor='red', label='test set')
plt.hist(y_train, bins=20, density=False, alpha=0.5, facecolor='gray', label='training set')
plt.xlabel("Energy [eV]")
plt.ylabel("Number of molecules")
plt.legend()
plt.show()
## mean value of distributions
print("Mean value of HOMO energies in training set: %0.2f eV" %np.mean(y_train))
print("Mean value of HOMO energies in test set: %0.2f eV" %np.mean(y_test))
Explanation: Check that the training data resemble the test data well by plotting the distribution of HOMO energies for both sets. The distributions should be centered around the same mean value and have the same shape.
End of explanation
# set up grids for alpha and gamma hyperparameters.
# first value: lower bound; second value: upper bound;
# third value: number of points to evaluate (here set to '4' --> exponents -5, -4, -3 and -2 are evaluated)
# --> make sure to change third value as well when changing the bounds!
alpha = np.logspace(-5, -2, 4)
gamma = np.logspace(-5, -2, 4)
cv_number = 5 ## choose into how many parts training set is divided for cross-validation
kernel = 'laplacian' # select kernel function here ('rbf': Gaussian kernel, 'laplacian': Laplacian kernel)
scoring_function = 'neg_mean_absolute_error' # it is called "negative" because scikit-learn interprets
# highest scoring value as best, but we want small errors
## define settings for grid search routine in scikit-learn with above defined grids as input
grid_search = GridSearchCV(KernelRidge(), #machine learning method (KRR here)
[{'kernel':[kernel],'alpha': alpha, 'gamma': gamma}],
cv = cv_number,
scoring = scoring_function,
verbose=1000) ## produces detailed output statements of grid search
# routine so we can see what is computed
# call the fit function in scikit-learn which fits the Coulomb matrices in the training set
# to their corresponding HOMO energies.
grid_search.fit(x_train, y_train)
Explanation: Training
In the training phase we use a kernel function to measure the distance between all pairs of molecules (represented by their Coulomb matrices) in the training set. We here employ one of two kernels, the Gaussian kernel or the Laplacian kernel. The Gaussian kernel is given by
\begin{equation}
k_{Gaussian}(\boldsymbol{x},\boldsymbol{x}')=e^{-\frac{||{\boldsymbol{x}-\boldsymbol{x}'}||_2^2}{2\gamma^2}},
\end{equation}
which employs the Euclidean distance as similarity measure. The parameter $\gamma$ is defined as $\frac{1}{2\sigma^2}$, where $\sigma$ is the standard deviation of the Gaussian kernel (kernel width). The Laplacian kernel is given by
\begin{equation}
k_{Laplacian}(\boldsymbol{x},\boldsymbol{x}')=e^{-\frac{||{\boldsymbol{x}-\boldsymbol{x}'}||_1}{\gamma}},
\end{equation}
which uses the 1-norm as similarity measure. Here, $\gamma$ is defined as $\frac{1}{\sigma}$, where $\sigma$ is the kernel width of the Laplacian kernel.
In the KRR training phase with $N$ training molecules, the machine learns the relationship between the molecules (represented by their Coulomb matrix) and their corresponding (pre-computed) HOMO energies. It does so by employing a function $f(\boldsymbol{x})$ that maps a training molecule $\boldsymbol{x}$ to its reference HOMO energy:
\begin{equation}
f(\boldsymbol{x}) = \sum_{i=1}^N \omega_i k(\boldsymbol{x}, \boldsymbol{x}_i) = HOMO^{ref},
\end{equation}
For a given training molecule $\boldsymbol{x}$, the distance to each molecule in the training set is computed by employing the kernel function $k$ (either Gaussian or Laplacian). Each kernel contribution (distance) is then weighted by a regression weight $\omega_i$. The above function is thus given by the weighted sum of kernel contributions (sum over $N$ training molecules). The purpose of training is to fit the regression weight $\omega_i$ so that HOMO$_{ref}$ is matched for each training molecule. In practice, the machine solves the minimization problem
\begin{equation}
\underset{\omega}{min} \sum_{i=1}^N (f(\boldsymbol{x}_i) - HOMO^{ref}_i)^2 + \alpha \boldsymbol{\omega}^T \mathbf{K} \boldsymbol{\omega}.
\end{equation}
for a vector $\boldsymbol{\omega} \in \mathbb{R}^N = (\omega_1, \omega_2, ..., \omega_N)$ of regression weights. In KRR, the penalty term $ \alpha \boldsymbol{\omega}^T \mathbf{K} \boldsymbol{\omega}$ is added to the minimization problem in order to avoid over- and underfitting. Overfitting occurs when the model learns the training data too well, even the noise and other unimportant details. The model is unable to generalize on unseen data and therefore yields high prediction errors on the test data. Underfitting occurs when the model is too simple and does not learn the training data at all, and therefore is not able to predict test data well either. Both behaviours can be avoided by tuning the parameter $\alpha \in \left[0,1\right]$ to a reasonable value. This has do be done separately from training. Both the regularization parameter $\alpha$ and the kernel width $\gamma$ are so called hyperparameters. Hyperparameters cannot be learned during training and have to be selected beforehand. However, it is not always obvious how to choose these hyperparameters and it often requires intuition or rules of thumb. We here employ a cross-validated grid search in order to find the best values for these two hyperparameters.
In grid search, a part of the training set is split off as validation set. We set up a grid of pre-defined hyperparameter values and train the machine on the remaining training set, for each possible combination of $\alpha$ and $\gamma$ values. We validate each possible combination by making predictions on the validation set. The two hyperparameter values that yield the best performance (lowest error) are then selected for the final model to make predictions on the test set.
In cross-validation, the roles of training and validation sets alternate. As described above, a part from the training set is split off as validation set. After training one combination of hyperparameters on the remaining training set and validating on the validation set, the validation set becomes the training set and vice versa, and the model is trained on the new training set and validated on the new validation set for the same combination of hyperparameters. The ratio can be varied, for example in 5-fold cross-validation, the training set is split in 5 equal parts. For each combination of hyperparameters, the model is trained on 80% of the data and validated on the other 20%. Then the roles of training and validation set rotate until each part has served as validation set exactly once. The final validation error for one particular combination of hyperparameters is computed as the mean from all 5 errors on the 5 validation sets. The combination with lowest average error is chosen for the final model.
The cross-validated grid search routine is implemented in scikit-learn.
End of explanation
means = grid_search.cv_results_['mean_test_score']
stds = grid_search.cv_results_['std_test_score']
for mean, std, params in zip(-means, stds, grid_search.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
Explanation: Grid search results
Print out the average validation errors and corresponding hyperparameter combinations
End of explanation
results = pd.DataFrame(grid_search.cv_results_)
#pd.DataFrame(grid_search.cv_results_)
pvt = pd.pivot_table(results, values='mean_test_score',
index='param_gamma', columns='param_alpha')
heatmap = sns.heatmap(-pvt, annot=True, cmap='viridis', cbar_kws={'label': "Mean absolute error [eV]"})
figure = heatmap.get_figure()
plt.show()
print("The best combinations of parameters are %s with a score of %0.3f eV on the validation set."
% (grid_search.best_params_, -grid_search.best_score_))
Explanation: Next, we visualize the grid search results by plotting a heatmap.
End of explanation
# predicted HOMO energies for all test molecules
y_pred = grid_search.predict(x_test) # scikit-learn automatically takes the best combination
# of hyperparameters from grid search
print("Mean absolute error on test set: %0.3f eV" %(np.abs(y_pred-y_test)).mean())
# do the regression plot
plt.plot(y_test, y_pred, 'o')
plt.plot([np.min(y_test),np.max(y_test)], [np.min(y_test),np.max(y_test)], '-')
plt.xlabel('reference HOMO energy [eV]')
plt.ylabel('predicted HOMO energy [eV]')
plt.show()
print("R^2 score on test set: %.3f" % r2_score(y_test, y_pred))
Explanation: Testing
With the best combination of hyperparameters, the model is once again trained on the entire training set (this is done automatically in scikit-learn). Then, with the best combination of hyperparameters, predictions are made on the test set to evaluate the final model. With the fitted regressions weights $\omega_i$ and the selected hyperparameter $\gamma$ (kernel width), the final model is used to predict the energies of the test molecules. The energy of a particular test molecule $\boldsymbol{x}$ is predicted by computing the weighted sum of kernel contributions $k(\boldsymbol{x}, \boldsymbol{x}_i)$ between the test molecule $\boldsymbol{x}$ and each of the $N$ molecules $\boldsymbol{x}_i$ in the training set (sum over $N$):
\begin{equation}
f(x) = \sum_{i=1}^N \omega_i k(\boldsymbol{x}, \boldsymbol{x}_i) = HOMO^{pred},
\end{equation}
The deviation of the predicted HOMO energies to the true reference HOMO energies yields the final error of the model. We compute the mean absolute error between predicted and reference HOMO energies for all $M$ test molecules (sum over $M$):
\begin{equation}
\sum_{i=1}^M \frac{1}{M} \big|HOMO^{pred} - HOMO^{ref}\big|
\end{equation}
End of explanation |
3,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In order to widen Open Context's interoperability with other scientific information systems, we are starting to cross-reference Open Context published biological taxonomy categores with GBIF (Global Biodiversity Information Facility, https
Step6: Now define some fuctions that we'll be using over and over.
Step7: Now that we have a main working dataset, we need to add cannonical and vernacular names to the GBIF IDs.
Step8: Now that we have added GBIF names to rows that have GBIF IDs, we will save our interim results.
Step9: At this point, we will still be missing GBIF IDs for many rows of EOL records. So now, we will use the GBIF search API to find related GBIF IDs. | Python Code:
import json
import os
import requests
from time import sleep
import numpy as np
import pandas as pd
# Get the root_path for this jupyter notebook repo.
repo_path = os.path.dirname(os.path.abspath(os.getcwd()))
# Path for the (gzip compressed) CSV data dump from EOL
# with GBIF names and EOL IDs.
eol_gbif_names_path = os.path.join(
repo_path, 'files', 'eol', 'eol-gbif.csv.gz'
)
# Path for the CSV data from Open Context of all EOL
# URIs and IDs currently referenced by Open Context.
oc_eol_path = os.path.join(
repo_path, 'files', 'eol', 'oc-eol-uris.csv'
)
# Path for the CSV data that has EOL URIs used by Open Context
# with GBIF URIs and missing GBIF URIs
oc_eol_gbif_w_missing_path = os.path.join(
repo_path, 'files', 'eol', 'oc-eol-gbif-with-missing.csv'
)
# Path for CSV data that has EOL URIs used by Open Context and
# corresponding GBIF URIs and Names.
oc_eol_gbif_path = os.path.join(
repo_path, 'files', 'eol', 'oc-eol-gbif.csv'
)
# Path for CSV data that has EOL URIs used by Open Context
# but no corresponding GBIF URIs.
oc_eol_no_gbif_path = os.path.join(
repo_path, 'files', 'eol', 'oc-eol-no-gbif.csv'
)
Explanation: In order to widen Open Context's interoperability with other scientific information systems, we are starting to cross-reference Open Context published biological taxonomy categores with GBIF (Global Biodiversity Information Facility, https://gbif.org) identifiers.
To start this process, this Jupyter notebooks will find GBIF identifiers that correspond with EOL (Encyclopedia of Life, https://eol.org) identifiers already used by Open Context.
The datasets used and created by this notebook are stored in the /files/eol directory. The files used and created by this notebook include:
eol-gbif.csv.gz (This source of the data is: https://opendata.eol.org/dataset/identifier-map, dated 2019-12-20. The data is filtered to only include records where the resource_id is 767, which corresponds to GBIF.)
oc-eol-uris.csv (This is a CSV dump of from the Open Context, current as of 2020-01-15, link_entities model where URIs started with 'http://eol.org'. It represents all of the EOL entities that Open Context uses to cross-reference project-specific biological taxonomic concepts.)
oc-eol-gbif-with-missing.csv (This is the scratch, working data file that has oc-eol-uri.csv data, with joined records from eol-gbif.csv. Execution of this notebook creates this file and periodically updates this file with names and new IDs resulting from requests to the GBIF API.)
oc-eol-gbif.csv (This notebook generates this file which describes equivalences between the EOL items used by Open Context and corresponding GBIF identifiers.)
oc-eol-no-gbif.csv (This notebook generates this file which describes EOL items used by Open Context that lack corresponding GBIF identifiers. These records will probably need manual curation.)
End of explanation
def save_result_files(
df,
path_with_gbif=oc_eol_gbif_path,
path_without_gbif=oc_eol_no_gbif_path
):
Saves files for outputs with and without GBIF ids
# Save the interim results with matches
gbif_index = ~df['gbif_id'].isnull()
df_ok_gbif = df[gbif_index].copy().reset_index(drop=True)
print('Saving EOL matches with GBIF...')
df_ok_gbif.to_csv(path_with_gbif, index=False)
no_gbif_index = df['gbif_id'].isnull()
df_ok_gbif = df[no_gbif_index].copy().reset_index(drop=True)
print('Saving EOL records without GBIF matches...')
df_ok_gbif.to_csv(path_without_gbif, index=False)
def get_gbif_cannonical_name(gbif_id, sleep_secs=0.25):
Get the cannonical name from the GBIF API for an ID
sleep(sleep_secs)
url = 'https://api.gbif.org/v1/species/{}'.format(gbif_id)
print('Get URL: {}'.format(url))
r = requests.get(url)
r.raise_for_status()
json_r = r.json()
return json_r.get('canonicalName')
def get_gbif_vernacular_name(gbif_id, lang_code='eng', sleep_secs=0.25):
Get the first vernacular name from the GBIF API for an ID
sleep(sleep_secs)
url = 'http://api.gbif.org/v1/species/{}/vernacularNames'.format(
gbif_id
)
print('Get URL: {}'.format(url))
r = requests.get(url)
r.raise_for_status()
json_r = r.json()
vern_name = None
for result in json_r.get('results', []):
if result.get('language') != lang_code:
continue
vern_name = result.get("vernacularName")
if vern_name is not None:
break
return vern_name
def add_names_to_gbif_ids(
df,
limit_by_method=None,
save_path=oc_eol_gbif_w_missing_path
):
Adds names to GBIF ids where those names are missing
gbif_index = ~df['gbif_id'].isnull()
df.loc[gbif_index, 'gbif_uri'] = df[gbif_index]['gbif_id'].apply(
lambda x: 'https://www.gbif.org/species/{}'.format(int(x))
)
df.to_csv(save_path, index=False)
# Now use the GBIF API to fetch cannonical names for GBIF items
# where we do not yet have those names.
need_can_name_index = (df['gbif_can_name'].isnull() & gbif_index)
if limit_by_method:
need_can_name_index &= (df['gbif_rel_method'] == limit_by_method)
df.loc[need_can_name_index, 'gbif_can_name'] = df[need_can_name_index]['gbif_id'].apply(
lambda x: get_gbif_cannonical_name(int(x))
)
df.to_csv(save_path, index=False)
# Now use the GBIF API to fetch vernacular names for GBIF items
# where we do not yet have those names.
need_vern_name_index = (df['gbif_vern_name'].isnull() & gbif_index)
if limit_by_method:
need_vern_name_index &= (df['gbif_rel_method'] == limit_by_method)
df.loc[need_vern_name_index, 'gbif_vern_name'] = df[need_vern_name_index]['gbif_id'].apply(
lambda x: get_gbif_vernacular_name(int(x))
)
df.to_csv(save_path, index=False)
return df
def get_gbif_id_by_name(name, sleep_secs=0.25, allow_alts=False):
Get a GBIF ID by seatching a name via the GBIF API
sleep(sleep_secs)
if ' ' in name:
# Only use the first 2 parts of a name with a space
name_sp = name.split(' ')
# This also turns the space into a '+', good for URL enconding.
if len(name_sp[0]) <= 2 or len(name_sp[1]) <= 2:
return np.nan
name = name_sp[0] + '+' + name_sp[1]
url = 'https://api.gbif.org/v1/species/match?verbose=true&dataset_key=d7dddbf4-2cf0-4f39-9b2a-bb099caae36c'
url += '&name={}'.format(name)
print('Get URL: {}'.format(url))
r = requests.get(url)
r.raise_for_status()
json_r = r.json()
id = json_r.get('usageKey')
if id is not None:
return int(id)
elif not allow_alts:
# We don't have an ID, but we're not yet allowing alternatives
return np.nan
# Below is for multiple equal matches
if not allow_alts or json_r.get('matchType') != 'NONE':
# We don't have an exact match
return np.nan
alts = json_r.get('alternatives', [])
if len(alts) == 0:
# We don't have alternatives
return np.nan
# Chose the first alternative.
id = alts[0].get('usageKey')
if not id:
return np.nan
return int(id)
if not os.path.isfile(oc_eol_gbif_w_missing_path):
# We don't have the oc_eol_gbif_with missing data
# so we need to make it.
df_eol_gbif_names = pd.read_csv(eol_gbif_names_path)
df_oc_eol = pd.read_csv(oc_eol_path, encoding='utf-8')
df_oc_eol.rename(columns={'id': 'page_id'}, inplace=True)
df = df_oc_eol.merge(df_eol_gbif_names, on=['page_id'], how='left')
print('We have {} rows of EOL uris in OC to relate to GBIF'.format(
len(df.index)
)
)
df.sort_values(by=['page_id'], inplace=True)
# Now pull out the GBIF integer ID
df['gbif_id'] = pd.to_numeric(
df['resource_pk'],
errors='coerce',
downcast='integer'
)
df['gbif_rel_method'] = np.nan
df['gbif_uri'] = np.nan
df['gbif_can_name'] = np.nan
df['gbif_vern_name'] = np.nan
# Now note that the rows where the gbif_id is not null
# come from the EOL-GBIF names dataset
gbif_index = ~df['gbif_id'].isnull()
df.loc[gbif_index, 'gbif_rel_method'] = 'EOL-GBIF-names'
df.to_csv(oc_eol_gbif_w_missing_path, index=False)
# Get our working dataframe, now that we know that it
# must have been initially created.
df = pd.read_csv(oc_eol_gbif_w_missing_path)
Explanation: Now define some fuctions that we'll be using over and over.
End of explanation
# Use GBIF API calls to add names to records with GBIF IDs but currently
# missing names.
df = add_names_to_gbif_ids(df, save_path=oc_eol_gbif_w_missing_path)
Explanation: Now that we have a main working dataset, we need to add cannonical and vernacular names to the GBIF IDs.
End of explanation
# Save the Open Context EOL URIs with clear GBIF matches,
# as well as a file without matches
save_result_files(df)
Explanation: Now that we have added GBIF names to rows that have GBIF IDs, we will save our interim results.
End of explanation
# Now try to look up GBIF items where we don't have
# clear matches.
look_ups = [
# Tuples are:
# (field_for_name, allow_alts, gbif_rel_method,),
('preferred_canonical_for_page', False, 'EOL-pref-page-GBIF-exact-search',),
('preferred_canonical_for_page', True, 'EOL-pref-page-GBIF-search-w-alts',),
('label', False, 'EOL-OC-label-GBIF-exact-search',),
('label', True, 'EOL-OC-label-GBIF-search-w-alts',),
]
# Now iterate through these look_up configs.
for field_for_name, allow_alts, gbif_rel_method in look_ups:
gbif_index = ~df['gbif_id'].isnull()
ok_eol = df[gbif_index]['uri'].unique().tolist()
no_gbif_index = (df['gbif_id'].isnull() & ~df['uri'].isin(ok_eol))
# Get the index where there's a preferred_canonical_for_page (EOL) name, but
# where we have no GBIF id yet.
no_gbif_index_w_name = (~df[field_for_name].isnull() & no_gbif_index)
# Use the GBIF API to lookup GBIF IDs.
df.loc[no_gbif_index_w_name, 'gbif_id'] = df[no_gbif_index_w_name][field_for_name].apply(
lambda x: get_gbif_id_by_name(x, allow_alts=allow_alts)
)
# The new GBIF IDs will have a gbif_rel_method of null. Make sure that we record
# the gbif_rel_method at this point.
new_gbif_id_index = (~df['gbif_id'].isnull() & df['gbif_rel_method'].isnull())
df.loc[new_gbif_id_index, 'gbif_rel_method'] = gbif_rel_method
# Save the interim results
df.to_csv(oc_eol_gbif_w_missing_path, index=False)
# Now add names to the rows where we just found new IDs.
df = add_names_to_gbif_ids(
df,
limit_by_method=gbif_rel_method,
save_path=oc_eol_gbif_w_missing_path
)
# Save the interim results, again.
df.to_csv(oc_eol_gbif_w_missing_path, index=False)
# Save the interim results with matches to a file
# and without matches to another file.
save_result_files(df)
Explanation: At this point, we will still be missing GBIF IDs for many rows of EOL records. So now, we will use the GBIF search API to find related GBIF IDs.
End of explanation |
3,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="https
Step1: Vérifiez quelle est votre version de Python
Step2: Exécutez cette cellule pour appliquer le style CSS utilisé dans ce notebook
Step3: Dans les séquences de travail, vous rencontrerez certains logos
Step4: Types/classes
Chaque valeur/objet possède un type/classe, qui indique ses capacités.
Les principaux types/classes natives sont
Step5: <div class="alert alert-block alert-danger travail">
**Ex2.1** Faites de même afin d'obtenir un type `str` contenant `réseaux`.
</div>
<div class="alert alert-block alert-danger travail">
**Ex2.2** Faites de même afin d'obtenir un type `bool`.
</div>
Pour les autres types, nous verrons ultérieurement comment procéder.
Specificité des entiers
Pour faciliter la lisibilité des grands nombres entiers, il est possible d'utiliser le underscore _ pour faire des séparations.
Par exemple 123_456_789 = 123456789.
Valeur maximale représentable
En Python 2, la taille des int était limité à 32 bits, il était donc possible possible de stocker des nombres entiers de –2 147 483 648 to 2 147 483 647.
Avec les entiers longs, il est possible d'étendre la taille à 63 bits, soit de –9 223 372 036 854 775 808 to 9 223 372 036 854 775 807. En Python 3, toutes ces limitations sont finies et les entiers peuvent être plus grands que 64 bits. Il est ainsi possible de représenter des nombres arbitrairement grands, par exemple un googol (un suivi de 100 zeros), qui était le nom prévu initialement de Google, avant de trouver un nom plus simple à épeler
Step6: Base et numération
Les entiers peuvent être rentrés directement dans différentes bases usuelles en utilisant un préfixe
Step7: <div class="alert alert-block alert-danger travail">
**Ex3.0** Combien vaut le nombre `"BAD"` de base 16 en décimal ?
</div>
Il est aussi possible de convertir des octets (de type bytes) en entiers, en spécifiant l'ordre de lecture (big ou little indian) et si les entiers sont signés ou non. C'est particulièrement utile pour lire des trames réseaux ou des fichiers binaires. Par exemple
Step8: <div class="alert alert-block alert-danger travail">
**Ex3.1** Combien vaut l'octet `'9E'` encodé en big indian et signé en décimal ?
</div>
Les conversions depuis les entiers
Il est possible de faire l'inverse et de convertir un nombre décimal en binaire avec bin() en octal avec oct() et en hexadécimal avec hex(). Par exemple
Step9: Opérateurs numériques
Les opérateurs permettent de réaliser des opérations sur les valeurs/objets.
Les opérateurs numériques usuels sont
Step10: <div class="alert alert-block alert-danger travail">
**Ex4.3 - `+` entre `int` et`str`**
Utilisez l'opérateur `+` entre un `int`et un `str` et affichez le résultat obtenu, et son type.
</div>
Step11: <div class="alert alert-block alert-info bilan">
**IMPORTANT**
Step12: <div class="alert alert-block alert-danger travail">
**Ex4.5 - Que font `/` et `//` ?**
Essayez de distinguer les comportements de `/` et `//`
Step13: <div class="alert alert-block alert-danger travail">
**Ex5.1 - Multiple de ?**
8751212 est-il un multiple de 3 ?</div>
<div class="alert alert-block alert-danger travail">
**Ex5.2 - Modulogâteau** 🍰🍰🍰🍰🍰
20 parts de gâteaux, 7 convives, combien de parts de gâteau par personne et combien de parts restantes ?</div>
Opérateurs d'affectation
Les opérateurs d'affectation sont
Step14: <div class="alert alert-block alert-danger travail">
**Ex6.1 - La sucre syntaxique**
Effectuez les mêmes opérations en utilisant les opérateurs `-=` et `/=`.</div>
Step15: <div class="alert alert-block alert-info bilan">
**IMPORTANT
Step16: Les opérateurs logiques usuels sont and, or, not.
Ils permettent d'associer une ou plusieurs valeurs de vérité et d'obtenir une valeur de vérité.
Par exemple
Step17: Exercice 6 - Quête de vérité
<div class="alert alert-block alert-danger travail">
**Ex7.0 - Minority report**
Ecrivez une expression utilisant opérateurs de comparaison et/ou opérateurs logiques, et permettant d'afficher si `bob` est mineur (`True`) ou majeur (`False`). **Attention, votre programme doit renvoyer `False` en cas de valeur négative**.
**MERCI DE BIEN LIRE CET ENONCE!**
Exemple
Step18: <div class="alert alert-block alert-danger travail">
**Ex7.1 - Minority report 2**
Même exercice, mais vous n'avez le droit d'utiliser que des opérateurs de comparaison (pas d'opérateurs logiques).</div>
<div class="alert alert-block alert-info bilan">
**IMPORTANT**
Step19: <div class="alert alert-block alert-danger travail">
**Ex7.3 - Même signe**
Ecrivez une expression permettant d'afficher si `n` et `m` sont bien de même signe.</div>
Step20: <div class="alert alert-block alert-danger travail">
**Ex7.4 - Table de vérité du OU**
Complétez l'affichage de la table de vérité (toutes les possibilités d'association de booléens) de l'opérateur OU.</div>
Step21: <img src="https | Python Code:
print("C'est parti") # affiche le texte en dessous
# essayez de modifier le texte et ré-exécuter
Explanation: <img src="https://live.staticflickr.com/3089/3086874879_5eeb26eda6_w_d.jpg" align=center>
SAÉ 03 - TP1 - Tour d'horizon de Python
Bienvenue sur le Jupyter pour préparer la SAÉ Traitement numérique du signal du département RT de Vélizy. Les notebooks sont une adaptation de ceux proposés par le département info et de leur profs d'info géniaux.
Dans cette fiche, nous allons passer rapidement en revue certains concepts de python que nous développerons de façon plus approfondie dans la SAÉ. Cela vous permettra de bien maîtriser les notions dont vous aurez besoin ce semestre.
En route !
Ce print permet d'afficher du texte, il est dans une cellule que vous pouvez exécuter et modifier :
End of explanation
# Exécutez cette cellule !
import platform
print("Vous travaillez actuellement sur la version", platform.python_version())
Explanation: Vérifiez quelle est votre version de Python :
End of explanation
# Exécutez cette cellule !
from IPython.core.display import HTML
styles = "<style>\n.travail {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2018/01/04/16/53/building-3061124_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n\n.bilan {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2016/10/18/19/40/anatomy-1751201_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n</style>"
HTML(styles)
Explanation: Exécutez cette cellule pour appliquer le style CSS utilisé dans ce notebook :
End of explanation
l'avion = "rafale"
tire-bouchon = True
7ici = "Vélizy"
Explanation: Dans les séquences de travail, vous rencontrerez certains logos :
<div class="travail alert alert-block alert-danger"> indique un exercice à réaliser </div>
<div class="bilan alert alert-block alert-info"> indique un point important qui nécessite une réponse de votre part et sera ensuite demandé lors du bilan de la séquence</div>
Valeurs, objets, identifiants
Un identifiant est un lien vers un espace de stockage de données.
Lorsqu'on écrit a = 2 python crée automatiquement un espace de stockage pour la valeur 2, qu'on peut atteindre avec l'identifiant a. Dans les langages de programmation plus anciens, on parlait de variable...et comme nous avons pris de mauvaises habitudes on risque de continuer à utiliser ce terme. ;)
Un objet est une valeur qui possède des super-pouvoirs, et dont nous expliquerons le fonctionnement plus tard.
Retenez juste que depuis la version 3 de python, tout est objet.
Règles pour les identifiants
Un identifiant permet donc d'accéder à une variable/objet. Cet identifiant doit respecter:
* des contraintes grammaticales (obligatoires)
* des conventions d'écriture (facultatives)
Les règles de bon usage de python sont décrites dans des documents officiels nommés PEP. Vous trouverez les contraintes et conventions d'écriture dans la PEP8.
Les principales contraintes grammaticales pour les identifiants sont :
* pas de chiffre en début d'identifiant
* pas de symboles de ponctuation, d'espace, d'apostrophe, de guillements, de slash; seul underscore _ est autorisé
Et les principales conventions d'écriture sont :
* variables/objets en minuscules : truc, et pas Truc ni TRUC
* constantes (qui n'existent pas formellement en python) en majuscules : TRUC
* des séparations de mots par majuscules (Camel case) : nbBilles, distVilleDepart
* des espaces de séparation afin d'alléger la lecture: a = 2
Exercice 1 - identifiants
<div class="travail alert alert-block alert-danger">Corrigez les problèmes ci-dessous</div>
End of explanation
a = 12
print(type(a))
Explanation: Types/classes
Chaque valeur/objet possède un type/classe, qui indique ses capacités.
Les principaux types/classes natives sont :
* int : type entier relatif, pas de valeur maximale depuis python3
* float : nombres décimaux
* str : chaînes de caractères (lettres, mots, phrases, délimités par des \' ou des \" )
* bool : booléens (True ou False)
* list : listes, ou tableaux (symbolisés par des [ ] )
* dict : dictionnaires (symbolisés par des { } )
* set : ensembles au sens mathématique
* tuples : couples, triplets, n-uplets (ex: (5, 2) ou (3, 9, 2) )
<div class="alert alert-block alert-info bilan">
**TRES IMPORTANT** : [la documentation officielle de python](https://docs.python.org/fr/3/) en français vous aidera énormément, prenez l'habitude de la consulter afin d'avoir les informations exactes !
**ATTENTION**, veillez bien à ce que la version de la doc python (en haut à gauche) corresponde à celle que vous utilisez.
</div>
Exercice 2 - Types
<div class="alert alert-block alert-danger travail">
**Ex2.0** Dans l'exemple ci-dessous, modifiez la valeur de `a` afin d'obtenir un type `float`.
</div>
End of explanation
googol = 10_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000
print(googol)
Explanation: <div class="alert alert-block alert-danger travail">
**Ex2.1** Faites de même afin d'obtenir un type `str` contenant `réseaux`.
</div>
<div class="alert alert-block alert-danger travail">
**Ex2.2** Faites de même afin d'obtenir un type `bool`.
</div>
Pour les autres types, nous verrons ultérieurement comment procéder.
Specificité des entiers
Pour faciliter la lisibilité des grands nombres entiers, il est possible d'utiliser le underscore _ pour faire des séparations.
Par exemple 123_456_789 = 123456789.
Valeur maximale représentable
En Python 2, la taille des int était limité à 32 bits, il était donc possible possible de stocker des nombres entiers de –2 147 483 648 to 2 147 483 647.
Avec les entiers longs, il est possible d'étendre la taille à 63 bits, soit de –9 223 372 036 854 775 808 to 9 223 372 036 854 775 807. En Python 3, toutes ces limitations sont finies et les entiers peuvent être plus grands que 64 bits. Il est ainsi possible de représenter des nombres arbitrairement grands, par exemple un googol (un suivi de 100 zeros), qui était le nom prévu initialement de Google, avant de trouver un nom plus simple à épeler :
End of explanation
# depuis du binaire
a = int("0101_1111_0101", 2)
print(a)
# depuis la base 7
a = int("263", 7)
print(a)
Explanation: Base et numération
Les entiers peuvent être rentrés directement dans différentes bases usuelles en utilisant un préfixe :
- 0b : système binaire, par exemple 0b0110 en base 2 vaut 6 en base 10
- 0o : système octal, par exemple 0o42 en base 8 vaut 34 en base 10
- rien : système décimal, c'est celui utilisé habituellement. 42 vaut 42 en base 10
- 0x : système hexadécimal, par exemple 0x1313C en base 16 vaut 78140
On peut mentionner qu'il est possible d'utiliser la notation scientifique ($a \times 10^n$), par exemple 1.5e3 = 1500.0 et 5e-2 = 0.005, mais attention ce sont des float et pas des int.
Les conversions vers les entiers
On peut convertir un nombre en int (entier décimal) à partir de n'importe quelle base avec int("nombre", base) où le nombre est à indiquer sous forme de str. Par exemple :
End of explanation
octet = bytes.fromhex('20')
a = int.from_bytes(octet, byteorder='little', signed=False)
print(a)
Explanation: <div class="alert alert-block alert-danger travail">
**Ex3.0** Combien vaut le nombre `"BAD"` de base 16 en décimal ?
</div>
Il est aussi possible de convertir des octets (de type bytes) en entiers, en spécifiant l'ordre de lecture (big ou little indian) et si les entiers sont signés ou non. C'est particulièrement utile pour lire des trames réseaux ou des fichiers binaires. Par exemple
End of explanation
a = 42
print("En binaire, 42 = ", bin(a))
print("En octal, 42 = ", oct(a))
print("En hexadécimal, 42 = ", hex(a))
Explanation: <div class="alert alert-block alert-danger travail">
**Ex3.1** Combien vaut l'octet `'9E'` encodé en big indian et signé en décimal ?
</div>
Les conversions depuis les entiers
Il est possible de faire l'inverse et de convertir un nombre décimal en binaire avec bin() en octal avec oct() et en hexadécimal avec hex(). Par exemple :
End of explanation
a = 'réseaux' + 'télécom'
Explanation: Opérateurs numériques
Les opérateurs permettent de réaliser des opérations sur les valeurs/objets.
Les opérateurs numériques usuels sont : + - * / // % **
Leur comportement dépend des types concernés.
Exercices 4 - Opérateurs standards
<div class="alert alert-block alert-danger travail">
**Ex4.0 - `+` entre `int`**
Utilisez l'opérateur `+` entre deux `int` et affichez le type du résultat obtenu.
</div>
<div class="alert alert-block alert-danger travail">
**Ex4.1 - `+` entre `float` et `int`**
Utilisez l'opérateur `+` entre un `float`et un `int` et affichez le résultat obtenu, et son type.
</div>
<div class="alert alert-block alert-info bilan">
**IMPORTANT** : lorsqu'on utilise des opérateurs, il existe un mécanisme de conversion implicite qui...
</div>
<div class="alert alert-block alert-danger travail">
**Ex4.2 - `+` entre `str`**
Utilisez l'opérateur `+` entre deux `str` et affichez le résultat obtenu, et son type.
</div>
End of explanation
a = 1 + "fini"
Explanation: <div class="alert alert-block alert-danger travail">
**Ex4.3 - `+` entre `int` et`str`**
Utilisez l'opérateur `+` entre un `int`et un `str` et affichez le résultat obtenu, et son type.
</div>
End of explanation
a = 8 * "simple,basique,"
Explanation: <div class="alert alert-block alert-info bilan">
**IMPORTANT** : Pour l'opérateur `+`, nous pouvons en conclure que...</div>
<div class="alert alert-block alert-danger travail">
**Ex4.4 - `*` entre`str` et `int`**
Utilisez maintenant l'opérateur `*` entre un `str`et un `int` et affichez le résultat obtenu, et son type.
</div>
End of explanation
n = 35
Explanation: <div class="alert alert-block alert-danger travail">
**Ex4.5 - Que font `/` et `//` ?**
Essayez de distinguer les comportements de `/` et `//` :
* quelles opérations réalisent-ils ?
* à quel(s) type(s) peuvent-ils être associés?
</div>
<div class="alert alert-block alert-info bilan">
**IMPORTANT** : les opérateurs `/` et `//` réalisent respectivement des opérations de...
Ils s'appliquent aux valeurs de type...</div>
<div class="alert alert-block alert-danger travail">
**Ex4.6 - Que fait l'opérateur** `**` **?**
</div>
<div class="alert alert-block alert-info bilan">
**IMPORTANT** : l'opérateur `**` permet de...</div>
Exercice 5 - modulo
L'opérateur modulo % permet d'obtenir le reste de la division entière.
<div class="alert alert-block alert-danger travail">
**Ex5.0 - Pair ?**
Quel calcul effectuer afin de savoir si `n` est pair ?</div>
End of explanation
a = 18
Explanation: <div class="alert alert-block alert-danger travail">
**Ex5.1 - Multiple de ?**
8751212 est-il un multiple de 3 ?</div>
<div class="alert alert-block alert-danger travail">
**Ex5.2 - Modulogâteau** 🍰🍰🍰🍰🍰
20 parts de gâteaux, 7 convives, combien de parts de gâteau par personne et combien de parts restantes ?</div>
Opérateurs d'affectation
Les opérateurs d'affectation sont : = += -+ *+ /= //* %=.
Le = permet évidemment d'associer une valeur/objet à un identificateur.
Les autres opérateurs d'affectation sont en fait du sucre syntaxique: ils ne sont pas indispensables mais simplifient les écritures.
Exercice 5 - Modifier une valeur
<div class="alert alert-block alert-danger travail">
**Ex6.0 - La classique**
Dans le code suivant, en utilisant les opérateurs standards `-` et `/`, effectuez des opérations diminuant `a` de 4 puis réduisant de moitié sa valeur. </div>
End of explanation
a = 18
Explanation: <div class="alert alert-block alert-danger travail">
**Ex6.1 - La sucre syntaxique**
Effectuez les mêmes opérations en utilisant les opérateurs `-=` et `/=`.</div>
End of explanation
a = 18
print(a == 12)
Explanation: <div class="alert alert-block alert-info bilan">
**IMPORTANT :** l'opérateur `+=` permet de remplacer...</div>
Opérateurs de comparaison et opérateurs logiques
Les opérateurs de comparaison sont : ==, <, <=, >=, >, !=.
Associés à des expressions (comme 2 * a + 1 ou 5.2), ils permettent d'obtenir une valeur de vérité (True ou False). Par exemple:
End of explanation
a = 18
b = 12
print(a >= 18 and b != 5)
Explanation: Les opérateurs logiques usuels sont and, or, not.
Ils permettent d'associer une ou plusieurs valeurs de vérité et d'obtenir une valeur de vérité.
Par exemple:
End of explanation
bob = 17
Explanation: Exercice 6 - Quête de vérité
<div class="alert alert-block alert-danger travail">
**Ex7.0 - Minority report**
Ecrivez une expression utilisant opérateurs de comparaison et/ou opérateurs logiques, et permettant d'afficher si `bob` est mineur (`True`) ou majeur (`False`). **Attention, votre programme doit renvoyer `False` en cas de valeur négative**.
**MERCI DE BIEN LIRE CET ENONCE!**
Exemple:
12 : `True`
-2 : `False` </div>
End of explanation
n = 12
Explanation: <div class="alert alert-block alert-danger travail">
**Ex7.1 - Minority report 2**
Même exercice, mais vous n'avez le droit d'utiliser que des opérateurs de comparaison (pas d'opérateurs logiques).</div>
<div class="alert alert-block alert-info bilan">
**IMPORTANT** : en python, on peut écrire des égalités, des inégalités, mais également des...</div>
<div class="alert alert-block alert-danger travail">
**Ex7.2 - Chiffre pair**
Ecrivez une expression permettant d'afficher si `n` est bien un __chiffre__ pair.
Exemple: 12 est pair mais n'est pas un chiffre donc `False` , 3 est un chiffre impair donc `False` </div>
End of explanation
n = 12
m = -2
Explanation: <div class="alert alert-block alert-danger travail">
**Ex7.3 - Même signe**
Ecrivez une expression permettant d'afficher si `n` et `m` sont bien de même signe.</div>
End of explanation
print(True, "or", True, "=", True or True)
# etc...
Explanation: <div class="alert alert-block alert-danger travail">
**Ex7.4 - Table de vérité du OU**
Complétez l'affichage de la table de vérité (toutes les possibilités d'association de booléens) de l'opérateur OU.</div>
End of explanation
# 6 - 2
# 6 - 3.2
# 6 * 4.3
# 5 // 2
# 5 / 2
# 6 / 2
# 6 % 2
# "hello" + "ça va ?"
# "hello" * 3
# 2 < 4
# (2 < 4) or (x == 2) # donnez une valeur à x
# not (2 < 4 and False)
# 2 <= x < 34 # donnez une valeur à x
Explanation: <img src="https://cdn.pixabay.com/photo/2021/04/13/09/50/road-6175186_960_720.jpg" width=350>
Bilan
<div class="alert alert-block alert-danger travail">
**Petit bilan sur les types et opérateurs vus**
Vous devez maintenant être capables d'évaluer la valeur obtenue et de reconnaître son type une fois que les expressions suivantes sont évaluées. Essayez de deviner, puis vérifiez.</div>
End of explanation |
3,597 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
how to read a .csv file using python
| Python Code::
import pandas as pd
df = pd.read_csv('data.csv')
df.head()
|
3,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implement the sorting algorithm you came up with in pseudocode with Python
Test the sorting algorithm with a list of 10, 100, 1000 random numbers and compare the result using the %time to time your code and submit your results in code comments
Input
Step1: binary sorting
Step2: sort_list(list10) was 29 µs, so this one is slower
Step3: sort_list() was total | Python Code:
import random
list10 = []
for x in range(10):
list10.append(random.randrange(100))
list100 = []
for x in range(100):
list100.append(random.randrange(100))
list1000 = []
for x in range(1000):
list1000.append(random.randrange(100))
def sort_list(old_list):
def find_new_index(old_i):
for new_i in range(1, len(new_list)):
if new_list[new_i] > old_list[old_i]:
return new_i
new_list = [old_list[0]]
for old_i in range(1, len(old_list)):
if old_list[old_i] <= new_list[0]:
new_list.insert(0, old_list[old_i])
elif old_list[old_i] >= new_list[len(new_list)-1]:
new_list.insert(len(new_list), old_list[old_i])
else:
new_list.insert(find_new_index(old_i), old_list[old_i])
return new_list
%time sort_list(list10)
%time sort_list(list100)
%time sort_list(list1000)
Explanation: Implement the sorting algorithm you came up with in pseudocode with Python
Test the sorting algorithm with a list of 10, 100, 1000 random numbers and compare the result using the %time to time your code and submit your results in code comments
Input: a list of integer values
Operation: sort the list
Output: a sorted list of values
Note: Don't worry about checking data types, assume they'll always be integers but design for other error conditions
End of explanation
def bsort_list(old_list):
new_list = [old_list[0]]
def find_new_index(old_i):
start_index = 0
end_index = len(new_list) - 1
while end_index - start_index > 1:
middle_index = int((end_index - start_index) / 2 + start_index)
if old_list[old_i] == new_list[start_index]:
new_i = start_index
return new_i
elif old_list[old_i] == new_list[end_index]:
new_i = end_index
return new_i
elif old_list[old_i] == new_list[middle_index]:
new_i = middle_index
return new_i
elif old_list[old_i] < new_list[middle_index]:
end_index = middle_index
else:
start_index = middle_index
new_i = end_index
return new_i
for old_i in range(1, len(old_list)):
if old_list[old_i] < new_list[0]:
new_list.insert(0, old_list[old_i])
elif old_list[old_i] > new_list[len(new_list) - 1]:
new_list.insert(len(new_list), old_list[old_i])
else:
new_list.insert(find_new_index(old_i), old_list[old_i])
return new_list
print(list10)
print(bsort_list(list10))
%time bsort_list(list10)
Explanation: binary sorting
End of explanation
%time bsort_list(list100)
Explanation: sort_list(list10) was 29 µs, so this one is slower
End of explanation
%time bsort_list(list1000)
Explanation: sort_list() was total: 586 µs, so this one is slower
End of explanation |
3,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
「%%bigquery」に続いてSQLを記述するとBigQueryにクエリを投げることができます
例えば、WebUIから実行した「重複なしでバイクステーションの数をカウントする」クエリは以下のように実行します
Step1: 同じように、WebUIから実行した各種クエリを実行してみます。
営業しているバイクステーション
Step2: ユーザーの課金モデル
Step3: バイクの借り方の傾向
Step4: 結果の解釈(一例)
Central Parkの南に地下鉄の駅がある
観光客がCentral Parkの観光に利用している
12 Ave & W 40 St => West St & Chambers St
通勤での利用(居住区からオフィス街への移動)
南北方面ではなく東西方面の移動が多い
地下鉄は南北方向に駅がある
NY在住者は自転車で東西方向に移動して、南北方向に地下鉄を利用する傾向がある
単純にBigQueryに対してクエリを実行するだけではなく、データの簡易的な可視化などの機能も提供されます。
利用者の調査
最も利用者が多いstart_station_name="Central Park S & 6 Ave", end_station_name="Central Park S & 6 Ave"の利用時間を調査します。
%%bigqueryコマンドに続いて変数名を渡すことで、BigQueryの結果をpandasのDataFrameとして保存することができます。
Step5: Pythonによるデータ可視化
データの概要を掴むためにヒストグラム(データのばらつきを確認するための図)を描きます。 | Python Code:
%%bigquery
SELECT
COUNT(DISTINCT station_id) as cnt
FROM
`bigquery-public-data.new_york.citibike_stations`
Explanation: 「%%bigquery」に続いてSQLを記述するとBigQueryにクエリを投げることができます
例えば、WebUIから実行した「重複なしでバイクステーションの数をカウントする」クエリは以下のように実行します
End of explanation
%%bigquery
SELECT
COUNT(station_id) as cnt
FROM
`bigquery-public-data.new_york.citibike_stations`
WHERE
is_installed = TRUE
AND is_renting = TRUE
AND is_returning = TRUE
Explanation: 同じように、WebUIから実行した各種クエリを実行してみます。
営業しているバイクステーション
End of explanation
%%bigquery
SELECT
usertype,
gender,
COUNT(gender) AS cnt
FROM
`bigquery-public-data.new_york.citibike_trips`
GROUP BY
usertype,
gender
ORDER BY
cnt DESC
Explanation: ユーザーの課金モデル
End of explanation
%%bigquery
SELECT
start_station_name,
end_station_name,
COUNT(end_station_name) AS cnt
FROM
`bigquery-public-data.new_york.citibike_trips`
GROUP BY
start_station_name,
end_station_name
ORDER BY
cnt DESC
Explanation: バイクの借り方の傾向
End of explanation
%%bigquery utilization_time
SELECT
starttime, stoptime,
TIMESTAMP_DIFF(stoptime, starttime, MINUTE) as minute,
usertype, birth_year, gender
FROM
`bigquery-public-data.new_york.citibike_trips`
WHERE
start_station_name = 'Central Park S & 6 Ave' and end_station_name = 'Central Park S & 6 Ave'
# utilization_timeの中身の確認
utilization_time
Explanation: 結果の解釈(一例)
Central Parkの南に地下鉄の駅がある
観光客がCentral Parkの観光に利用している
12 Ave & W 40 St => West St & Chambers St
通勤での利用(居住区からオフィス街への移動)
南北方面ではなく東西方面の移動が多い
地下鉄は南北方向に駅がある
NY在住者は自転車で東西方向に移動して、南北方向に地下鉄を利用する傾向がある
単純にBigQueryに対してクエリを実行するだけではなく、データの簡易的な可視化などの機能も提供されます。
利用者の調査
最も利用者が多いstart_station_name="Central Park S & 6 Ave", end_station_name="Central Park S & 6 Ave"の利用時間を調査します。
%%bigqueryコマンドに続いて変数名を渡すことで、BigQueryの結果をpandasのDataFrameとして保存することができます。
End of explanation
# 必要となるライブラリのインポート及び警告が表示されないような設定
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
# ヒストグラムの描画
utilization_time['minute'].hist(bins=range(0,100,2))
Explanation: Pythonによるデータ可視化
データの概要を掴むためにヒストグラム(データのばらつきを確認するための図)を描きます。
End of explanation |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.