| Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
6,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fine Tune Language Model
The results of the above query can be downloaded as a csv file from this link
Step2: Filter Labels By YAML File
Step3: Explore The Data
Count Labels
Filter Issues Again To Remove Those w/o Labels occurring at least 50 times
Step4: Remaining Issues
Step5: Number of Issues By Time (after filtering)
(Issues were filtered out in previous years due to having deprecated issue labels)
Step6: Sig/ Labels
Parse Issue Bodies
Step7: Join the labels back onto the parsed text
Step8: Create Classifier w/ Pre-trained Encoder
The pretrained Encoder comes from the language model
Step9: Manual Learning Rate Annealing
Step10: Unfreeze and keep training
Step11: Measure Performance on Validation Set
Step12: Notes | Python Code:
import os
import torch
from torch.cuda import empty_cache
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="3"
import pandas as pd
import numpy as np
import re
pd.set_option('max_colwidth', 1000)
df = pd.read_csv('https://storage.googleapis.com/issue_label_bot/k8s_issues/000000000000.csv')
df.labels = df.labels.apply(lambda x: eval(x))
# remove target leakage from the Kubernetes issue bodies: strip the bot label commands
df['body'] = df.body.apply(lambda x: re.sub(r'(/sig|/kind|/status|/triage|/priority) \S+', '', str(x)))
df['last_time'] = pd.to_datetime(df.last_time)
Explanation: Fine Tune Language Model
The results of the above query can be downloaded as a csv file from this link:
https://storage.googleapis.com/issue_label_bot/k8s_issues/000000000000.csv
End of explanation
import base64
import requests
import yaml
def get_current_labels(url="https://raw.githubusercontent.com/kubernetes/test-infra/master/label_sync/labels.yaml"):
    """Get list of valid issue labels (b/c labels get deprecated over time).
    See: https://kubernetes.slack.com/archives/C1TU9EB9S/p1561570627363100
    """
req = requests.get(url)
yml = yaml.safe_load(req.content)
return [x['name'] for x in yml['default']['labels']]
current_labels = get_current_labels()
# remove deprecated labels
df.labels = df.labels.apply(lambda x: [l for l in x if l in current_labels])
# filter out issues without any labels
df = df[df.labels.apply(lambda x: x != [])]
print(f'Number of labeled issues after filtering: {df.shape[0]:,}')
df.head(1)
Explanation: Filter Labels By YAML File
End of explanation
from collections import Counter
c = Counter()
for row in df.labels:
c.update(row)
min_threshold = 50
min_threshold_labels = [k for k in c if c[k] >= min_threshold]
print(f'{len(min_threshold_labels)} labels that occur at least {min_threshold} times.')
df['labels'] = df.labels.apply(lambda x: [l for l in x if l in min_threshold_labels])
df = df[df.labels.apply(lambda x: x != [])]
print(f'Number of labeled issues after filtering again: {df.shape[0]:,}')
Explanation: Explore The Data
Count Labels
Filter Issues Again To Remove Those w/o Labels occurring at least 50 times
End of explanation
for l in min_threshold_labels:
print(f'{l}: {c[l]}')
Explanation: Remaining Issues
End of explanation
df['year'] = df.last_time.apply(lambda x: x.year)
df.groupby('year')['body'].count()
Explanation: Number of Issues By Time (after filtering)
(Issues were filtered out in previous years due to having deprecated issue labels)
End of explanation
from inference import InferenceWrapper, pass_through
from sklearn.model_selection import train_test_split
parsed_df = InferenceWrapper.process_df(df)
Explanation: Sig/ Labels
Parse Issue Bodies
End of explanation
assert parsed_df.shape[0] == df.shape[0]
ml_df = pd.concat([df.reset_index(drop=True), parsed_df], axis=1)[['text', 'labels']]
# must delimit the labels by something (here using space) for fastai
ml_df['labels'] = ml_df.labels.apply(lambda x: ' '.join(x))
assert len(ml_df) == len(parsed_df) == len(df)
ml_df.head(2)
ml_df.to_hdf('ml_df.hdf', key='ml_df')
Explanation: Join the labels back onto the parsed text
End of explanation
from fastai.text.data import TextClasDataBunch
from inference import InferenceWrapper, pass_through
from fastai.text import text_classifier_learner
from sklearn.model_selection import train_test_split
ml_df = pd.read_hdf('ml_df.hdf')
train_df, val_df = train_test_split(ml_df, train_size=.8, random_state=1234)
print(f' # of training rows: {len(train_df):,}\n # of validation rows: {len(val_df):,}')
train_label_set = set()
for labels in train_df.labels:
    train_label_set.update(labels.split())
val_label_set = set()
for labels in val_df.labels:
    val_label_set.update(labels.split())
# make sure no label appears in only one of the training or validation sets
diff_set = train_label_set ^ val_label_set
assert not diff_set
from fastai.text.transform import Tokenizer
tokenizer = Tokenizer(pre_rules=[pass_through], n_cpus=31)
from fastai.basic_train import load_learner
model_path='/ds/lang_model/models_22zkdqlr/'
model_file_name='trained_model_22zkdqlr.hdf'
learn = load_learner(path=model_path, file=model_file_name)
data_multi_label = TextClasDataBunch.from_df(path='/ds/multi_class_model/',
train_df=train_df,
valid_df=val_df,
tokenizer=tokenizer,
text_cols='text',
label_cols='labels',
label_delim=' ',
vocab=learn.data.vocab,
bs=32)
data_multi_label.save()
from fastai.text.models import AWD_LSTM, awd_lstm_lm_config
emb_sz=800
qrnn=False
bidir=False
n_layers=4
n_hid=2400
awd_lstm_lm_config.update(dict(emb_sz=emb_sz, qrnn=qrnn, bidir=bidir, n_layers=n_layers, n_hid=n_hid))
awd_lstm_lm_config.pop('tie_weights', None)
awd_lstm_lm_config.pop('out_bias', None)
tcl = text_classifier_learner(data=data_multi_label,
pretrained=False,
arch=AWD_LSTM,
config=awd_lstm_lm_config)
tcl.load_encoder('trained_model_encoder_22zkdqlr')
tcl.freeze()
tcl.lr_find()
tcl.recorder.plot()
from torch.cuda import empty_cache
empty_cache()
tcl.fit_one_cycle(3, max_lr=.1)
Explanation: Create Classifier w/ Pre-trained Encoder
The pretrained Encoder comes from the language model
End of explanation
tcl.fit(epochs=1, lr=slice(.0004))
Explanation: Manual Learning Rate Annealing
End of explanation
tcl.freeze_to(-2)
tcl.fit(epochs=1, lr=slice(.0001))
tcl.fit(epochs=1, lr=slice(.0004))
classifier_model_path = tcl.save(file='classifier_best_model',
return_path=True)
classifier_model_path
Explanation: Unfreeze and keep training
End of explanation
val_preds = tcl.get_preds()
val_preds2 = val_df.text.apply(lambda x: tcl.predict(x)[2].cpu().numpy())
val_preds2_matrix = np.stack(val_preds2.values)
val_proba = val_preds[0].cpu().numpy()
val_proba.shape
val_df.head()
class_list = tcl.data.classes
assert len(class_list) == val_proba.shape[1]
i = 10
idx = np.argmax(val_preds2_matrix[i, :])
print(f'predicted label: {class_list[idx]}')
print(f'ground truth: {val_df.iloc[i]}')
val_scores = {}
for i, lbl in enumerate(class_list):
ground_truth = val_df.labels.apply(lambda x: lbl in x).values
predicted_probs = val_preds2_matrix[:, i]
val_scores[lbl] = {'yhat': predicted_probs, 'y': ground_truth}
from sklearn.metrics import roc_auc_score as auc
auc_scores = []
labels = []
for lbl in val_scores:
auc_scores.append(auc(val_scores[lbl]['y'], val_scores[lbl]['yhat']))
labels.append(lbl)
assert len(auc_scores) == len(labels)
score_df = pd.DataFrame({'label':labels, 'auc': auc_scores})
score_df
score_df.to_hdf('score_df.hdf', key='score_df')
score_df = pd.DataFrame({'label': labels,
                         'auc': auc_scores})
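# Note: `compare_df` below is assumed to hold per-label AUC scores for both this model
# ('deep') and a baseline, with a 'category' column distinguishing them; its construction
# is not shown in this excerpt.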
pivot = compare_df.pivot(index='label', columns='category', values='auc')
pivot['winner'] = pivot.apply(lambda x: 'deep' if x.deep > x.baseline else 'baseline', axis=1)
pivot['abs diff'] = pivot.apply(lambda x: abs(x.deep - x.baseline), axis=1)
pivot['label count'] = [c[x] for x in pivot.index.values]
pivot.sort_values(by=['label count'], ascending=False)
Explanation: Measure Performance on Validation Set
End of explanation
pred = tcl.predict(val_df.text.iloc[1])
pred
val_df.labels.iloc[1]
tcl.data.classes[torch.argmax(pred[1]).item()]
pred_proba = [(v,k) for k, v in zip(tcl.data.classes, pred[2].data.tolist())]
pred_proba.sort(reverse=True)
pred_proba
Explanation: Notes: How To Do Model Inference
End of explanation |
6,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PARTIAL ORDER PLANNER
A partial-order planning algorithm is significantly different from a total-order planner.
The way a partial-order plan works enables it to take advantage of problem decomposition and work on each subproblem separately.
It works on several subgoals independently, solves them with several subplans, and then combines the plan.
<br>
A partial-order planner also follows the least commitment strategy, where it delays making choices for as long as possible.
Variables are not bound unless it is absolutely necessary and new actions are chosen only if the existing actions cannot fulfil the required precondition.
<br>
Any planning algorithm that can place two actions into a plan without specifying which comes first is called a partial-order planner.
A partial-order planner searches through the space of plans rather than the space of states, which makes it perform better for certain problems.
<br>
<br>
Let's have a look at the PartialOrderPlanner class.
Step1: We will first describe the data-structures and helper methods used, followed by the algorithm used to find a partial-order plan.
Each plan has the following four components
Step2: We observe that in the given partial order plan, Remove(Flat, Axle) and Remove(Spare, Trunk) are in the same set.
This means that the order of performing these actions does not affect the final outcome.
That aside, we also see that the PutOn(Spare, Axle) action has to be performed after both the Remove actions are complete, which seems logically consistent.
Step3: We see that this plan does not have flexibility in selecting actions, ie, actions should be performed in this order and this order only, to successfully reach the goal state.
Step4: This plan again doesn't have constraints in selecting socks or shoes.
As long as both socks are worn before both shoes, we are fine.
Notice however, there is one valid solution,
<br>
LeftSock -> LeftShoe -> RightSock -> RightShoe
<br>
that the algorithm could not find as it cannot be represented as a general partially-ordered plan but is a specific total-order solution.
Runtime differences
Let's briefly take a look at the running time of all the three algorithms on the socks_and_shoes problem. | Python Code:
from planning import *
from notebook import psource
psource(PartialOrderPlanner)
Explanation: PARTIAL ORDER PLANNER
A partial-order planning algorithm is significantly different from a total-order planner.
The way a partial-order plan works enables it to take advantage of problem decomposition and work on each subproblem separately.
It works on several subgoals independently, solves them with several subplans, and then combines the plan.
<br>
A partial-order planner also follows the least commitment strategy, where it delays making choices for as long as possible.
Variables are not bound unless it is absolutely necessary and new actions are chosen only if the existing actions cannot fulfil the required precondition.
<br>
Any planning algorithm that can place two actions into a plan without specifying which comes first is called a partial-order planner.
A partial-order planner searches through the space of plans rather than the space of states, which makes it perform better for certain problems.
<br>
<br>
Let's have a look at the PartialOrderPlanner class.
End of explanation
st = spare_tire()
pop = PartialOrderPlanner(st)
pop.execute()
Explanation: We will first describe the data-structures and helper methods used, followed by the algorithm used to find a partial-order plan.
Each plan has the following four components:
actions: a set of actions that make up the steps of the plan.
actions is always a subset of pddl.actions the set of possible actions for the given planning problem.
The start and finish actions are dummy actions defined to bring uniformity to the problem. The start action has no preconditions and its effects constitute the initial state of the planning problem.
The finish action has no effects and its preconditions constitute the goal state of the planning problem.
The empty plan consists of just these two dummy actions.
constraints: a set of temporal constraints that define the order of performing the actions relative to each other.
constraints does not define a linear ordering, rather it usually represents a directed graph which is also acyclic if the plan is consistent.
Each ordering is of the form A < B, which reads as "A before B" and means that action A must be executed sometime before action B, but not necessarily immediately before.
constraints stores these as a set of tuples (Action(A), Action(B)) which is interpreted as given above.
A constraint cannot be added to constraints if it breaks the acyclicity of the existing graph.
causal_links: a set of causal-links.
A causal link between two actions A and B in the plan is written as A --p--> B and is read as "A achieves p for B".
This imples that p is an effect of A and a precondition of B.
It also asserts that p must remain true from the time of action A to the time of action B.
Any violation of this rule is called a threat and must be resolved immediately by adding suitable ordering constraints.
causal_links stores this information as tuples (Action(A), precondition(p), Action(B)) which is interpreted as given above.
Causal-links can also be called protection-intervals, because the link A --p--> B protects p from being negated over the interval from A to B.
agenda: a set of open-preconditions.
A precondition is open if it is not achieved by some action in the plan.
Planners will work to reduce the set of open preconditions to the empty set, without introducing a contradiction.
agenda stores this information as tuples (precondition(p), Action(A)) where p is a precondition of the action A.
A consistent plan is a plan in which there are no cycles in the ordering constraints and no conflicts with the causal-links.
A consistent plan with no open preconditions is a solution.
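As a schematic illustration (plain strings stand in for the actual Action objects, and the names are hypothetical, borrowed from the socks-and-shoes domain), the four components might look like this part-way through planning:
actions = {'Start', 'RightSock', 'RightShoe', 'Finish'}
constraints = {('Start', 'RightSock'),
               ('RightSock', 'RightShoe'),
               ('RightShoe', 'Finish')}                      # each tuple reads "A before B"
causal_links = {('RightSock', 'RightSockOn', 'RightShoe')}   # reads "RightSock achieves RightSockOn for RightShoe"
agenda = {('LeftShoeOn', 'Finish')}                          # an open precondition of Finish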
<br>
<br>
Let's briefly glance over the helper functions before going into the actual algorithm.
<br>
expand_actions: generates all possible actions with variable bindings for use as a heuristic of selection of an open precondition.
<br>
find_open_precondition: finds a precondition from the agenda with the least number of actions that fulfil that precondition.
This heuristic helps form mandatory ordering constraints and causal-links to further simplify the problem and reduce the probability of encountering a threat.
<br>
find_action_for_precondition: finds an action that fulfils the given precondition along with the absolutely necessary variable bindings in accordance with the principle of least commitment.
In case of multiple possible actions, the action with the least number of effects is chosen to minimize the chances of encountering a threat.
<br>
cyclic: checks if a directed graph is cyclic.
<br>
add_const: adds constraint to constraints if the newly formed graph is acyclic and returns constraints otherwise.
<br>
is_a_threat: checks if the given effect negates the given precondition.
<br>
protect: checks if the given action poses a threat to the given causal_link.
If so, the threat is resolved by either promotion or demotion, whichever generates acyclic temporal constraints.
If neither promotion or demotion work, the chosen action is not the correct fit or the planning problem cannot be solved altogether.
<br>
convert: converts a graph from a list of edges to an Action : set mapping, for use in topological sorting.
<br>
toposort: a generator function that generates a topological ordering of a given graph as a list of sets.
Each set contains an action or several actions.
If a set has more than one action in it, it means that permutations between those actions also produce a valid plan.
<br>
display_plan: displays the causal_links, constraints and the partial order plan generated from toposort.
<br>
The execute method executes the algorithm, which is summarized below:
<br>
1. An open precondition is selected (a sub-goal that we want to achieve).
2. An action that fulfils the open precondition is chosen.
3. Temporal constraints are updated.
4. Existing causal links are protected. Protection is a method that checks if the causal links conflict
and if they do, temporal constraints are added to fix the threats.
5. The set of open preconditions is updated.
6. Temporal constraints of the selected action and the next action are established.
7. A new causal link is added between the selected action and the owner of the open precondition.
8. The set of new causal links is checked for threats and if found, the threat is removed by either promotion or demotion.
If promotion or demotion is unable to solve the problem, the planning problem cannot be solved with the current sequence of actions
or it may not be solvable at all.
9. These steps are repeated until the set of open preconditions is empty.
A partial-order plan can be used to generate different valid total-order plans.
This step is called linearization of the partial-order plan.
All possible linearizations of a partial-order plan for socks_and_shoes looks like this.
<br>
<br>
Linearization can be carried out in many ways, but the most efficient way is to represent the set of temporal constraints as a directed graph.
We can easily realize that the graph should also be acyclic as cycles in constraints means that the constraints are inconsistent.
This acyclicity is enforced by the add_const method, which adds a new constraint only if the acyclicity of the existing graph is not violated.
The protect method also checks for acyclicity of the newly-added temporal constraints to make a decision between promotion and demotion in case of a threat.
This property of a graph created from the temporal constraints of a valid partial-order plan allows us to use topological sort to order the constraints linearly.
A topological sort may produce several different valid solutions for a given directed acyclic graph.
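As a minimal sketch (not the notebook's own convert/toposort code), linearization can be done with Kahn's algorithm on the "A before B" constraint tuples, emitting at each round the set of actions with no remaining predecessors; actions emitted together can be permuted freely:
def linearize(actions, constraints):
    # yield sets of actions in a valid topological order of the "A before B" tuples
    preds = {a: set() for a in actions}
    for before, after in constraints:
        preds[after].add(before)
    remaining = set(actions)
    while remaining:
        ready = {a for a in remaining if not (preds[a] & remaining)}
        if not ready:
            raise ValueError('the constraints contain a cycle')
        yield ready                     # mutually unordered actions
        remaining -= ready

list(linearize({'Start', 'RemoveSpare', 'RemoveFlat', 'PutOnSpare', 'Finish'},
               {('Start', 'RemoveSpare'), ('Start', 'RemoveFlat'),
                ('RemoveSpare', 'PutOnSpare'), ('RemoveFlat', 'PutOnSpare'),
                ('PutOnSpare', 'Finish')}))
# e.g. [{'Start'}, {'RemoveFlat', 'RemoveSpare'}, {'PutOnSpare'}, {'Finish'}]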
Now that we know how PartialOrderPlanner works, let's solve a few problems using it.
End of explanation
sbw = simple_blocks_world()
pop = PartialOrderPlanner(sbw)
pop.execute()
Explanation: We observe that in the given partial order plan, Remove(Flat, Axle) and Remove(Spare, Trunk) are in the same set.
This means that the order of performing these actions does not affect the final outcome.
That aside, we also see that the PutOn(Spare, Axle) action has to be performed after both the Remove actions are complete, which seems logically consistent.
End of explanation
ss = socks_and_shoes()
pop = PartialOrderPlanner(ss)
pop.execute()
Explanation: We see that this plan does not have flexibility in selecting actions, ie, actions should be performed in this order and this order only, to successfully reach the goal state.
End of explanation
ss = socks_and_shoes()
%%timeit
GraphPlan(ss).execute()
%%timeit
Linearize(ss).execute()
%%timeit
PartialOrderPlanner(ss).execute(display=False)
Explanation: This plan again doesn't have constraints in selecting socks or shoes.
As long as both socks are worn before both shoes, we are fine.
Notice however, there is one valid solution,
<br>
LeftSock -> LeftShoe -> RightSock -> RightShoe
<br>
that the algorithm could not find as it cannot be represented as a general partially-ordered plan but is a specific total-order solution.
Runtime differences
Let's briefly take a look at the running time of all the three algorithms on the socks_and_shoes problem.
End of explanation |
6,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Widget
Step1: Then, create an instance of CAD
Step2: Display the widget | Python Code:
from ipcad.widgets import CAD
Explanation: Widget: CAD
<i class="fa fa-info-circle fa-2x text-primary"></i> Execute each of these cells in order, such as with <label class="label label-default">Shift+Enter</label>
First, load CAD from your module:
End of explanation
cadExample = CAD(assembly_url="examples/data/cutter/index.json", height=500)
Explanation: Then, create an instance of CAD:
End of explanation
cadExample
from ipywidgets import interact
@interact(near=(1, 100), far=(100, 400))
def cam(near, far):
cadExample.camera_near, cadExample.camera_far = near, far
Explanation: Display the widget:
End of explanation |
6,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solving problems by Searching
This notebook serves as supporting material for topics covered in Chapter 3 - Solving Problems by Searching and Chapter 4 - Beyond Classical Search from the book Artificial Intelligence: A Modern Approach.
Step1: CONTENTS
Overview
Problem
Node
Simple Problem Solving Agent
Search Algorithms Visualization
Breadth-First Tree Search
Breadth-First Search
Best First Search
Uniform Cost Search
Greedy Best First Search
A* Search
Hill Climbing
Simulated Annealing
Genetic Algorithm
AND-OR Graph Search
Online DFS Agent
LRTA* Agent
OVERVIEW
Here, we learn about a specific kind of problem solving - building goal-based agents that can plan ahead to solve problems. In particular, we examine navigation problem/route finding problem. We must begin by precisely defining problems and their solutions. We will look at several general-purpose search algorithms.
Search algorithms can be classified into two types
Step2: PROBLEM
Let's see how we define a Problem. Run the next cell to see how abstract class Problem is defined in the search module.
Step3: The Problem class has six methods.
__init__(self, initial, goal)
Step4: The Node class has nine methods. The first is the __init__ method.
__init__(self, state, parent, action, path_cost)
Step5: Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values.
Step6: It is pretty straightforward to understand this romania_map. The first node Arad has three neighbours named Zerind, Sibiu, Timisoara. Each of these nodes are 75, 140, 118 units apart from Arad respectively. And the same goes with other nodes.
And romania_map.locations contains the positions of each of the nodes. We will use the straight line distance (which is different from the one provided in romania_map) between two cities in algorithms like A*-search and Recursive Best First Search.
Define a problem
Step7: Romania Map Visualisation
Let's have a visualisation of Romania map [Figure 3.2] from the book and see how different searching algorithms perform / how frontier expands in each search algorithm for a simple problem named romania_problem.
Have a look at romania_locations. It is a dictionary defined in search module. We will use these location values to draw the romania graph using networkx.
Step8: Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph.
Step9: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. This function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at each and every interval step while searching, using variety of algorithms from the book.
We can simply call the function with node_colors dictionary object to display it.
Step10: Voila! You see, the romania map as shown in the Figure[3.2] in the book. Now, see how different searching algorithms perform with our problem statements.
SIMPLE PROBLEM SOLVING AGENT PROGRAM
Let us now define a Simple Problem Solving Agent Program. Run the next cell to see how the abstract class SimpleProblemSolvingAgentProgram is defined in the search module.
Step11: The SimpleProblemSolvingAgentProgram class has six methods
Step12: Now, we will define all the 8 states and create an object of the above class. Then, we will pass it different states and check the output
Step14: SEARCHING ALGORITHMS VISUALIZATION
In this section, we have visualizations of the following searching algorithms
Step15: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button.
Step17: 2. DEPTH-FIRST TREE SEARCH
Now let's discuss another searching algorithm, Depth-First Tree Search.
Step18: 3. BREADTH-FIRST GRAPH SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step21: 4. DEPTH-FIRST GRAPH SEARCH
Although we have a working implementation in search module, we have to make a few changes in the algorithm to make it suitable for visualization.
Step23: 5. BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step24: 6. UNIFORM COST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step26: 7. DEPTH LIMITED SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
Although we have a working implementation, we need to make changes.
Step27: 8. ITERATIVE DEEPENING SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
Step29: 9. GREEDY BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step31: 10. A* SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step33: 11. RECURSIVE BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step34: RECURSIVE BEST-FIRST SEARCH
Recursive best-first search is a simple recursive algorithm that improves upon heuristic search by reducing the memory requirement.
RBFS uses only linear space and it attempts to mimic the operation of standard best-first search.
Its structure is similar to recursive depth-first search, but it doesn't continue indefinitely down the current path; the f_limit variable is used to keep track of the f-value of the best alternative path available from any ancestor of the current node.
RBFS remembers the f-value of the best leaf in the forgotten subtree and can decide whether it is worth re-expanding the tree later.
<br>
However, RBFS still suffers from excessive node regeneration.
<br>
Let's have a look at the implementation.
Step35: This is how recursive_best_first_search can solve the romania_problem
Step36: recursive_best_first_search can be used to solve the 8 puzzle problem too, as discussed later.
Step37: A* HEURISTICS
Different heuristics provide different efficiency in solving A* problems which are generally defined by the number of explored nodes as well as the branching factor. With the classic 8 puzzle we can show the efficiency of different heuristics through the number of explored nodes.
8 Puzzle Problem
The 8 Puzzle Problem consists of a 3x3 tray in which the goal is to get the initial configuration to the goal state by shifting the numbered tiles into the blank space.
example
Step38: Heuristics
Step39: We can solve the puzzle using the astar_search method.
Step40: This case is solvable, let's proceed.
<br>
The default heuristic function returns the number of misplaced tiles.
Step41: In the following cells, we use different heuristic functions.
<br>
Step42: And here's how recursive_best_first_search can be used to solve this problem too.
Step43: Even though all the heuristic functions give the same solution, the difference lies in the computation time.
<br>
This might make all the difference in a scenario where high computational efficiency is required.
<br>
Let's define a few puzzle states and time astar_search for every heuristic function.
We will use the %%timeit magic for this.
Step44: The default heuristic function is the same as the linear heuristic function, but we'll still check both.
Step45: We can infer that the manhattan heuristic function works the fastest.
<br>
sqrt_manhattan has an extra sqrt operation which makes it quite a lot slower than the others.
<br>
max_heuristic should have been a bit slower as it calls two functions, but in this case, those values were already calculated which saved some time.
Feel free to play around with these functions.
For comparison, this is how RBFS performs on this problem.
Step46: It is quite a lot slower than astar_search as we can see.
HILL CLIMBING
Hill Climbing is a heuristic search used for optimization problems.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem.
This solution may or may not be the global optimum.
The algorithm is a variant of the generate-and-test algorithm.
<br>
As a whole, the algorithm works as follows
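Schematically, the loop looks like the sketch below (an illustration rather than the actual search.py hill_climbing; it assumes a problem object exposing actions, result and value):
def hill_climbing_sketch(problem):
    current = problem.initial
    while True:
        neighbours = [problem.result(current, action) for action in problem.actions(current)]
        if not neighbours:
            return current
        best = max(neighbours, key=problem.value)
        if problem.value(best) <= problem.value(current):
            return current             # no uphill move left: a (local) optimum
        current = best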
Step53: We will find an approximate solution to the traveling salespersons problem using this algorithm.
<br>
We need to define a class for this problem.
<br>
Problem will be used as a base class.
Step54: We will use cities from the Romania map as our cities for this problem.
<br>
A list of all cities and a dictionary storing distances between them will be populated.
Step55: Next, we need to populate the individual lists inside the dictionary with the manhattan distance between the cities.
Step58: The way neighbours are chosen currently isn't suitable for the travelling salespersons problem.
We need a neighboring state that is similar in total path distance to the current state.
<br>
We need to change the function that finds neighbors.
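One common choice, sketched below (not necessarily the exact method used in the TSP_problem class defined next), is a 2-opt style move that reverses a random contiguous segment of the tour, so only two edges of the route change:
import random

def two_opt_neighbour(state):
    # reverse a randomly chosen contiguous slice of the tour
    neighbour = list(state)
    i, j = sorted(random.sample(range(len(neighbour)), 2))
    neighbour[i:j + 1] = reversed(neighbour[i:j + 1])
    return neighbour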
Step59: An instance of the TSP_problem class will be created.
Step60: We can now generate an approximate solution to the problem by calling hill_climbing.
The results will vary a bit each time you run it.
Step61: The solution looks like this.
It is not difficult to see why this might be a good solution.
<br>
SIMULATED ANNEALING
The intuition behind Hill Climbing was developed from the metaphor of climbing up the graph of a function to find its peak.
There is a fundamental problem in the implementation of the algorithm however.
To find the highest hill, we take one step at a time, always uphill, hoping to find the highest point,
but if we are unlucky enough to start from the shoulder of the second-highest hill, there is no way we can find the highest one.
The algorithm will always converge to the local optimum.
Hill Climbing is also bad at dealing with functions that flatline in certain regions.
If all neighboring states have the same value, we cannot find the global optimum using this algorithm.
<br>
<br>
Let's now look at an algorithm that can deal with these situations.
<br>
Simulated Annealing is quite similar to Hill Climbing,
but instead of picking the best move every iteration, it picks a random move.
If this random move brings us closer to the global optimum, it will be accepted,
but if it doesn't, the algorithm may accept or reject the move based on a probability dictated by the temperature.
When the temperature is high, the algorithm is more likely to accept a random move even if it is bad.
At low temperatures, only good moves are accepted, with the occasional exception.
This allows exploration of the state space and prevents the algorithm from getting stuck at the local optimum.
Step62: The temperature is gradually decreased over the course of the iteration.
This is done by a scheduling routine.
The current implementation uses exponential decay of temperature, but we can use a different scheduling routine instead.
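A compact sketch of the two key pieces (illustrative only; the parameter names and defaults are assumptions, not the exact search.py code):
import math
import random

def exp_schedule(k=20, lam=0.005, limit=100):
    # temperature at time t: k * exp(-lam * t), cut off to 0 after `limit` steps
    return lambda t: k * math.exp(-lam * t) if t < limit else 0

def accept_move(delta_e, T):
    # always accept improvements; accept a worse move with probability exp(delta_e / T)
    return delta_e > 0 or random.random() < math.exp(delta_e / T)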
Step63: Next, we'll define a peak-finding problem and try to solve it using Simulated Annealing.
Let's define the grid and the initial state first.
Step64: We want to allow only four directions, namely N, S, E and W.
Let's use the predefined directions4 dictionary.
Step65: Define a problem with these parameters.
Step66: We'll run simulated_annealing a few times and store the solutions in a set.
Step67: Hence, the maximum value is 9.
Let's find the peak of a two-dimensional gaussian distribution.
We'll use the gaussian_kernel function from notebook.py to get the distribution.
Step68: Let's use the heatmap function from notebook.py to plot this.
Step69: Let's define the problem.
This time, we will allow movement in eight directions as defined in directions8.
Step70: We'll solve the problem just like we did last time.
<br>
Let's also time it.
Step71: The peak is at 1.0 which is how gaussian distributions are defined.
<br>
This could also be solved by Hill Climbing as follows.
Step72: As you can see, Hill-Climbing is about 24 times faster than Simulated Annealing.
(Notice that we ran Simulated Annealing for 100 iterations whereas we ran Hill Climbing only once.)
<br>
Simulated Annealing makes up for its tardiness by its ability to be applicable in a larger number of scenarios than Hill Climbing as illustrated by the example below.
<br>
Let's define a 2D surface as a matrix.
Step73: The peak value is 32 at the lower right corner.
<br>
The region at the upper left corner is planar.
Let's instantiate PeakFindingProblem one last time.
Step74: Solution by Hill Climbing
Step75: Solution by Simulated Annealing
Step76: Notice that even though both algorithms started at the same initial state,
Hill Climbing could never escape from the planar region and gave a locally optimum solution of 0,
whereas Simulated Annealing could reach the peak at 32.
<br>
A very similar situation arises when there are two peaks of different heights.
One should carefully consider the possible search space before choosing the algorithm for the task.
GENETIC ALGORITHM
Genetic algorithms (or GA) are inspired by natural evolution and are particularly useful in optimization and search problems with large state spaces.
Given a problem, algorithms in the domain make use of a population of solutions (also called states), where each solution/state represents a feasible solution. At each iteration (often called generation), the population gets updated using methods inspired by biology and evolution, like crossover, mutation and natural selection.
Overview
A genetic algorithm works in the following way
Step77: The algorithm takes the following input
Step78: The method picks at random a point and merges the parents (x and y) around it.
The mutation is done in the method mutate
Step79: We pick a gene in x to mutate and a gene from the gene pool to replace it with.
To help initialize the population, we have the helper function init_population.
Step80: The function takes as input the number of individuals in the population, the gene pool and the length of each individual/state. It creates individuals with random genes and returns the population when done.
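Rough sketches of the three helpers just described (schematic re-implementations; the actual search.py definitions may differ in detail):
import random

def recombine_sketch(x, y):
    c = random.randrange(len(x))                      # random crossover point
    return x[:c] + y[c:]

def mutate_sketch(x, gene_pool, pmut):
    if random.random() >= pmut:                       # mutate only with probability pmut
        return x
    c = random.randrange(len(x))
    return x[:c] + [random.choice(gene_pool)] + x[c + 1:]

def init_population_sketch(pop_size, gene_pool, state_length):
    return [[random.choice(gene_pool) for _ in range(state_length)]
            for _ in range(pop_size)]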
Explanation
Before we solve problems using the genetic algorithm, we will explain how to intuitively understand the algorithm using a trivial example.
Generating Phrases
In this problem, we use a genetic algorithm to generate a particular target phrase from a population of random strings. This is a classic example that helps build intuition about how to use this algorithm in other problems as well. Before we break the problem down, let us try to brute force the solution. Let us say that we want to generate the phrase "genetic algorithm". The phrase is 17 characters long. We can use any character from the 26 lowercase characters and the space character. To generate a random phrase of length 17, each space can be filled in 27 ways. So the total number of possible phrases is
$$ 27^{17} = 2153693963075557766310747 $$
which is a massive number. If we wanted to generate the phrase "Genetic Algorithm", we would also have to include all the 26 uppercase characters into consideration thereby increasing the sample space from 27 characters to 53 characters and the total number of possible phrases then would be
$$ 53^{17} = 205442259656281392806087233013 $$
If we wanted to include punctuations and numerals into the sample space, we would have further complicated an already impossible problem. Hence, brute forcing is not an option. Now we'll apply the genetic algorithm and see how it significantly reduces the search space. We essentially want to evolve our population of random strings so that they better approximate the target phrase as the number of generations increase. Genetic algorithms work on the principle of Darwinian Natural Selection according to which, there are three key concepts that need to be in place for evolution to happen. They are
Step81: We then need to define our gene pool, i.e the elements which an individual from the population might comprise of. Here, the gene pool contains all uppercase and lowercase letters of the English alphabet and the space character.
Step82: We now need to define the maximum size of each population. Larger populations have more variation but are computationally more expensive to run algorithms on.
Step83: As our population is not very large, we can afford to keep a relatively large mutation rate.
Step84: Great! Now, we need to define the most important metric for the genetic algorithm, i.e the fitness function. This will simply return the number of matching characters between the generated sample and the target phrase.
Step85: Before we run our genetic algorithm, we need to initialize a random population. We will use the init_population function to do this. We need to pass in the maximum population size, the gene pool and the length of each individual, which in this case will be the same as the length of the target phrase.
Step86: We will now define how the individuals in the population should change as the number of generations increases. First, the select function will be run on the population to select two individuals with high fitness values. These will be the parents which will then be recombined using the recombine function to generate the child.
Step87: Next, we need to apply a mutation according to the mutation rate. We call the mutate function on the child with the gene pool and mutation rate as the additional arguments.
Step88: The above lines can be condensed into
child = mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate)
And, we need to do this for every individual in the current population to generate the new population.
Step89: The individual with the highest fitness can then be found using the max function.
Step90: Let's print this out
Step91: We see that this is a list of characters. This can be converted to a string using the join function
Step92: We now need to define the conditions to terminate the algorithm. This can happen in two ways
1. Termination after a predefined number of generations
2. Termination when the fitness of the best individual of the current generation reaches a predefined threshold value.
We define these variables below
Step93: To generate ngen number of generations, we run a for loop ngen number of times. After each generation, we calculate the fitness of the best individual of the generation and compare it to the value of f_thres using the fitness_threshold function. After every generation, we print out the best individual of the generation and the corresponding fitness value. Lets now write a function to do this.
Step94: The function defined above is essentially the same as the one defined in search.py with the added functionality of printing out the data of each generation.
Step95: We have defined all the required functions and variables. Let's now create a new population and test the function we wrote above.
Step96: The genetic algorithm was able to converge!
We implore you to rerun the above cell and play around with target, max_population, f_thres, ngen etc parameters to get a better intuition of how the algorithm works. To summarize, if we can define the problem states in simple array format and if we can create a fitness function to gauge how good or bad our approximate solutions are, there is a high chance that we can get a satisfactory solution using a genetic algorithm.
- There is also a better GUI version of this program genetic_algorithm_example.py in the GUI folder for you to play around with.
Usage
Below we give two example usages for the genetic algorithm, for a graph coloring problem and the 8 queens problem.
Graph Coloring
First we will take on the simpler problem of coloring a small graph with two colors. Before we do anything, let's imagine how a solution might look. First, we have to represent our colors. Say, 'R' for red and 'G' for green. These make up our gene pool. What of the individual solutions though? For that, we will look at our problem. We stated we have a graph. A graph has nodes and edges, and we want to color the nodes. Naturally, we want to store each node's color. If we have four nodes, we can store their colors in a list of genes, one for each node. A possible solution will then look like this
Step97: Edge 'A' connects nodes 0 and 1, edge 'B' connects nodes 0 and 3 etc.
We already said our gene pool is 'R' and 'G', so we can jump right into initializing our population. Since we have only four nodes, state_length should be 4. For the number of individuals, we will try 8. We can increase this number if we need higher accuracy, but be careful! Larger populations need more computing power and take longer. You need to strike that sweet balance between accuracy and cost (the ultimate dilemma of the programmer!).
Step98: We created and printed the population. You can see that the genes in the individuals are random and there are 8 individuals each with 4 genes.
Next we need to write our fitness function. We previously said we want the function to count how many edges are valid. So, given a coloring/individual c, we will do just that
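A sketch of such a fitness function, assuming the four edges are stored in a dict such as edges = {'A': [0, 1], 'B': [0, 3], 'C': [1, 2], 'D': [2, 3]} (only edges 'A' and 'B' are stated explicitly above; 'C' and 'D' are assumed here):
def fitness(c):
    # count the edges whose two endpoint nodes received different colors
    return sum(c[n1] != c[n2] for n1, n2 in edges.values())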
Step99: Great! Now we will run the genetic algorithm and see what solution it gives.
Step100: The algorithm converged to a solution. Let's check its score
Step101: The solution has a score of 4, which means it is optimal, since we have exactly 4 edges in our graph, meaning all are valid!
NOTE
Step102: We have a population of 100 and each individual has 8 genes. The gene pool is the integers from 0 to 7, in string form. Above you can see the first five individuals.
Next we need to write our fitness function. Remember, queens threaten each other if they are at the same row, column or diagonal.
Since positionings are mutual, we must take care not to count them twice. Therefore for each queen, we will only check for conflicts for the queens after her.
A gene's value in an individual q denotes the queen's column, and the position of the gene denotes its row. We can check if the aforementioned values between two genes are the same. We also need to check for diagonals. A queen a is in the diagonal of another queen, b, if the difference of the rows between them is equal to either their difference in columns (for the diagonal on the right of a) or equal to the negative difference of their columns (for the left diagonal of a). Below is given the fitness function.
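A sketch consistent with that description (the notebook's own fitness function may differ in detail):
def fitness(q):
    # count non-attacking pairs; gene index = row, gene value = column
    non_attacking = 0
    for row1 in range(len(q)):
        for row2 in range(row1 + 1, len(q)):
            col1, col2 = int(q[row1]), int(q[row2])
            row_diff, col_diff = row1 - row2, col1 - col2
            if col1 != col2 and row_diff != col_diff and row_diff != -col_diff:
                non_attacking += 1
    return non_attacking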
Step103: Note that the best score achievable is 28. That is because for each queen we only check for the queens after her. For the first queen we check 7 other queens, for the second queen 6 others and so on. In short, the number of checks we make is the sum 7+6+5+...+1. Which is equal to 7*(7+1)/2 = 28.
Because it is very hard and will take long to find a perfect solution, we will set the fitness threshold at 25. If we find an individual with a score greater or equal to that, we will halt. Let's see how the genetic algorithm will fare.
Step104: Above you can see the solution and its fitness score, which should be no less than 25.
This is where we conclude Genetic Algorithms.
N-Queens Problem
Here, we will look at the generalized case of the Eight Queens problem.
<br>
We are given a N x N chessboard, with N queens, and we need to place them in such a way that no two queens can attack each other.
<br>
We will solve this problem using search algorithms.
To do this, we already have a NQueensProblem class in search.py.
Step105: In csp.ipynb we have seen that the N-Queens problem can be formulated as a CSP and can be solved by
the min_conflicts algorithm in a way similar to Hill-Climbing.
Here, we want to solve it using heuristic search algorithms and even some classical search algorithms.
The NQueensProblem class derives from the Problem class and is implemented in such a way that the search algorithms we already have, can solve it.
<br>
Let's instantiate the class.
Step106: Let's use depth_first_tree_search first.
<br>
We will also use the %%timeit magic with each algorithm to see how much time they take.
Step107: breadth_first_tree_search
Step108: uniform_cost_search
Step109: depth_first_tree_search is almost 20 times faster than breadth_first_tree_search and more than 200 times faster than uniform_cost_search.
We can also solve this problem using astar_search with a suitable heuristic function.
<br>
The best heuristic function for this scenario will be one that returns the number of conflicts in the current state.
Step110: astar_search is faster than both uniform_cost_search and breadth_first_tree_search.
Step111: AND-OR GRAPH SEARCH
An AND-OR graph is a graphical representation of the reduction of goals to conjunctions and disjunctions of subgoals.
<br>
An AND-OR graph can be seen as a generalization of a directed graph.
It contains a number of vertices and generalized edges that connect the vertices.
<br>
Each connector in an AND-OR graph connects a set of vertices $V$ to a single vertex, $v_0$.
A connector can be an AND connector or an OR connector.
An AND connector connects two edges having a logical AND relationship,
while an OR connector connects two edges having a logical OR relationship.
<br>
A vertex can have more than one AND or OR connector.
This is why AND-OR graphs can be expressed as logical statements.
<br>
<br>
AND-OR graphs also provide a computational model for executing logic programs and you will come across this data-structure in the logic module as well.
AND-OR graphs can be searched in depth-first, breadth-first or best-first ways, exploring the state space linearly or in parallel.
<br>
Our implementation of AND-OR search searches over graphs generated by non-deterministic environments and returns a conditional plan that reaches a goal state in all circumstances.
Let's have a look at the implementation of and_or_graph_search.
Step112: The search is carried out by two functions and_search and or_search that recursively call each other, traversing nodes sequentially.
It is a recursive depth-first algorithm for searching an AND-OR graph.
<br>
A very similar algorithm fol_bc_ask can be found in the logic module, which carries out inference on first-order logic knowledge bases using AND-OR graph-derived data-structures.
<br>
AND-OR trees can also be used to represent the search spaces for two-player games, where a vertex of the tree represents the problem of one of the players winning the game, starting from the initial state of the game.
<br>
Problems involving MIN-MAX trees can be reformulated as AND-OR trees by representing MAX nodes as OR nodes and MIN nodes as AND nodes.
and_or_graph_search can then be used to find the optimal solution.
Standard algorithms like minimax and expectiminimax (for belief states) can also be applied on it with a few modifications.
Here's how and_or_graph_search can be applied to a simple vacuum-world example.
Step113: ONLINE DFS AGENT
So far, we have seen agents that use offline search algorithms,
which is a class of algorithms that compute a complete solution before executing it.
In contrast, an online search agent interleaves computation and action.
Online search is better for most dynamic environments and necessary for unknown environments.
<br>
Online search problems are solved by an agent executing actions, rather than just by pure computation.
For a fully observable environment, an online agent cycles through three steps
Step114: It maintains two dictionaries untried and unbacktracked.
untried contains nodes that have not been visited yet.
unbacktracked contains the sequence of nodes that the agent has visited so it can backtrack to it later, if required.
s and a store the state and the action respectively and result stores the final path or solution of the problem.
<br>
Let's look at another online search algorithm.
LRTA* AGENT
We can infer now that hill-climbing is an online search algorithm, but it is not very useful natively because for complicated search spaces, it might converge to the local minima and indefinitely stay there.
In such a case, we can choose to randomly restart it a few times with different starting conditions and return the result with the lowest total cost.
Sometimes, it is better to use random walks instead of random restarts depending on the problem, but progress can still be very slow.
<br>
A better improvement would be to give hill-climbing a memory element.
We store the current best heuristic estimate and it is updated as the agent gains experience in the state space.
The estimated optimal cost becomes more and more accurate as time passes, and the local minimum is "flattened out" each time until we escape it.
<br>
This learning scheme is a simple improvement upon traditional hill-climbing and is called learning real-time A*, or LRTA*.
Similar to Online DFS-Agent, it builds a map of the environment and chooses the best possible move according to its current heuristic estimates.
<br>
Actions that haven't been tried yet are assumed to lead immediately to the goal with the least possible cost.
This is called optimism under uncertainty and encourages the agent to explore new promising paths.
This algorithm might not terminate if the state space is infinite, unlike A* search.
<br>
Let's have a look at the LRTAStarAgent class.
Step115: H stores the heuristic cost of the paths the agent may travel to.
<br>
s and a store the state and the action respectively.
<br>
problem stores the problem definition and the current map of the environment is stored in problem.result.
<br>
The LRTA_cost method computes the cost of a new path given the current state s, the action a, the next state s1 and the estimated cost to get from s to s1 is extracted from H.
Let's use LRTAStarAgent to solve a simple problem.
We'll define a new LRTA_problem instance based on our one_dim_state_space.
Step116: Let's define an instance of OnlineSearchProblem.
Step117: Now we initialize a LRTAStarAgent object for the problem we just defined.
Step118: We'll pass the percepts [State_3, State_4, State_3, State_4, State_5] one-by-one to our agent to see what action it comes up with at each timestep.
Step119: If you manually try to see what the optimal action should be at each step, the outputs of the lrta_agent will start to make sense if it doesn't already. | Python Code:
from search import *
from notebook import psource, heatmap, gaussian_kernel, show_map, final_path_colors, display_visual, plot_NQueens
# Needed to hide warnings in the matplotlib sections
import warnings
warnings.filterwarnings("ignore")
Explanation: Solving problems by Searching
This notebook serves as supporting material for topics covered in Chapter 3 - Solving Problems by Searching and Chapter 4 - Beyond Classical Search from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from search.py module. Let's start by importing everything from search module.
End of explanation
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
from matplotlib import lines
from ipywidgets import interact
import ipywidgets as widgets
from IPython.display import display
import time
Explanation: CONTENTS
Overview
Problem
Node
Simple Problem Solving Agent
Search Algorithms Visualization
Breadth-First Tree Search
Breadth-First Search
Best First Search
Uniform Cost Search
Greedy Best First Search
A* Search
Hill Climbing
Simulated Annealing
Genetic Algorithm
AND-OR Graph Search
Online DFS Agent
LRTA* Agent
OVERVIEW
Here, we learn about a specific kind of problem solving - building goal-based agents that can plan ahead to solve problems. In particular, we examine navigation problem/route finding problem. We must begin by precisely defining problems and their solutions. We will look at several general-purpose search algorithms.
Search algorithms can be classified into two types:
Uninformed search algorithms: Search algorithms which explore the search space without having any information about the problem other than its definition.
Examples:
Breadth First Search
Depth First Search
Depth Limited Search
Iterative Deepening Search
Informed search algorithms: These type of algorithms leverage any information (heuristics, path cost) on the problem to search through the search space to find the solution efficiently.
Examples:
Best First Search
Uniform Cost Search
A* Search
Recursive Best First Search
Don't miss the visualisations of these algorithms solving the route-finding problem defined on Romania map at the end of this notebook.
For visualisations, we use networkx and matplotlib to show the map in the notebook and we use ipywidgets to interact with the map to see how the searching algorithm works. These are imported as required in notebook.py.
End of explanation
psource(Problem)
Explanation: PROBLEM
Let's see how we define a Problem. Run the next cell to see how abstract class Problem is defined in the search module.
End of explanation
psource(Node)
Explanation: The Problem class has six methods.
__init__(self, initial, goal) : This is what is called a constructor. It is the first method called when you create an instance of the class as Problem(initial, goal). The variable initial specifies the initial state $s_0$ of the search problem. It represents the beginning state. From here, our agent begins its task of exploration to find the goal state(s) which is given in the goal parameter.
actions(self, state) : This method returns all the possible actions agent can execute in the given state state.
result(self, state, action) : This returns the resulting state if action action is taken in the state state. This Problem class only deals with deterministic outcomes. So we know for sure what every action in a state would result to.
goal_test(self, state) : Return a boolean for a given state - True if it is a goal state, else False.
path_cost(self, c, state1, action, state2) : Return the cost of the path that arrives at state2 as a result of taking action from state1, assuming total cost of c to get up to state1.
value(self, state) : This acts as a bit of extra information in problems where we try to optimise a value when we cannot do a goal test.
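As a quick illustration (a hypothetical toy problem, not one defined in search.py), a subclass usually only needs to fill in actions and result; the inherited goal_test and path_cost already do the right thing for a single goal state and unit step costs:
class CountTo(Problem):
    """Toy problem: reach the goal number by repeatedly adding 1 or 2."""

    def actions(self, state):
        return ['+1', '+2']

    def result(self, state, action):
        return state + (1 if action == '+1' else 2)

toy_problem = CountTo(initial=0, goal=5)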
NODE
Let's see how we define a Node. Run the next cell to see how abstract class Node is defined in the search module.
End of explanation
psource(GraphProblem)
Explanation: The Node class has nine methods. The first is the __init__ method.
__init__(self, state, parent, action, path_cost) : This method creates a node. parent represents the node that this is a successor of and action is the action required to get from the parent node to this node. path_cost is the cost to reach current node from parent node.
The next 4 methods are specific Node-related functions.
expand(self, problem) : This method lists all the neighbouring(reachable in one step) nodes of current node.
child_node(self, problem, action) : Given an action, this method returns the immediate neighbour that can be reached with that action.
solution(self) : This returns the sequence of actions required to reach this node from the root node.
path(self) : This returns a list of all the nodes that lies in the path from the root to this node.
The remaining 4 methods override standard Python functionality for representing an object as a string, the less-than ($<$) operator, the equal-to ($=$) operator, and the hash function.
__repr__(self) : This returns the state of this node.
__lt__(self, node) : Given a node, this method returns True if the state of current node is less than the state of the node. Otherwise it returns False.
__eq__(self, other) : This method returns True if the state of current node is equal to the other node. Else it returns False.
__hash__(self) : This returns the hash of the state of current node.
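For instance (a sketch that assumes some problem instance, such as the GraphProblem built further below), the methods above are typically used like this:
root = Node(problem.initial)       # wrap the initial state in a root node
children = root.expand(problem)    # one child Node per applicable action
child = children[0]
print(child.parent is root)        # True
print(child.solution())            # the single action taken to reach this child
print(child.path())                # [root, child]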
We will use the abstract class Problem to define our real problem named GraphProblem. You can see how we define GraphProblem by running the next cell.
End of explanation
romania_map = UndirectedGraph(dict(
Arad=dict(Zerind=75, Sibiu=140, Timisoara=118),
Bucharest=dict(Urziceni=85, Pitesti=101, Giurgiu=90, Fagaras=211),
Craiova=dict(Drobeta=120, Rimnicu=146, Pitesti=138),
Drobeta=dict(Mehadia=75),
Eforie=dict(Hirsova=86),
Fagaras=dict(Sibiu=99),
Hirsova=dict(Urziceni=98),
Iasi=dict(Vaslui=92, Neamt=87),
Lugoj=dict(Timisoara=111, Mehadia=70),
Oradea=dict(Zerind=71, Sibiu=151),
Pitesti=dict(Rimnicu=97),
Rimnicu=dict(Sibiu=80),
Urziceni=dict(Vaslui=142)))
romania_map.locations = dict(
Arad=(91, 492), Bucharest=(400, 327), Craiova=(253, 288),
Drobeta=(165, 299), Eforie=(562, 293), Fagaras=(305, 449),
Giurgiu=(375, 270), Hirsova=(534, 350), Iasi=(473, 506),
Lugoj=(165, 379), Mehadia=(168, 339), Neamt=(406, 537),
Oradea=(131, 571), Pitesti=(320, 368), Rimnicu=(233, 410),
Sibiu=(207, 457), Timisoara=(94, 410), Urziceni=(456, 350),
Vaslui=(509, 444), Zerind=(108, 531))
Explanation: Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values.
End of explanation
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
Explanation: It is pretty straightforward to understand this romania_map. The first node Arad has three neighbours named Zerind, Sibiu and Timisoara, which are 75, 140 and 118 units away from Arad respectively. The same goes for the other nodes.
And romania_map.locations contains the positions of each of the nodes. We will use the straight line distance (which is different from the one provided in romania_map) between two cities in algorithms like A*-search and Recursive Best First Search.
Define a problem:
Now it's time to define our problem. We will define it by passing initial, goal, graph to GraphProblem. So, our problem is to find the goal state starting from the given initial state on the provided graph.
Say we want to start exploring from Arad and try to find Bucharest in our romania_map. So, this is how we do it.
End of explanation
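Before visualising anything, here is a quick illustration of the Node methods described earlier, applied to romania_problem (a small sketch; the printed order and exact values depend on the map definition above):
start = Node(romania_problem.initial)
children = start.expand(romania_problem)          # the neighbours of Arad, one Node per action
print([child.state for child in children])        # expected: Zerind, Sibiu, Timisoara
print([child.path_cost for child in children])    # expected: 75, 140, 118
print(children[0].solution())                     # the action sequence from the root to that child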
romania_locations = romania_map.locations
print(romania_locations)
Explanation: Romania Map Visualisation
Let's have a visualisation of the Romania map [Figure 3.2] from the book and see how the different searching algorithms perform, i.e. how the frontier expands in each search algorithm, for a simple problem named romania_problem.
Have a look at romania_locations. It is a dictionary defined in search module. We will use these location values to draw the romania graph using networkx.
End of explanation
# node colors, node positions and node label positions
node_colors = {node: 'white' for node in romania_map.locations.keys()}
node_positions = romania_map.locations
node_label_pos = { k:[v[0],v[1]-10] for k,v in romania_map.locations.items() }
edge_weights = {(k, k2) : v2 for k, v in romania_map.graph_dict.items() for k2, v2 in v.items()}
romania_graph_data = { 'graph_dict' : romania_map.graph_dict,
'node_colors': node_colors,
'node_positions': node_positions,
'node_label_positions': node_label_pos,
'edge_weights': edge_weights
}
Explanation: Let's get started by initializing an empty graph. We will add nodes, place them at their locations as shown in the book, and add edges to the graph.
End of explanation
show_map(romania_graph_data)
Explanation: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. The function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at every intermediate step of the search, using a variety of algorithms from the book.
We can simply call the function with the node_colors dictionary object to display it.
End of explanation
psource(SimpleProblemSolvingAgentProgram)
Explanation: Voila! You see the Romania map as shown in Figure 3.2 of the book. Now, let's see how the different searching algorithms perform on our problem statements.
SIMPLE PROBLEM SOLVING AGENT PROGRAM
Let us now define a Simple Problem Solving Agent Program. Run the next cell to see how the abstract class SimpleProblemSolvingAgentProgram is defined in the search module.
End of explanation
class vacuumAgent(SimpleProblemSolvingAgentProgram):
def update_state(self, state, percept):
return percept
def formulate_goal(self, state):
goal = [state7, state8]
return goal
def formulate_problem(self, state, goal):
problem = state
return problem
def search(self, problem):
if problem == state1:
seq = ["Suck", "Right", "Suck"]
elif problem == state2:
seq = ["Suck", "Left", "Suck"]
elif problem == state3:
seq = ["Right", "Suck"]
elif problem == state4:
seq = ["Suck"]
elif problem == state5:
seq = ["Suck"]
elif problem == state6:
seq = ["Left", "Suck"]
return seq
Explanation: The SimpleProblemSolvingAgentProgram class has six methods:
__init__(self, initial_state=None): This is the constructor of the class and is the first method to be called when the class is instantiated. It takes in a keyword argument, initial_state, which defaults to None. The argument initial_state represents the state from which the agent starts.
__call__(self, percept): This method updates the state of the agent based on its percept using the update_state method. It then formulates a goal with the help of formulate_goal method and a problem using the formulate_problem method and returns a sequence of actions to solve it (using the search method).
update_state(self, percept): This method updates the state of the agent based on its percept.
formulate_goal(self, state): Given a state of the agent, this method formulates the goal for it.
formulate_problem(self, state, goal): It is used in problem formulation given a state and a goal for the agent.
search(self, problem): This method is used to search a sequence of actions to solve a problem.
Let us now define a Simple Problem Solving Agent Program. We will create a simple vacuumAgent class which will inherit from the abstract class SimpleProblemSolvingAgentProgram and overrides its methods. We will create a simple intelligent vacuum agent which can be in any one of the following states. It will move to any other state depending upon the current state as shown in the picture by arrows:
End of explanation
state1 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state2 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state3 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state4 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state5 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state6 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state7 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
state8 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
a = vacuumAgent(state1)
print(a(state6))
print(a(state1))
print(a(state3))
Explanation: Now, we will define all the 8 states and create an object of the above class. Then, we will pass it different states and check the output:
End of explanation
def tree_breadth_search_for_vis(problem):
Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
Don't worry about repeated paths to a state. [Figure 3.7]
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the queue
frontier = deque([Node(problem.initial)])
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of queue
node = frontier.popleft()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def breadth_first_tree_search(problem):
"Search the shallowest nodes in the search tree first."
iterations, all_node_colors, node = tree_breadth_search_for_vis(problem)
return(iterations, all_node_colors, node)
Explanation: SEARCHING ALGORITHMS VISUALIZATION
In this section, we have visualizations of the following searching algorithms:
Breadth First Tree Search
Depth First Tree Search
Breadth First Search
Depth First Graph Search
Best First Graph Search
Uniform Cost Search
Depth Limited Search
Iterative Deepening Search
Greedy Best First Search
A*-Search
Recursive Best First Search
We add the colors to the nodes to have a nice visualisation when displaying. So, these are the different colors we are using in these visuals:
* Un-explored nodes - <font color='black'>white</font>
* Frontier nodes - <font color='orange'>orange</font>
* Currently exploring node - <font color='red'>red</font>
* Already explored nodes - <font color='gray'>gray</font>
1. BREADTH-FIRST TREE SEARCH
We have a working implementation in search module. But as we want to interact with the graph while it is searching, we need to modify the implementation. Here's the modified breadth first tree search.
End of explanation
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
a, b, c = breadth_first_tree_search(romania_problem)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=romania_problem)
Explanation: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button.
End of explanation
def tree_depth_search_for_vis(problem):
Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
Don't worry about repeated paths to a state. [Figure 3.7]
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the stack
frontier = [Node(problem.initial)]
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_tree_search(problem):
"Search the deepest nodes in the search tree first."
iterations, all_node_colors, node = tree_depth_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=romania_problem)
Explanation: 2. DEPTH-FIRST TREE SEARCH
Now let's discuss another searching algorithm, Depth-First Tree Search.
End of explanation
def breadth_first_search_graph(problem):
"[Figure 3.11]"
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = deque([node])
# modify the color of frontier nodes to orange
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.popleft()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
if problem.goal_test(child.state):
node_colors[child.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, child)
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=romania_problem)
Explanation: 3. BREADTH-FIRST GRAPH SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def graph_search_for_vis(problem):
Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
If two paths reach a state, only use the first one. [Figure 3.7]
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [(Node(problem.initial))]
explored = set()
# modify the color of frontier nodes to orange
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_graph_search(problem):
Search the deepest nodes in the search tree first.
iterations, all_node_colors, node = graph_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=romania_problem)
Explanation: 4. DEPTH-FIRST GRAPH SEARCH
Although we have a working implementation in the search module, we have to make a few changes in the algorithm to make it suitable for visualization.
End of explanation
def best_first_graph_search_for_vis(problem, f):
Search the nodes with the lowest f scores first.
You specify the function f(node) that you want to minimize; for example,
if f is a heuristic estimate to the goal, then we have greedy best
first search; if f is node.depth then we have breadth-first search.
There is a subtlety: the line "f = memoize(f, 'f')" means that the f
values will be cached on the nodes as they are computed. So after doing
a best first search you can examine the f values of the path returned.
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
f = memoize(f, 'f')
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = PriorityQueue('min', f)
frontier.append(node)
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.pop()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
elif child in frontier:
incumbent = frontier[child]
if f(child) < f(incumbent):
del frontier[incumbent]
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
Explanation: 5. BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
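Because best_first_graph_search_for_vis is parameterised by f, we can already recover other strategies from it. For instance, as its docstring notes, taking f to be the node depth behaves like breadth-first search (a small sketch reusing the problem defined earlier):
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
iterations, all_node_colors, node = best_first_graph_search_for_vis(
    romania_problem, lambda node: node.depth)
print(node.solution())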
def uniform_cost_search_graph(problem):
"[Figure 3.14]"
#Uniform Cost Search uses Best First Search algorithm with f(n) = g(n)
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda node: node.path_cost)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=romania_problem)
Explanation: 6. UNIFORM COST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def depth_limited_search_graph(problem, limit = -1):
'''
Perform depth first search of graph g.
if limit >= 0, that is the maximum depth of the search.
'''
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [Node(problem.initial)]
explored = set()
cutoff_occurred = False
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of queue
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
elif limit >= 0:
cutoff_occurred = True
limit += 1
all_node_colors.pop()
iterations -= 1
node_colors[node.state] = "gray"
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
limit -= 1
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return 'cutoff' if cutoff_occurred else None
def depth_limited_search_for_vis(problem):
Search the deepest nodes in the search tree first.
iterations, all_node_colors, node = depth_limited_search_graph(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=romania_problem)
Explanation: 7. DEPTH LIMITED SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
Although we have a working implementation, we need to make a few changes for visualisation.
End of explanation
def iterative_deepening_search_for_vis(problem):
for depth in range(sys.maxsize):
iterations, all_node_colors, node=depth_limited_search_for_vis(problem)
if iterations:
return (iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=romania_problem)
Explanation: 8. ITERATIVE DEEPENING SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
End of explanation
def greedy_best_first_search(problem, h=None):
Greedy Best-first graph search is an informative searching algorithm with f(n) = h(n).
You need to specify the h function when you call best_first_search, or
else in your Problem subclass.
h = memoize(h or problem.h, 'h')
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda n: h(n))
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=greedy_best_first_search,
problem=romania_problem)
Explanation: 9. GREEDY BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def astar_search_graph(problem, h=None):
A* search is best-first graph search with f(n) = g(n)+h(n).
You need to specify the h function when you call astar_search, or
else in your Problem subclass.
h = memoize(h or problem.h, 'h')
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem,
lambda n: n.path_cost + h(n))
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=astar_search_graph,
problem=romania_problem)
Explanation: 10. A* SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def recursive_best_first_search_for_vis(problem, h=None):
[Figure 3.26] Recursive best-first search
# we use these two variables at the time of visualizations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
h = memoize(h or problem.h, 'h')
def RBFS(problem, node, flimit):
nonlocal iterations
def color_city_and_update_map(node, color):
node_colors[node.state] = color
nonlocal iterations
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
color_city_and_update_map(node, 'green')
return (iterations, all_node_colors, node), 0 # the second value is immaterial
successors = node.expand(problem)
if len(successors) == 0:
color_city_and_update_map(node, 'gray')
return (iterations, all_node_colors, None), infinity
for s in successors:
color_city_and_update_map(s, 'orange')
s.f = max(s.path_cost + h(s), node.f)
while True:
# Order by lowest f value
successors.sort(key=lambda x: x.f)
best = successors[0]
if best.f > flimit:
color_city_and_update_map(node, 'gray')
return (iterations, all_node_colors, None), best.f
if len(successors) > 1:
alternative = successors[1].f
else:
alternative = infinity
node_colors[node.state] = 'gray'
node_colors[best.state] = 'red'
iterations += 1
all_node_colors.append(dict(node_colors))
result, best.f = RBFS(problem, best, min(flimit, alternative))
if result[2] is not None:
color_city_and_update_map(node, 'green')
return result, best.f
else:
color_city_and_update_map(node, 'red')
node = Node(problem.initial)
node.f = h(node)
node_colors[node.state] = 'red'
iterations += 1
all_node_colors.append(dict(node_colors))
result, bestf = RBFS(problem, node, infinity)
return result
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=recursive_best_first_search_for_vis,
problem=romania_problem)
all_node_colors = []
# display_visual(romania_graph_data, user_input=True, algorithm=breadth_first_tree_search)
algorithms = { "Breadth First Tree Search": tree_breadth_search_for_vis,
"Depth First Tree Search": tree_depth_search_for_vis,
"Breadth First Search": breadth_first_search_graph,
"Depth First Graph Search": graph_search_for_vis,
"Best First Graph Search": best_first_graph_search_for_vis,
"Uniform Cost Search": uniform_cost_search_graph,
"Depth Limited Search": depth_limited_search_for_vis,
"Iterative Deepening Search": iterative_deepening_search_for_vis,
"Greedy Best First Search": greedy_best_first_search,
"A-star Search": astar_search_graph,
"Recursive Best First Search": recursive_best_first_search_for_vis}
display_visual(romania_graph_data, algorithm=algorithms, user_input=True)
Explanation: 11. RECURSIVE BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
psource(recursive_best_first_search)
Explanation: RECURSIVE BEST-FIRST SEARCH
Recursive best-first search is a simple recursive algorithm that improves upon heuristic search by reducing the memory requirement.
RBFS uses only linear space and it attempts to mimic the operation of standard best-first search.
Its structure is similar to recursive depth-first search, but it doesn't continue indefinitely down the current path; the f_limit variable is used to keep track of the f-value of the best alternative path available from any ancestor of the current node.
RBFS remembers the f-value of the best leaf in the forgotten subtree and can decide whether it is worth re-expanding the tree later.
<br>
However, RBFS still suffers from excessive node regeneration.
<br>
Let's have a look at the implementation.
End of explanation
recursive_best_first_search(romania_problem).solution()
Explanation: This is how recursive_best_first_search can solve the romania_problem
End of explanation
puzzle = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
assert puzzle.check_solvability((2, 4, 3, 1, 5, 6, 7, 8, 0))
recursive_best_first_search(puzzle).solution()
Explanation: recursive_best_first_search can be used to solve the 8 puzzle problem too, as discussed later.
End of explanation
goal = [1, 2, 3, 4, 5, 6, 7, 8, 0]
Explanation: A* HEURISTICS
Different heuristics give A* different efficiency, generally measured by the number of explored nodes as well as the effective branching factor. With the classic 8 puzzle we can show the efficiency of different heuristics through the number of explored nodes.
8 Puzzle Problem
The 8 Puzzle Problem consists of a 3x3 tray in which the goal is to get the initial configuration to the goal state by shifting the numbered tiles into the blank space.
example:-
Initial State Goal State
| 7 | 2 | 4 | | 1 | 2 | 3 |
| 5 | 0 | 6 | | 4 | 5 | 6 |
| 8 | 3 | 1 | | 7 | 8 | 0 |
We have 8 numbered tiles plus one blank, giving a total of 9! initial configurations, but not all of these are solvable. The solvability of a configuration can be checked by counting inversions: if the total number of inversions is even then the initial configuration is solvable, else it is not, which means that only 9!/2 initial states lead to a solution.
<br>
Let's define our goal state.
End of explanation
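As a side note, the inversion-counting test mentioned above is easy to sketch by hand (this helper is our own illustration; the EightPuzzle class provides its own check_solvability method, which is what we use later):
def count_inversions(state):
    # count pairs (i, j) with i < j where the tile at i is greater than the tile at j,
    # ignoring the blank (0); an even count means the configuration is solvable
    tiles = [t for t in state if t != 0]
    return sum(1 for i in range(len(tiles))
                 for j in range(i + 1, len(tiles)) if tiles[i] > tiles[j])
print(count_inversions((2, 4, 3, 1, 5, 6, 7, 8, 0)))   # 4 inversions -> even -> solvable
print(count_inversions((1, 2, 3, 4, 5, 6, 8, 7, 0)))   # 1 inversion  -> odd  -> not solvable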
# Heuristics for 8 Puzzle Problem
def linear(node):
return sum([1 if node.state[i] != goal[i] else 0 for i in range(8)])
def manhattan(node):
state = node.state
index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}
index_state = {}
index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]
x, y = 0, 0
for i in range(len(state)):
index_state[state[i]] = index[i]
mhd = 0
for i in range(8):
for j in range(2):
mhd = abs(index_goal[i][j] - index_state[i][j]) + mhd
return mhd
def sqrt_manhattan(node):
state = node.state
index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}
index_state = {}
index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]
x, y = 0, 0
for i in range(len(state)):
index_state[state[i]] = index[i]
mhd = 0
for i in range(8):
for j in range(2):
mhd = (index_goal[i][j] - index_state[i][j])**2 + mhd
return math.sqrt(mhd)
def max_heuristic(node):
score1 = manhattan(node)
score2 = linear(node)
return max(score1, score2)
Explanation: Heuristics :-
1) Manhattan Distance:- For the 8 puzzle problem the Manhattan distance is defined as the distance of a tile from its goal position (for the tile numbered '1' in the example initial configuration above, the Manhattan distance is 4: 2 for the leftward and 2 for the upward displacement).
2) No. of Misplaced Tiles:- The heuristic calculates the number of misplaced tiles between the current state and goal state.
3) Sqrt of Manhattan Distance:- It calculates the square root of Manhattan distance.
4) Max Heuristic:- It assigns the score as the maximum of "Manhattan Distance" and "No. of Misplaced Tiles".
End of explanation
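Before solving anything, we can do a rough sanity check of these heuristics on the starting configuration used below (a small sketch; it assumes the heuristics defined above and Node from the search module):
start = Node((2, 4, 3, 1, 5, 6, 7, 8, 0))
for h in (linear, manhattan, sqrt_manhattan, max_heuristic):
    print(h.__name__, h(start))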
# Solving the puzzle
puzzle = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
puzzle.check_solvability((2, 4, 3, 1, 5, 6, 7, 8, 0)) # checks whether the initialized configuration is solvable or not
Explanation: We can solve the puzzle using the astar_search method.
End of explanation
astar_search(puzzle).solution()
Explanation: This case is solvable, let's proceed.
<br>
The default heuristic function returns the number of misplaced tiles.
End of explanation
astar_search(puzzle, linear).solution()
astar_search(puzzle, manhattan).solution()
astar_search(puzzle, sqrt_manhattan).solution()
astar_search(puzzle, max_heuristic).solution()
Explanation: In the following cells, we use different heuristic functions.
<br>
End of explanation
recursive_best_first_search(puzzle, manhattan).solution()
Explanation: And here's how recursive_best_first_search can be used to solve this problem too.
End of explanation
puzzle_1 = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
puzzle_2 = EightPuzzle((1, 2, 3, 4, 5, 6, 0, 7, 8))
puzzle_3 = EightPuzzle((1, 2, 3, 4, 5, 7, 8, 6, 0))
Explanation: Even though all the heuristic functions give the same solution, the difference lies in the computation time.
<br>
This might make all the difference in a scenario where high computational efficiency is required.
<br>
Let's define a few puzzle states and time astar_search for every heuristic function.
We will use the %%timeit magic for this.
End of explanation
%%timeit
astar_search(puzzle_1)
astar_search(puzzle_2)
astar_search(puzzle_3)
%%timeit
astar_search(puzzle_1, linear)
astar_search(puzzle_2, linear)
astar_search(puzzle_3, linear)
%%timeit
astar_search(puzzle_1, manhattan)
astar_search(puzzle_2, manhattan)
astar_search(puzzle_3, manhattan)
%%timeit
astar_search(puzzle_1, sqrt_manhattan)
astar_search(puzzle_2, sqrt_manhattan)
astar_search(puzzle_3, sqrt_manhattan)
%%timeit
astar_search(puzzle_1, max_heuristic)
astar_search(puzzle_2, max_heuristic)
astar_search(puzzle_3, max_heuristic)
Explanation: The default heuristic function is the same as the linear heuristic function, but we'll still check both.
End of explanation
%%timeit
recursive_best_first_search(puzzle_1, linear)
recursive_best_first_search(puzzle_2, linear)
recursive_best_first_search(puzzle_3, linear)
Explanation: We can infer that the manhattan heuristic function works the fastest.
<br>
sqrt_manhattan has an extra sqrt operation which makes it quite a lot slower than the others.
<br>
max_heuristic should have been a bit slower as it calls two functions, but in this case, those values were already calculated which saved some time.
Feel free to play around with these functions.
For comparison, this is how RBFS performs on this problem.
End of explanation
psource(hill_climbing)
Explanation: It is quite a lot slower than astar_search as we can see.
HILL CLIMBING
Hill Climbing is a heuristic search used for optimization problems.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem.
This solution may or may not be the global optimum.
The algorithm is a variant of the generate-and-test algorithm.
<br>
As a whole, the algorithm works as follows:
- Evaluate the initial state.
- If it is equal to the goal state, return.
- Find a neighboring state (one which is heuristically similar to the current state)
- Evaluate this state. If it is closer to the goal state than before, replace the initial state with this state and repeat these steps.
<br>
End of explanation
class TSP_problem(Problem):
subclass of Problem to define various functions
def two_opt(self, state):
Neighbour generating function for Traveling Salesman Problem
neighbour_state = state[:]
left = random.randint(0, len(neighbour_state) - 1)
right = random.randint(0, len(neighbour_state) - 1)
if left > right:
left, right = right, left
neighbour_state[left: right + 1] = reversed(neighbour_state[left: right + 1])
return neighbour_state
def actions(self, state):
action that can be executed in the given state
return [self.two_opt]
def result(self, state, action):
result after applying the given action on the given state
return action(state)
def path_cost(self, c, state1, action, state2):
total distance for the Traveling Salesman to be covered if in state2
cost = 0
for i in range(len(state2) - 1):
cost += distances[state2[i]][state2[i + 1]]
cost += distances[state2[0]][state2[-1]]
return cost
def value(self, state):
value of path cost given negative for the given state
return -1 * self.path_cost(None, None, None, state)
Explanation: We will find an approximate solution to the traveling salespersons problem using this algorithm.
<br>
We need to define a class for this problem.
<br>
Problem will be used as a base class.
End of explanation
distances = {}
all_cities = []
for city in romania_map.locations.keys():
distances[city] = {}
all_cities.append(city)
all_cities.sort()
print(all_cities)
Explanation: We will use cities from the Romania map as our cities for this problem.
<br>
A list of all cities and a dictionary storing distances between them will be populated.
End of explanation
import numpy as np
for name_1, coordinates_1 in romania_map.locations.items():
for name_2, coordinates_2 in romania_map.locations.items():
distances[name_1][name_2] = np.linalg.norm(
[coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
distances[name_2][name_1] = np.linalg.norm(
[coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
Explanation: Next, we need to populate the nested dictionaries with the straight-line (Euclidean) distance between each pair of cities, which is what np.linalg.norm computes.
End of explanation
def hill_climbing(problem):
From the initial node, keep choosing the neighbor with highest value,
stopping when no neighbor is better. [Figure 4.2]
def find_neighbors(state, number_of_neighbors=100):
finds neighbors using two_opt method
neighbors = []
for i in range(number_of_neighbors):
new_state = problem.two_opt(state)
neighbors.append(Node(new_state))
state = new_state
return neighbors
# as this is a stochastic algorithm, we will set a cap on the number of iterations
iterations = 10000
current = Node(problem.initial)
while iterations:
neighbors = find_neighbors(current.state)
if not neighbors:
break
neighbor = argmax_random_tie(neighbors,
key=lambda node: problem.value(node.state))
if problem.value(neighbor.state) <= problem.value(current.state):
current.state = neighbor.state
iterations -= 1
return current.state
Explanation: The way neighbours are chosen currently isn't suitable for the travelling salespersons problem.
We need a neighboring state that is similar in total path distance to the current state.
<br>
We need to change the function that finds neighbors.
End of explanation
tsp = TSP_problem(all_cities)
Explanation: An instance of the TSP_problem class will be created.
End of explanation
hill_climbing(tsp)
Explanation: We can now generate an approximate solution to the problem by calling hill_climbing.
The results will vary a bit each time you run it.
End of explanation
psource(simulated_annealing)
Explanation: The solution looks like this.
It is not difficult to see why this might be a good solution.
<br>
SIMULATED ANNEALING
The intuition behind Hill Climbing was developed from the metaphor of climbing up the graph of a function to find its peak.
There is a fundamental problem in the implementation of the algorithm however.
To find the highest hill, we take one step at a time, always uphill, hoping to find the highest point,
but if we are unlucky enough to start from the shoulder of the second-highest hill, there is no way we can find the highest one.
The algorithm will always converge to the local optimum.
Hill Climbing is also bad at dealing with functions that flatline in certain regions.
If all neighboring states have the same value, we cannot find the global optimum using this algorithm.
<br>
<br>
Let's now look at an algorithm that can deal with these situations.
<br>
Simulated Annealing is quite similar to Hill Climbing,
but instead of picking the best move every iteration, it picks a random move.
If this random move brings us closer to the global optimum, it will be accepted,
but if it doesn't, the algorithm may accept or reject the move based on a probability dictated by the temperature.
When the temperature is high, the algorithm is more likely to accept a random move even if it is bad.
At low temperatures, only good moves are accepted, with the occasional exception.
This allows exploration of the state space and prevents the algorithm from getting stuck at the local optimum.
End of explanation
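The acceptance rule at the heart of the algorithm can be sketched as follows (an illustration only, not the module's exact code): an improvement is always accepted, while a worsening move with value change delta_e < 0 is accepted with probability roughly exp(delta_e / T), so higher temperatures let more bad moves through.
import math, random
def accept(delta_e, T):
    # always accept improvements; accept a worsening move with probability exp(delta_e / T)
    return delta_e > 0 or random.uniform(0, 1) < math.exp(delta_e / T)
print(accept(-5, 100))   # high temperature: the bad move is quite likely to be accepted
print(accept(-5, 0.1))   # low temperature: almost certainly rejected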
psource(exp_schedule)
Explanation: The temperature is gradually decreased over the course of the iteration.
This is done by a scheduling routine.
The current implementation uses exponential decay of temperature, but we can use a different scheduling routine instead.
End of explanation
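For example, a linearly decaying schedule could look like this (the name linear_schedule and its parameters are our own illustration, not part of the search module; we assume simulated_annealing accepts a schedule argument like the exp_schedule() shown above):
def linear_schedule(k=20, limit=100):
    # temperature falls linearly from k to 0 over 'limit' steps, then stays at 0
    def schedule(t):
        return max(k * (1 - t / limit), 0)
    return schedule
# usage sketch: simulated_annealing(problem, schedule=linear_schedule())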
initial = (0, 0)
grid = [[3, 7, 2, 8], [5, 2, 9, 1], [5, 3, 3, 1]]
Explanation: Next, we'll define a peak-finding problem and try to solve it using Simulated Annealing.
Let's define the grid and the initial state first.
End of explanation
directions4
Explanation: We want to allow only four directions, namely N, S, E and W.
Let's use the predefined directions4 dictionary.
End of explanation
problem = PeakFindingProblem(initial, grid, directions4)
Explanation: Define a problem with these parameters.
End of explanation
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
Explanation: We'll run simulated_annealing a few times and store the solutions in a set.
End of explanation
grid = gaussian_kernel()
Explanation: Hence, the maximum value is 9.
Let's find the peak of a two-dimensional gaussian distribution.
We'll use the gaussian_kernel function from notebook.py to get the distribution.
End of explanation
heatmap(grid, cmap='jet', interpolation='spline16')
Explanation: Let's use the heatmap function from notebook.py to plot this.
End of explanation
directions8
Explanation: Let's define the problem.
This time, we will allow movement in eight directions as defined in directions8.
End of explanation
problem = PeakFindingProblem(initial, grid, directions8)
%%timeit
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
Explanation: We'll solve the problem just like we did last time.
<br>
Let's also time it.
End of explanation
%%timeit
solution = problem.value(hill_climbing(problem))
solution = problem.value(hill_climbing(problem))
solution
Explanation: The peak is at 1.0, which is the maximum value of the Gaussian kernel we generated.
<br>
This could also be solved by Hill Climbing as follows.
End of explanation
grid = [[0, 0, 0, 1, 4],
[0, 0, 2, 8, 10],
[0, 0, 2, 4, 12],
[0, 2, 4, 8, 16],
[1, 4, 8, 16, 32]]
heatmap(grid, cmap='jet', interpolation='spline16')
Explanation: As you can see, Hill-Climbing is about 24 times faster than Simulated Annealing.
(Notice that we ran Simulated Annealing for 100 iterations whereas we ran Hill Climbing only once.)
<br>
Simulated Annealing makes up for its tardiness by its ability to be applicable in a larger number of scenarios than Hill Climbing as illustrated by the example below.
<br>
Let's define a 2D surface as a matrix.
End of explanation
problem = PeakFindingProblem(initial, grid, directions8)
Explanation: The peak value is 32 at the lower right corner.
<br>
The region at the upper left corner is planar.
Let's instantiate PeakFindingProblem one last time.
End of explanation
solution = problem.value(hill_climbing(problem))
solution
Explanation: Solution by Hill Climbing
End of explanation
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
Explanation: Solution by Simulated Annealing
End of explanation
psource(genetic_algorithm)
Explanation: Notice that even though both algorithms started at the same initial state,
Hill Climbing could never escape from the planar region and gave a locally optimum solution of 0,
whereas Simulated Annealing could reach the peak at 32.
<br>
A very similar situation arises when there are two peaks of different heights.
One should carefully consider the possible search space before choosing the algorithm for the task.
GENETIC ALGORITHM
Genetic algorithms (or GA) are inspired by natural evolution and are particularly useful in optimization and search problems with large state spaces.
Given a problem, algorithms in the domain make use of a population of solutions (also called states), where each solution/state represents a feasible solution. At each iteration (often called generation), the population gets updated using methods inspired by biology and evolution, like crossover, mutation and natural selection.
Overview
A genetic algorithm works in the following way:
1) Initialize random population.
2) Calculate population fitness.
3) Select individuals for mating.
4) Mate selected individuals to produce new population.
* Random chance to mutate individuals.
5) Repeat from step 2) until an individual is fit enough or the maximum number of iterations was reached.
Glossary
Before we continue, we will lay the basic terminology of the algorithm.
Individual/State: A list of elements (called genes) that represent possible solutions.
Population: The list of all the individuals/states.
Gene pool: The alphabet of possible values for an individual's genes.
Generation/Iteration: The number of times the population will be updated.
Fitness: An individual's score, calculated by a function specific to the problem.
Crossover
Two individuals/states can "mate" and produce one child. This offspring bears characteristics from both of its parents. There are many ways we can implement this crossover. Here we will take a look at the most common ones. Most other methods are variations of those below.
Point Crossover: The crossover occurs around one (or more) point. The parents get "split" at the chosen point or points and then get merged. In the example below we see two parents get split and merged at the 3rd digit, producing the following offspring after the crossover.
Uniform Crossover: This type of crossover chooses randomly the genes to get merged. Here the genes 1, 2 and 5 were chosen from the first parent, so the genes 3, 4 were added by the second parent.
Mutation
When an offspring is produced, there is a chance it will mutate, having one (or more, depending on the implementation) of its genes altered.
For example, let's say the new individual to undergo mutation is "abcde". Randomly we pick to change its third gene to 'z'. The individual now becomes "abzde" and is added to the population.
Selection
At each iteration, the fittest individuals are picked randomly to mate and produce offsprings. We measure an individual's fitness with a fitness function. That function depends on the given problem and it is used to score an individual. Usually the higher the better.
The selection process is this:
1) Individuals are scored by the fitness function.
2) Individuals are picked randomly, according to their score (higher score means higher chance to get picked). Usually the formula to calculate the chance to pick an individual is the following (for population P and individual i):
$$ chance(i) = \dfrac{fitness(i)}{\sum_{k \, \in \, P}{fitness(k)}} $$
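A standalone sketch of that selection rule (for illustration only; the select function used by the search module may be implemented differently):
import random
def roulette_select(population, fitness_fn):
    # pick one individual with probability proportional to its fitness
    fitnesses = [fitness_fn(individual) for individual in population]
    total = sum(fitnesses)
    r = random.uniform(0, total)
    running = 0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= r:
            return individual
    return population[-1]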
Implementation
Below we look over the implementation of the algorithm in the search module.
First the implementation of the main core of the algorithm:
End of explanation
psource(recombine)
Explanation: The algorithm takes the following input:
population: The initial population.
fitness_fn: The problem's fitness function.
gene_pool: The gene pool of the states/individuals. By default 0 and 1.
f_thres: The fitness threshold. If an individual reaches that score, iteration stops. By default 'None', which means the algorithm will not halt until all the generations have run.
ngen: The number of iterations/generations.
pmut: The probability of mutation.
The algorithm gives as output the state with the largest score.
For each generation, the algorithm updates the population. First it calculates the fitnesses of the individuals, then it selects the most fit ones and finally crosses them over to produce offsprings. There is a chance that the offspring will be mutated, given by pmut. If at the end of the generation an individual meets the fitness threshold, the algorithm halts and returns that individual.
The function of mating is accomplished by the method recombine:
End of explanation
psource(mutate)
Explanation: The method picks at random a point and merges the parents (x and y) around it.
The mutation is done in the method mutate:
End of explanation
psource(init_population)
Explanation: We pick a gene in x to mutate and a gene from the gene pool to replace it with.
To help initialize the population we have the helper function init_population:
End of explanation
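A toy illustration of recombine and mutate (outputs depend on the random choices made, so the values in the comments are only indicative):
x = ['A', 'B', 'C', 'D', 'E']
y = ['V', 'W', 'X', 'Y', 'Z']
child = recombine(x, y)
print(child)          # e.g. ['A', 'B', 'X', 'Y', 'Z'] if the crossover point happens to be 2
mutant = mutate(child, ['A', 'B', 'C', 'V', 'W', 'X', 'Y', 'Z'], 1.0)   # pmut=1.0 makes a mutation near-certain
print(mutant)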
target = 'Genetic Algorithm'
Explanation: The function takes as input the number of individuals in the population, the gene pool and the length of each individual/state. It creates individuals with random genes and returns the population when done.
Explanation
Before we solve problems using the genetic algorithm, we will explain how to intuitively understand the algorithm using a trivial example.
Generating Phrases
In this problem, we use a genetic algorithm to generate a particular target phrase from a population of random strings. This is a classic example that helps build intuition about how to use this algorithm in other problems as well. Before we break the problem down, let us try to brute force the solution. Let us say that we want to generate the phrase "genetic algorithm". The phrase is 17 characters long. We can use any character from the 26 lowercase characters and the space character. To generate a random phrase of length 17, each position can be filled in 27 ways. So the total number of possible phrases is
$$ 27^{17} = 2153693963075557766310747 $$
which is a massive number. If we wanted to generate the phrase "Genetic Algorithm", we would also have to include all the 26 uppercase characters into consideration thereby increasing the sample space from 27 characters to 53 characters and the total number of possible phrases then would be
$$ 53^{17} = 205442259656281392806087233013 $$
If we wanted to include punctuations and numerals into the sample space, we would have further complicated an already impossible problem. Hence, brute forcing is not an option. Now we'll apply the genetic algorithm and see how it significantly reduces the search space. We essentially want to evolve our population of random strings so that they better approximate the target phrase as the number of generations increase. Genetic algorithms work on the principle of Darwinian Natural Selection according to which, there are three key concepts that need to be in place for evolution to happen. They are:
Heredity: There must be a process in place by which children receive the properties of their parents. <br>
For this particular problem, two strings from the population will be chosen as parents and will be split at a random index and recombined as described in the recombine function to create a child. This child string will then be added to the new generation.
Variation: There must be a variety of traits present in the population or a means with which to introduce variation. <br>If there is no variation in the sample space, we might never reach the global optimum. To ensure that there is enough variation, we can initialize a large population, but this gets computationally expensive as the population gets larger. Hence, we often use another method called mutation. In this method, we randomly change one or more characters of some strings in the population based on a predefined probability value called the mutation rate or mutation probability as described in the mutate function. The mutation rate is usually kept quite low. A mutation rate of zero fails to introduce variation in the population and a high mutation rate (say 50%) is as good as a coin flip and the population fails to benefit from the previous recombinations. An optimum balance has to be maintained between population size and mutation rate so as to reduce the computational cost as well as have sufficient variation in the population.
Selection: There must be some mechanism by which some members of the population have the opportunity to be parents and pass down their genetic information and some do not. This is typically referred to as "survival of the fittest". <br>
There has to be some way of determining which phrases in our population have a better chance of eventually evolving into the target phrase. This is done by introducing a fitness function that calculates how close the generated phrase is to the target phrase. The function will simply return a scalar value corresponding to the number of matching characters between the generated phrase and the target phrase.
Before solving the problem, we first need to define our target phrase.
End of explanation
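A one-line check of the search-space sizes quoted above (target has 17 characters):
print(27 ** len(target), 53 ** len(target))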
# The ASCII values of uppercase characters ranges from 65 to 91
u_case = [chr(x) for x in range(65, 91)]
# The ASCII values of lowercase characters ranges from 97 to 123
l_case = [chr(x) for x in range(97, 123)]
gene_pool = []
gene_pool.extend(u_case) # adds the uppercase list to the gene pool
gene_pool.extend(l_case) # adds the lowercase list to the gene pool
gene_pool.append(' ') # adds the space character to the gene pool
Explanation: We then need to define our gene pool, i.e the elements which an individual from the population might comprise of. Here, the gene pool contains all uppercase and lowercase letters of the English alphabet and the space character.
End of explanation
max_population = 100
Explanation: We now need to define the maximum size of each population. Larger populations have more variation but are computationally more expensive to run algorithms on.
End of explanation
mutation_rate = 0.07 # 7%
Explanation: As our population is not very large, we can afford to keep a relatively large mutation rate.
End of explanation
def fitness_fn(sample):
# initialize fitness to 0
fitness = 0
for i in range(len(sample)):
# increment fitness by 1 for every matching character
if sample[i] == target[i]:
fitness += 1
return fitness
Explanation: Great! Now, we need to define the most important metric for the genetic algorithm, i.e the fitness function. This will simply return the number of matching characters between the generated sample and the target phrase.
End of explanation
population = init_population(max_population, gene_pool, len(target))
Explanation: Before we run our genetic algorithm, we need to initialize a random population. We will use the init_population function to do this. We need to pass in the maximum population size, the gene pool and the length of each individual, which in this case will be the same as the length of the target phrase.
End of explanation
parents = select(2, population, fitness_fn)
# The recombine function takes two parents as arguments, so we need to unpack the previous variable
child = recombine(*parents)
Explanation: We will now define how the individuals in the population should change as the number of generations increases. First, the select function will be run on the population to select two individuals with high fitness values. These will be the parents which will then be recombined using the recombine function to generate the child.
End of explanation
child = mutate(child, gene_pool, mutation_rate)
Explanation: Next, we need to apply a mutation according to the mutation rate. We call the mutate function on the child with the gene pool and mutation rate as the additional arguments.
End of explanation
population = [mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate) for i in range(len(population))]
Explanation: The above lines can be condensed into
child = mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate)
And, we need to do this for every individual in the current population to generate the new population.
End of explanation
current_best = max(population, key=fitness_fn)
Explanation: The individual with the highest fitness can then be found using the max function.
End of explanation
print(current_best)
Explanation: Let's print this out
End of explanation
current_best_string = ''.join(current_best)
print(current_best_string)
Explanation: We see that this is a list of characters. This can be converted to a string using the join function
End of explanation
ngen = 1200 # maximum number of generations
# we set the threshold fitness equal to the length of the target phrase
# i.e. the algorithm only terminates when it has got all the characters correct
# or it has completed 'ngen' number of generations
f_thres = len(target)
Explanation: We now need to define the conditions to terminate the algorithm. This can happen in two ways
1. Termination after a predefined number of generations
2. Termination when the fitness of the best individual of the current generation reaches a predefined threshold value.
We define these variables below
End of explanation
def genetic_algorithm_stepwise(population, fitness_fn, gene_pool=[0, 1], f_thres=None, ngen=1200, pmut=0.1):
for generation in range(ngen):
population = [mutate(recombine(*select(2, population, fitness_fn)), gene_pool, pmut) for i in range(len(population))]
# stores the individual genome with the highest fitness in the current population
current_best = ''.join(max(population, key=fitness_fn))
print(f'Current best: {current_best}\t\tGeneration: {str(generation)}\t\tFitness: {fitness_fn(current_best)}\r', end='')
# compare the fitness of the current best individual to f_thres
fittest_individual = fitness_threshold(fitness_fn, f_thres, population)
# if fitness is greater than or equal to f_thres, we terminate the algorithm
if fittest_individual:
return fittest_individual, generation
return max(population, key=fitness_fn) , generation
Explanation: To generate ngen generations, we run a for loop ngen times. After each generation, we calculate the fitness of the best individual of the generation and compare it to the value of f_thres using the fitness_threshold function. After every generation, we print out the best individual of the generation and the corresponding fitness value. Let's now write a function to do this.
End of explanation
psource(genetic_algorithm)
Explanation: The function defined above is essentially the same as the one defined in search.py with the added functionality of printing out the data of each generation.
End of explanation
population = init_population(max_population, gene_pool, len(target))
solution, generations = genetic_algorithm_stepwise(population, fitness_fn, gene_pool, f_thres, ngen, mutation_rate)
Explanation: We have defined all the required functions and variables. Let's now create a new population and test the function we wrote above.
End of explanation
edges = {
'A': [0, 1],
'B': [0, 3],
'C': [1, 2],
'D': [2, 3]
}
Explanation: The genetic algorithm was able to converge!
We implore you to rerun the above cell and play around with the target, max_population, f_thres, ngen and other parameters to get a better intuition of how the algorithm works. To summarize, if we can define the problem states in a simple array format and if we can create a fitness function to gauge how good or bad our approximate solutions are, there is a high chance that we can get a satisfactory solution using a genetic algorithm.
- There is also a better GUI version of this program genetic_algorithm_example.py in the GUI folder for you to play around with.
Usage
Below we give two example usages for the genetic algorithm, for a graph coloring problem and the 8 queens problem.
Graph Coloring
First we will take on the simpler problem of coloring a small graph with two colors. Before we do anything, let's imagine how a solution might look. First, we have to represent our colors. Say, 'R' for red and 'G' for green. These make up our gene pool. What of the individual solutions though? For that, we will look at our problem. We stated we have a graph. A graph has nodes and edges, and we want to color the nodes. Naturally, we want to store each node's color. If we have four nodes, we can store their colors in a list of genes, one for each node. A possible solution will then look like this: ['R', 'R', 'G', 'R']. In the general case, we will represent each solution with a list of chars ('R' and 'G'), with length the number of nodes.
Next we need to come up with a fitness function that appropriately scores individuals. Again, we will look at the problem definition at hand. We want to color a graph. For a solution to be optimal, no edge should connect two nodes of the same color. How can we use this information to score a solution? A naive (and ineffective) approach would be to count the different colors in the string. So ['R', 'R', 'R', 'R'] has a score of 1 and ['R', 'R', 'G', 'G'] has a score of 2. Why is that fitness function not ideal, though? Why, we forgot the information about the edges! The edges are pivotal to the problem and the above function only deals with node colors. We didn't use all the information at hand and ended up with an ineffective answer. How, then, can we use that information to our advantage?
We said that the optimal solution will have all the edges connecting nodes of different color. So, to score a solution we can count how many edges are valid (aka connecting nodes of different color). That is a great fitness function!
Let's jump into solving this problem using the genetic_algorithm function.
First we need to represent the graph. Since we mostly need information about edges, we will just store the edges. We will denote edges with capital letters and nodes with integers:
End of explanation
population = init_population(8, ['R', 'G'], 4)
print(population)
Explanation: Edge 'A' connects nodes 0 and 1, edge 'B' connects nodes 0 and 3 etc.
We already said our gene pool is 'R' and 'G', so we can jump right into initializing our population. Since we have only four nodes, state_length should be 4. For the number of individuals, we will try 8. We can increase this number if we need higher accuracy, but be careful! Larger populations need more computing power and take longer. You need to strike that sweet balance between accuracy and cost (the ultimate dilemma of the programmer!).
End of explanation
def fitness(c):
return sum(c[n1] != c[n2] for (n1, n2) in edges.values())
Explanation: We created and printed the population. You can see that the genes in the individuals are random and there are 8 individuals each with 4 genes.
Next we need to write our fitness function. We previously said we want the function to count how many edges are valid. So, given a coloring/individual c, we will do just that:
End of explanation
solution = genetic_algorithm(population, fitness, gene_pool=['R', 'G'])
print(solution)
Explanation: Great! Now we will run the genetic algorithm and see what solution it gives.
End of explanation
print(fitness(solution))
Explanation: The algorithm converged to a solution. Let's check its score:
End of explanation
population = init_population(100, range(8), 8)
print(population[:5])
Explanation: The solution has a score of 4. Which means it is optimal, since we have exactly 4 edges in our graph, meaning all are valid!
NOTE: Because the algorithm is non-deterministic, there is a chance a different solution is given. It might even be wrong, if we are very unlucky!
Eight Queens
Let's take a look at a more complicated problem.
In the Eight Queens problem, we are tasked with placing eight queens on an 8x8 chessboard without any queen threatening the others (aka queens should not be in the same row, column or diagonal). In its general form the problem is defined as placing N queens in an NxN chessboard without any conflicts.
First we need to think about the representation of each solution. We can go the naive route of representing the whole chessboard with the queens' placements on it. That is definitely one way to go about it, but for the purpose of this tutorial we will do something different. We have eight queens, so we will have a gene for each of them. The gene pool will be numbers from 0 to 7, for the different columns. The position of the gene in the state will denote the row the particular queen is placed in.
For example, we can have the state "03304577". Here the first gene with a value of 0 means "the queen at row 0 is placed at column 0", for the second gene "the queen at row 1 is placed at column 3" and so forth.
We now need to think about the fitness function. On the graph coloring problem we counted the valid edges. The same thought process can be applied here. Instead of edges though, we have positioning between queens. If two queens are not threatening each other, we say they are at a "non-attacking" positioning. We can, therefore, count how many such positionings are there.
Let's dive right in and initialize our population:
End of explanation
def fitness(q):
non_attacking = 0
for row1 in range(len(q)):
for row2 in range(row1+1, len(q)):
col1 = int(q[row1])
col2 = int(q[row2])
row_diff = row1 - row2
col_diff = col1 - col2
if col1 != col2 and row_diff != col_diff and row_diff != -col_diff:
non_attacking += 1
return non_attacking
Explanation: We have a population of 100 and each individual has 8 genes. The gene pool is the integers from 0 to 7, in string form. Above you can see the first five individuals.
Next we need to write our fitness function. Remember, queens threaten each other if they are at the same row, column or diagonal.
Since positionings are mutual, we must take care not to count them twice. Therefore for each queen, we will only check for conflicts for the queens after her.
A gene's value in an individual q denotes the queen's column, and the position of the gene denotes its row. We can check if the aforementioned values between two genes are the same. We also need to check for diagonals. A queen a is in the diagonal of another queen, b, if the difference of the rows between them is equal to either their difference in columns (for the diagonal on the right of a) or equal to the negative difference of their columns (for the left diagonal of a). Below is given the fitness function.
End of explanation
solution = genetic_algorithm(population, fitness, f_thres=25, gene_pool=range(8))
print(solution)
print(fitness(solution))
Explanation: Note that the best score achievable is 28. That is because for each queen we only check for the queens after her. For the first queen we check 7 other queens, for the second queen 6 others and so on. In short, the number of checks we make is the sum 7+6+5+...+1. Which is equal to 7*(7+1)/2 = 28.
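As a quick sanity check on that arithmetic, we can evaluate the fitness function defined above on a known non-attacking arrangement (the gene string below is one hand-picked valid solution, used here purely for illustration):
```python
# '04752613' places the eight queens so that no two attack each other,
# so all 28 pairs are non-attacking and the fitness is maximal.
print(fitness('04752613'))  # expected output: 28
```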
Because it is very hard and will take long to find a perfect solution, we will set the fitness threshold at 25. If we find an individual with a score greater or equal to that, we will halt. Let's see how the genetic algorithm will fare.
End of explanation
psource(NQueensProblem)
Explanation: Above you can see the solution and its fitness score, which should be no less than 25.
This is where we conclude Genetic Algorithms.
N-Queens Problem
Here, we will look at the generalized case of the Eight Queens problem.
<br>
We are given a N x N chessboard, with N queens, and we need to place them in such a way that no two queens can attack each other.
<br>
We will solve this problem using search algorithms.
To do this, we already have a NQueensProblem class in search.py.
End of explanation
nqp = NQueensProblem(8)
Explanation: In csp.ipynb we have seen that the N-Queens problem can be formulated as a CSP and can be solved by
the min_conflicts algorithm in a way similar to Hill-Climbing.
Here, we want to solve it using heuristic search algorithms and even some classical search algorithms.
The NQueensProblem class derives from the Problem class and is implemented in such a way that the search algorithms we already have, can solve it.
<br>
Let's instantiate the class.
End of explanation
%%timeit
depth_first_tree_search(nqp)
dfts = depth_first_tree_search(nqp).solution()
plot_NQueens(dfts)
Explanation: Let's use depth_first_tree_search first.
<br>
We will also use the %%timeit magic with each algorithm to see how much time they take.
End of explanation
%%timeit
breadth_first_tree_search(nqp)
bfts = breadth_first_tree_search(nqp).solution()
plot_NQueens(bfts)
Explanation: breadth_first_tree_search
End of explanation
%%timeit
uniform_cost_search(nqp)
ucs = uniform_cost_search(nqp).solution()
plot_NQueens(ucs)
Explanation: uniform_cost_search
End of explanation
psource(NQueensProblem.h)
%%timeit
astar_search(nqp)
Explanation: depth_first_tree_search is almost 20 times faster than breadth_first_tree_search and more than 200 times faster than uniform_cost_search.
We can also solve this problem using astar_search with a suitable heuristic function.
<br>
The best heuristic function for this scenario will be one that returns the number of conflicts in the current state.
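For intuition, a conflict-counting heuristic can be sketched as follows; this is only an illustration of the idea, not the exact code of NQueensProblem.h (which you can inspect with psource above):
```python
def num_conflicting_pairs(state):
    # state[col] is the row of the queen in that column, or -1 if unfilled
    conflicts = 0
    for c1, r1 in enumerate(state):
        for c2, r2 in enumerate(state[c1 + 1:], start=c1 + 1):
            if r1 == -1 or r2 == -1:
                continue  # skip columns without a queen yet
            if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
                conflicts += 1
    return conflicts
```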
End of explanation
astar = astar_search(nqp).solution()
plot_NQueens(astar)
Explanation: astar_search is faster than both uniform_cost_search and breadth_first_tree_search.
End of explanation
psource(and_or_graph_search)
Explanation: AND-OR GRAPH SEARCH
An AND-OR graph is a graphical representation of the reduction of goals to conjunctions and disjunctions of subgoals.
<br>
An AND-OR graph can be seen as a generalization of a directed graph.
It contains a number of vertices and generalized edges that connect the vertices.
<br>
Each connector in an AND-OR graph connects a set of vertices $V$ to a single vertex, $v_0$.
A connector can be an AND connector or an OR connector.
An AND connector connects two edges having a logical AND relationship,
while an OR connector connects two edges having a logical OR relationship.
<br>
A vertex can have more than one AND or OR connector.
This is why AND-OR graphs can be expressed as logical statements.
<br>
<br>
AND-OR graphs also provide a computational model for executing logic programs and you will come across this data-structure in the logic module as well.
AND-OR graphs can be searched in depth-first, breadth-first or best-first ways, searching the state space linearly or in parallel.
<br>
Our implementation of AND-OR search searches over graphs generated by non-deterministic environments and returns a conditional plan that reaches a goal state in all circumstances.
Let's have a look at the implementation of and_or_graph_search.
End of explanation
vacuum_world = GraphProblemStochastic('State_1', ['State_7', 'State_8'], vacuum_world)
plan = and_or_graph_search(vacuum_world)
plan
def run_plan(state, problem, plan):
if problem.goal_test(state):
return True
    if len(plan) != 2:
return False
predicate = lambda x: run_plan(x, problem, plan[1][x])
return all(predicate(r) for r in problem.result(state, plan[0]))
run_plan('State_1', vacuum_world, plan)
Explanation: The search is carried out by two functions and_search and or_search that recursively call each other, traversing nodes sequentially.
It is a recursive depth-first algorithm for searching an AND-OR graph.
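A stripped-down sketch of that mutual recursion is shown below; it assumes result(state, action) returns the set of possible outcome states and omits the bookkeeping of the library version:
```python
def or_search(state, problem, path):
    if problem.goal_test(state):
        return []
    if state in path:
        return None                      # loop: give up on this branch
    for action in problem.actions(state):
        plan = and_search(problem.result(state, action), problem, [state] + path)
        if plan is not None:
            return [action, plan]        # conditional plan: do action, then follow plan
    return None

def and_search(states, problem, path):
    plan = {}
    for s in states:                     # every possible outcome must succeed
        subplan = or_search(s, problem, path)
        if subplan is None:
            return None
        plan[s] = subplan
    return plan
```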
<br>
A very similar algorithm fol_bc_ask can be found in the logic module, which carries out inference on first-order logic knowledge bases using AND-OR graph-derived data-structures.
<br>
AND-OR trees can also be used to represent the search spaces for two-player games, where a vertex of the tree represents the problem of one of the players winning the game, starting from the initial state of the game.
<br>
Problems involving MIN-MAX trees can be reformulated as AND-OR trees by representing MAX nodes as OR nodes and MIN nodes as AND nodes.
and_or_graph_search can then be used to find the optimal solution.
Standard algorithms like minimax and expectiminimax (for belief states) can also be applied on it with a few modifications.
Here's how and_or_graph_search can be applied to a simple vacuum-world example.
End of explanation
psource(OnlineDFSAgent)
Explanation: ONLINE DFS AGENT
So far, we have seen agents that use offline search algorithms,
which is a class of algorithms that compute a complete solution before executing it.
In contrast, an online search agent interleaves computation and action.
Online search is better for most dynamic environments and necessary for unknown environments.
<br>
Online search problems are solved by an agent executing actions, rather than just by pure computation.
For a fully observable environment, an online agent cycles through three steps: taking an action, computing the step cost and checking if the goal has been reached.
<br>
For online algorithms in partially-observable environments, there is usually a tradeoff between exploration and exploitation to be taken care of.
<br>
<br>
Whenever an online agent takes an action, it receives a percept or an observation that tells it something about its immediate environment.
Using this percept, the agent can augment its map of the current environment.
For a partially observable environment, this is called the belief state.
<br>
Online algorithms expand nodes in a local order, just like depth-first search, as they do not have the option of observing farther nodes the way A* search can.
Whenever an action from the current state has not been explored, the agent tries that action.
<br>
Difficulty arises when the agent has tried all actions in a particular state.
An offline search algorithm would simply drop the state from the queue in this scenario whereas an online search agent has to physically move back to the previous state.
To do this, the agent needs to maintain a table where it stores the order of nodes it has been to.
This is how our implementation of Online DFS-Agent works.
This agent works only in state spaces where the action is reversible, because of the use of backtracking.
<br>
Let's have a look at the OnlineDFSAgent class.
End of explanation
psource(LRTAStarAgent)
Explanation: It maintains two dictionaries untried and unbacktracked.
untried contains nodes that have not been visited yet.
unbacktracked contains the sequence of nodes that the agent has visited so it can backtrack to it later, if required.
s and a store the state and the action respectively and result stores the final path or solution of the problem.
<br>
Let's look at another online search algorithm.
LRTA* AGENT
We can infer now that hill-climbing is an online search algorithm, but it is not very useful on its own because for complicated search spaces, it might converge to a local minimum and stay there indefinitely.
In such a case, we can choose to randomly restart it a few times with different starting conditions and return the result with the lowest total cost.
Sometimes, it is better to use random walks instead of random restarts depending on the problem, but progress can still be very slow.
<br>
A better improvement would be to give hill-climbing a memory element.
We store the current best heuristic estimate and it is updated as the agent gains experience in the state space.
The estimated optimal cost is made more and more accurate as time passes and each time the local minimum is "flattened out" until we escape it.
<br>
This learning scheme is a simple improvement upon traditional hill-climbing and is called learning real-time A*, or LRTA*.
Similar to Online DFS-Agent, it builds a map of the environment and chooses the best possible move according to its current heuristic estimates.
<br>
Actions that haven't been tried yet are assumed to lead immediately to the goal with the least possible cost.
This is called optimism under uncertainty and encourages the agent to explore new promising paths.
This algorithm might not terminate if the state space is infinite, unlike A* search.
<br>
Let's have a look at the LRTAStarAgent class.
End of explanation
one_dim_state_space
Explanation: H stores the heuristic cost of the paths the agent may travel to.
<br>
s and a store the state and the action respectively.
<br>
problem stores the problem definition and the current map of the environment is stored in problem.result.
<br>
The LRTA_cost method computes the cost of a new path given the current state s, the action a, and the next state s1; the estimated cost to get from s to s1 is taken from H.
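Roughly, that cost computation looks like the sketch below (names are illustrative; problem.c is the step cost and problem.h the heuristic, and H is assumed to be a dictionary of learned estimates):
```python
def lrta_cost(problem, s, a, s1, H):
    if s1 is None:
        return problem.h(s)                      # untried action: optimistic estimate
    return problem.c(s, a, s1) + H.get(s1, problem.h(s1))
```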
Let's use LRTAStarAgent to solve a simple problem.
We'll define a new LRTA_problem instance based on our one_dim_state_space.
End of explanation
LRTA_problem = OnlineSearchProblem('State_3', 'State_5', one_dim_state_space)
Explanation: Let's define an instance of OnlineSearchProblem.
End of explanation
lrta_agent = LRTAStarAgent(LRTA_problem)
Explanation: Now we initialize a LRTAStarAgent object for the problem we just defined.
End of explanation
lrta_agent('State_3')
lrta_agent('State_4')
lrta_agent('State_3')
lrta_agent('State_4')
Explanation: We'll pass the percepts [State_3, State_4, State_3, State_4, State_5] one-by-one to our agent to see what action it comes up with at each timestep.
End of explanation
lrta_agent('State_5')
Explanation: If you manually try to see what the optimal action should be at each step, the outputs of the lrta_agent will start to make sense if it doesn't already.
End of explanation |
6,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras tutorial - the Happy House
Welcome to the first assignment of week 2. In this assignment, you will
Step1: Note
Step3: Details of the "Happy" dataset
Step4: You have now built a function to describe your model. To train and test this model, there are four steps in Keras
Step5: Exercise
Step6: Exercise
Step7: Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
Exercise
Step8: If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.
To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.
If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it
Step9: 5 - Other useful functions in Keras (Optional)
Two other basic features of Keras that you'll find useful are | Python Code:
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
Explanation: Keras tutorial - the Happy House
Welcome to the first assignment of week 2. In this assignment, you will:
1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
2. See how you can in a couple of hours build a deep learning algorithm.
Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.
In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!
End of explanation
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
Explanation: Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...).
1 - The Happy House
For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.
<img src="images/happy-house.jpg" style="width:350px;height:270px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : the Happy House</center></caption>
As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm which that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy.
You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.
<img src="images/house-members.png" style="width:550px;height:250px;">
Run the following code to normalize the dataset and learn about its shapes.
End of explanation
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
    """
    Implementation of the HappyModel.
    Arguments:
    input_shape -- shape of the images of the dataset
    Returns:
    model -- a Model() instance in Keras
    """
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
# exercise (including the later portions of this notebook) once. Then come back and try out other
# network architectures as well.
X_input = Input(input_shape)
X = ZeroPadding2D((3, 3))(X_input)
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
X = MaxPooling2D((2, 2), name = 'max_pool')(X)
X = Flatten()(X)
X = Dense(1, activation = 'sigmoid', name = 'fc')(X)
model = Model(inputs = X_input, outputs = X, name = 'HappyModel')
### END CODE HERE ###
return model
Explanation: Details of the "Happy" dataset:
- Images are of shape (64,64,3)
- Training: 600 pictures
- Test: 150 pictures
It is now time to solve the "Happy" Challenge.
2 - Building a model in Keras
Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.
Here is an example of a model in Keras:
```python
def model(input_shape):
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# Zero-Padding: pads the border of X_input with zeroes
X = ZeroPadding2D((3, 3))(X_input)
# CONV -> BN -> RELU Block applied to X
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
return model
```
Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations for the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable X. The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above).
Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout().
Note: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
End of explanation
### START CODE HERE ### (1 line)
happyModel = HappyModel((64, 64, 3))
### END CODE HERE ###
Explanation: You have now built a function to describe your model. To train and test this model, there are four steps in Keras:
1. Create the model by calling the function above
2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
4. Test the model on test data by calling model.evaluate(x = ..., y = ...)
If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.
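Putting the four steps together, a minimal end-to-end sketch looks like this (it mirrors the solution cells below; the epoch count and batch size are just example values):
```python
happyModel = HappyModel((64, 64, 3))                                  # 1. create
happyModel.compile(optimizer='adam', loss='binary_crossentropy',
                   metrics=['accuracy'])                              # 2. compile
happyModel.fit(x=X_train, y=Y_train, epochs=40, batch_size=16)        # 3. train
loss, acc = happyModel.evaluate(x=X_test, y=Y_test)                   # 4. evaluate
```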
Exercise: Implement step 1, i.e. create the model.
End of explanation
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
### END CODE HERE ###
Explanation: Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.
End of explanation
### START CODE HERE ### (1 line)
happyModel.fit(x = X_train, y = Y_train, epochs = 40, batch_size = 16)
### END CODE HERE ###
Explanation: Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
End of explanation
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x=X_test, y=Y_test, batch_size=16, verbose=1, sample_weight=None)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
Explanation: Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
Exercise: Implement step 4, i.e. test/evaluate the model.
End of explanation
### START CODE HERE ###
img_path = 'images/64.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
Explanation: If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.
To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.
If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it:
Try using blocks of CONV->BATCHNORM->RELU such as:
python
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
Change your optimizer. We find Adam works well.
If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
Run on more epochs, until you see the train accuracy plateauing.
Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.
Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.
3 - Conclusion
Congratulations, you have solved the Happy House challenge!
Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.
<font color='blue'>
What we would like you to remember from this assignment:
- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
- Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.
4 - Test with your own image (Optional)
Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!
The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
End of explanation
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
Explanation: 5 - Other useful functions in Keras (Optional)
Two other basic features of Keras that you'll find useful are:
- model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs
- plot_model(): plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.
Run the following code.
End of explanation |
6,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overfitting demo
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$
Step1: Create random values for x in interval [0,1)
Step2: Compute y
Step3: Add random Gaussian noise to y
Step4: Put data into an SFrame to manipulate later
Step5: Create a function to plot the data, since we'll do it many times
Step6: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree
Step7: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data"
Step8: Define function to plot data and predictions made, since we are going to use it many times.
Step9: Create a function that prints the polynomial coefficients in a pretty way
Step10: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above
Step11: Inspect learned parameters
Step12: Form and plot our predictions along a grid of x values
Step13: Fit a degree-4 polynomial
Step14: Fit a degree-16 polynomial
Step15: Woah!!!! Those coefficients are crazy! On the order of 10^6.
Step16: Above
Step17: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
Step18: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
Step19: Let's look at fits for a sequence of increasing lambda values
Step20: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
Step21: Run LOO cross validation for "num" values of lambda, on a log scale
Step22: Plot results of estimating LOO for each value of lambda
Step23: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
Step24: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\|w\|$.
Define our function to solve the lasso objective for a polynomial regression model of any degree
Step25: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty" | Python Code:
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Overfitting demo
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
End of explanation
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
Explanation: Create random values for x in interval [0,1)
End of explanation
y = x.apply(lambda x: math.sin(4*x))
Explanation: Compute y
End of explanation
random.seed(1)
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
Explanation: Add random Gaussian noise to y
End of explanation
data = graphlab.SFrame({'X1':x,'Y':y})
data
Explanation: Put data into an SFrame to manipulate later
End of explanation
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
Explanation: Create a function to plot the data, since we'll do it many times
End of explanation
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
Explanation: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree:
End of explanation
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
Explanation: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
End of explanation
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
y_pred = model.predict(polynomial_features(x_pred,deg))
# plot predictions
plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
Explanation: Define function to plot data and predictions made, since we are going to use it many times.
End of explanation
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
Explanation: Create a function that prints the polynomial coefficients in a pretty way :)
End of explanation
model = polynomial_regression(data, deg=2)
Explanation: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above:
End of explanation
print_coefficients(model)
Explanation: Inspect learned parameters
End of explanation
plot_poly_predictions(data,model)
Explanation: Form and plot our predictions along a grid of x values:
End of explanation
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Fit a degree-4 polynomial
End of explanation
model = polynomial_regression(data, deg=16)
print_coefficients(model)
Explanation: Fit a degree-16 polynomial
End of explanation
plot_poly_predictions(data,model)
Explanation: Woah!!!! Those coefficients are crazy! On the order of 10^6.
End of explanation
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
Explanation: Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
#
#
Ridge Regression
Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\|w\|$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
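In equation form, with $\lambda$ playing the role of L2_penalty, the ridge estimate solves
$$\hat{w}^{ridge} = \arg\min_w \left[ \text{RSS}(w) + \lambda \|w\|_2^2 \right].$$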
Define our function to solve the ridge objective for a polynomial regression model of any degree:
End of explanation
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
End of explanation
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
End of explanation
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print 'lambda = %.2e' % l2_penalty
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('Ridge, lambda = %.2e' % l2_penalty)
Explanation: Let's look at fits for a sequence of increasing lambda values
End of explanation
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
# Create polynomial features
    data = polynomial_features(data, deg)
    # Create as many folds for cross validation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
Explanation: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
End of explanation
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
Explanation: Run LOO cross validation for "num" values of lambda, on a log scale
End of explanation
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel(r'$\ell_2$ penalty')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
Explanation: Plot results of estimating LOO for each value of lambda
End of explanation
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
End of explanation
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=3000, convergence_threshold=1e-10)
return model
Explanation: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\|w\|$.
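In equation form, with $\lambda$ playing the role of L1_penalty, the lasso estimate solves
$$\hat{w}^{lasso} = \arg\min_w \left[ \text{RSS}(w) + \lambda \|w\|_1 \right].$$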
Define our function to solve the lasso objective for a polynomial regression model of any degree:
End of explanation
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
Explanation: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty"
End of explanation |
6,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day80
Step1: Using the Basemap module, we are able to draw the earth.
Step2: Orthographic projection
Example from
Step3: Mercator projection
Let's say we want to map Vancouver, BC (as drawn in a traditional 2D map)
Step4: The following code is based off of Bill Mill's "Simple maps in Python" | Python Code:
from mpl_toolkits.basemap import Basemap as Basemap
import matplotlib.pyplot as plt
import numpy as np
Explanation: Day80: Drawing maps
Often in urban data, there is information about location. Today I learn to draw maps. Because this is unfamiliar to me, I will use python for drawing maps (because "python" is easier to google than "R").
Basemap module
End of explanation
plt.figure(figsize=(14,8))
Map = Basemap()
Map.drawcoastlines()
Map.drawcountries()
Map.drawmapboundary()
Map.fillcontinents(color = 'coral')
plt.show()
Explanation: Using the Basemap module, we are able to draw the earth.
End of explanation
Map = Basemap(projection='ortho', lat_0=49, lon_0=-123, resolution='l')
# draw coastlines, country boundaries, fill continents.
Map.drawcoastlines(linewidth=0.25)
Map.drawcountries(linewidth=0.25)
Map.fillcontinents()
# draw the edge of the map projection region (the projection limb)
Map.drawmapboundary()
# draw lat/lon grid lines every 30 degrees.
Map.drawmeridians(np.arange(0,360,30))
Map.drawparallels(np.arange(-90,90,30))
plt.show()
Explanation: Orthographic projection
Example from: Cameron Cooke's "The Big Picture"
End of explanation
# Coordinates for Vancouver, BC
Van_lon = -123.1207
Van_lat = 49.2827
Explanation: Mercator projection
Let's say we want to map Vancouver, BC (as drawn in a traditional 2D map)
End of explanation
plt.figure(figsize=(20,10))
Map = Basemap(
projection='merc', resolution='h', area_thresh=0.1,
lat_0=Van_lat, lon_0=Van_lon,
llcrnrlat=Van_lat-1.5, llcrnrlon=Van_lon-1.25,
urcrnrlat=Van_lat+1.5, urcrnrlon=Van_lon+2
)
# draw coastlines, country boundaries, fill continents.
Map.drawcoastlines(linewidth=0.25)
Map.drawcountries(linewidth=0.25)
Map.fillcontinents()
# draw the edge of the map projection region (the projection limb)
Map.drawmapboundary()
# draw rivers
Map.drawrivers(color='b', linewidth=1)
# draw lat/lon grid lines
Map.drawmeridians([int(Van_lon+i) for i in range(-1,3,)], labels=[1,0,0,1])
Map.drawparallels([int(Van_lat+i) for i in range(-1,3,)], labels=[1,0,0,1])
# label Vancouver
x,y = Map(Van_lon, Van_lat)
Map.plot(x, y, 'r*', markersize=24)
plt.annotate('Vancouver', xy=(x,y), xytext=(10,10), textcoords='offset points', fontsize=16)
plt.show()
Explanation: The following code is based off of Bill Mill's "Simple maps in Python"
End of explanation |
6,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License
Step1: The flea beetle problem
Different species of flea beetle can be distinguished by the width and angle of the aedeagus. The data below includes measurements and known species classification for 74 specimens.
Suppose you discover a new specimen under conditions where it is equally likely to be any of the three species. You measure the aedeagus and find width 140 microns and angle 15 (in multiples of 7.5 degrees). What is the probability that it belongs to each species?
This problem is based on this data story on DASL
Datafile Name
Step2: Here's what the distributions of width look like.
Step3: And the distributions of angle.
Step4: I'll group the data by species and compute summary statistics.
Step5: Here are the means.
Step6: And the standard deviations.
Step7: And the correlations.
Step8: Those correlations are small enough that we can get an acceptable approximation by ignoring them, but we might want to come back later and write a complete solution that takes them into account.
The likelihood function
To support the likelihood function, I'll make a dictionary for each attribute that contains a norm object for each species.
Step10: Now we can write the likelihood function concisely.
Step11: The hypotheses are the species names
Step12: We'll start with equal priors
Step13: Now we can update with the data and print the posterior. | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite
import thinkplot
Explanation: Think Bayes
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
import pandas as pd
df = pd.read_csv('../data/flea_beetles.csv', delimiter='\t')
df.head()
Explanation: The flea beetle problem
Different species of flea beetle can be distinguished by the width and angle of the aedeagus. The data below includes measurements and known species classification for 74 specimens.
Suppose you discover a new specimen under conditions where it is equally likely to be any of the three species. You measure the aedeagus and find width 140 microns and angle 15 (in multiples of 7.5 degrees). What is the probability that it belongs to each species?
This problem is based on this data story on DASL
Datafile Name: Flea Beetles
Datafile Subjects: Biology
Story Names: Flea Beetles
Reference: Lubischew, A.A. (1962) On the use of discriminant functions in taxonomy. Biometrics, 18, 455-477. Also found in: Hand, D.J., et al. (1994) A Handbook of Small Data Sets, London: Chapman & Hall, 254-255.
Authorization: Contact Authors
Description: Data were collected on the genus of flea beetle Chaetocnema, which contains three species: concinna (Con), heikertingeri (Hei), and heptapotamica (Hep). Measurements were made on the width and angle of the aedeagus of each beetle. The goal of the original study was to form a classification rule to distinguish the three species.
Number of cases: 74
Variable Names:
Width: The maximal width of aedeagus in the forpart (in microns)
Angle: The front angle of the aedeagus (1 unit = 7.5 degrees)
Species: Species of flea beetle from the genus Chaetocnema
We can read the data from this file:
End of explanation
def plot_cdfs(df, col):
for name, group in df.groupby('Species'):
cdf = Cdf(group[col], label=name)
thinkplot.Cdf(cdf)
thinkplot.decorate(xlabel=col,
ylabel='CDF',
loc='lower right')
plot_cdfs(df, 'Width')
Explanation: Here's what the distributions of width look like.
End of explanation
plot_cdfs(df, 'Angle')
Explanation: And the distributions of angle.
End of explanation
grouped = df.groupby('Species')
Explanation: I'll group the data by species and compute summary statistics.
End of explanation
means = grouped.mean()
Explanation: Here are the means.
End of explanation
stddevs = grouped.std()
Explanation: And the standard deviations.
End of explanation
for name, group in grouped:
corr = group.Width.corr(group.Angle)
print(name, corr)
Explanation: And the correlations.
End of explanation
from scipy.stats import norm
dist_width = {}
dist_angle = {}
for name, group in grouped:
dist_width[name] = norm(group.Width.mean(), group.Width.std())
dist_angle[name] = norm(group.Angle.mean(), group.Angle.std())
Explanation: Those correlations are small enough that we can get an acceptable approximation by ignoring them, but we might want to come back later and write a complete solution that takes them into account.
The likelihood function
To support the likelihood function, I'll make a dictionary for each attribute that contains a norm object for each species.
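If we later do want the complete solution that accounts for the correlation, one option is a joint (width, angle) distribution per species. The sketch below is not used in what follows:
```python
from scipy.stats import multivariate_normal

dist_joint = {}
for name, group in grouped:
    mean = group[['Width', 'Angle']].mean().values
    cov = group[['Width', 'Angle']].cov().values
    dist_joint[name] = multivariate_normal(mean, cov)

# likelihood of an observation under one hypothesis would then be:
# dist_joint[name].pdf([width, angle])
```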
End of explanation
class Beetle(Suite):
def Likelihood(self, data, hypo):
        """
        data: sequence of width, angle
        hypo: name of species
        """
width, angle = data
name = hypo
like = dist_width[name].pdf(width)
like *= dist_angle[name].pdf(angle)
return like
Explanation: Now we can write the likelihood function concisely.
End of explanation
hypos = grouped.groups.keys()
Explanation: The hypotheses are the species names:
End of explanation
suite = Beetle(hypos)
suite.Print()
Explanation: We'll start with equal priors
End of explanation
suite.Update((140, 15))
suite.Print()
Explanation: Now we can update with the data and print the posterior.
End of explanation |
6,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repairing artifacts with SSP
This tutorial covers the basics of signal-space projection (SSP) and shows
how SSP can be used for artifact repair; extended examples illustrate use
of SSP for environmental noise reduction, and for repair of ocular and
heartbeat artifacts.
Step1: <div class="alert alert-info"><h4>Note</h4><p>Before applying SSP (or any artifact repair strategy), be sure to observe
the artifacts in your data to make sure you choose the right repair tool.
Sometimes the right tool is no tool at all — if the artifacts are small
enough you may not even need to repair them to get good analysis results.
See `tut-artifact-overview` for guidance on detecting and
visualizing various types of artifact.</p></div>
What is SSP?
Signal-space projection (SSP)
Step2: The example data <sample-dataset> also includes an "empty room"
recording taken the same day as the recording of the subject. This will
provide a more accurate estimate of environmental noise than the projectors
stored with the system (which are typically generated during annual
maintenance and tuning). Since we have this subject-specific empty-room
recording, we'll create our own projectors from it and discard the
system-provided SSP projectors (saving them first, for later comparison with
the custom ones)
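In code, that bookkeeping is roughly as follows (variable names here are assumptions for illustration, assuming the recording is loaded as raw):
```python
system_projs = list(raw.info['projs'])   # keep the system projectors for later comparison
raw.del_proj()                           # then drop them from the recording
```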
Step3: Notice that the empty room recording itself has the system-provided SSP
projectors in it — we'll remove those from the empty room file too.
Step4: Visualizing the empty-room noise
Let's take a look at the spectrum of the empty room noise. We can view an
individual spectrum for each sensor, or an average (with confidence band)
across sensors
Step5: Creating the empty-room projectors
We create the SSP vectors using ~mne.compute_proj_raw, and control
the number of projectors with parameters n_grad and n_mag. Once
created, the field pattern of the projectors can be easily visualized with
~mne.viz.plot_projs_topomap. We include the parameter
vlim='joint' so that the colormap is computed jointly for all projectors
of a given channel type; this makes it easier to compare their relative
smoothness. Note that for the function to know the types of channels in a
projector, you must also provide the corresponding ~mne.Info object
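Concretely, that step looks something like the sketch below (the number of projectors and the variable names are assumptions for illustration):
```python
import mne

empty_room_projs = mne.compute_proj_raw(empty_room_raw, n_grad=3, n_mag=3)
mne.viz.plot_projs_topomap(empty_room_projs, colorbar=True, vlim='joint',
                           info=empty_room_raw.info)
```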
Step6: Notice that the gradiometer-based projectors seem to reflect problems with
individual sensor units rather than a global noise source (indeed, planar
gradiometers are much less sensitive to distant sources). This is the reason
that the system-provided noise projectors are computed only for
magnetometers. Comparing the system-provided projectors to the
subject-specific ones, we can see they are reasonably similar (though in a
different order) and the left-right component seems to have changed
polarity.
Step7: Visualizing how projectors affect the signal
We could visualize the different effects these have on the data by applying
each set of projectors to different copies of the ~mne.io.Raw object
using ~mne.io.Raw.apply_proj. However, the ~mne.io.Raw.plot
method has a proj parameter that allows us to temporarily apply
projectors while plotting, so we can use this to visualize the difference
without needing to copy the data. Because the projectors are so similar, we
need to zoom in pretty close on the data to see any differences
Step8: The effect is sometimes easier to see on averaged data. Here we use an
interactive feature of mne.Evoked.plot_topomap to turn projectors on
and off to see the effect on the data. Of course, the interactivity won't
work on the tutorial website, but you can download the tutorial and try it
locally
Step9: Plotting the ERP/F using evoked.plot() or evoked.plot_joint() with
and without projectors applied can also be informative, as can plotting with
proj='reconstruct', which can reduce the signal bias introduced by
projections (see tut-artifact-ssp-reconstruction below).
Example
Step10: Repairing ECG artifacts with SSP
MNE-Python provides several functions for detecting and removing heartbeats
from EEG and MEG data. As we saw in tut-artifact-overview,
~mne.preprocessing.create_ecg_epochs can be used to both detect and
extract heartbeat artifacts into an ~mne.Epochs object, which can
be used to visualize how the heartbeat artifacts manifest across the sensors
Step11: Looks like the EEG channels are pretty spread out; let's baseline-correct and
plot again
Step12: To compute SSP projectors for the heartbeat artifact, you can use
~mne.preprocessing.compute_proj_ecg, which takes a
~mne.io.Raw object as input and returns the requested number of
projectors for magnetometers, gradiometers, and EEG channels (default is two
projectors for each channel type).
~mne.preprocessing.compute_proj_ecg also returns an
Step13: The first line of output tells us that
~mne.preprocessing.compute_proj_ecg found three existing projectors
already in the ~mne.io.Raw object, and will include those in the
list of projectors that it returns (appending the new ECG projectors to the
end of the list). If you don't want that, you can change that behavior with
the boolean no_proj parameter. Since we've already run the computation,
we can just as easily separate out the ECG projectors by indexing the list of
projectors
Step14: Just like with the empty-room projectors, we can visualize the scalp
distribution
Step15: Since no dedicated ECG sensor channel was detected in the
~mne.io.Raw object, by default
~mne.preprocessing.compute_proj_ecg used the magnetometers to
estimate the ECG signal (as stated on the third line of output, above). You
can also supply the ch_name parameter to restrict which channel to use
for ECG artifact detection; this is most useful when you had an ECG sensor
but it is not labeled as such in the ~mne.io.Raw file.
The next few lines of the output describe the filter used to isolate ECG
events. The default settings are usually adequate, but the filter can be
customized via the parameters ecg_l_freq, ecg_h_freq, and
filter_length (see the documentation of
~mne.preprocessing.compute_proj_ecg for details).
.. TODO what are the cases where you might need to customize the ECG filter?
infants? Heart murmur?
Once the ECG events have been identified,
~mne.preprocessing.compute_proj_ecg will also filter the data
channels before extracting epochs around each heartbeat, using the parameter
values given in l_freq, h_freq, filter_length, filter_method,
and iir_params. Here again, the default parameter values are usually
adequate.
.. TODO should advice for filtering here be the same as advice for filtering
raw data generally? (e.g., keep high-pass very low to avoid peak shifts?
what if your raw data is already filtered?)
By default, the filtered epochs will be averaged together
before the projection is computed; this can be controlled with the boolean
average parameter. In general this improves the signal-to-noise (where
"signal" here is our artifact!) ratio because the artifact temporal waveform
is fairly similar across epochs and well time locked to the detected events.
To get a sense of how the heartbeat affects the signal at each sensor, you
can plot the data with and without the ECG projectors
Step16: Finally, note that above we passed reject=None to the
~mne.preprocessing.compute_proj_ecg function, meaning that all
detected ECG epochs would be used when computing the projectors (regardless
of signal quality in the data sensors during those epochs). The default
behavior is to reject epochs based on signal amplitude
Step17: Just like we did with the heartbeat artifact, we can compute SSP projectors
for the ocular artifact using ~mne.preprocessing.compute_proj_eog,
which again takes a ~mne.io.Raw object as input and returns the
requested number of projectors for magnetometers, gradiometers, and EEG
channels (default is two projectors for each channel type). This time, we'll
pass no_proj parameter (so we get back only the new EOG projectors, not
also the existing projectors in the ~mne.io.Raw object), and we'll
ignore the events array by assigning it to _ (the conventional way of
handling unwanted return elements in Python).
Step18: Just like with the empty-room and ECG projectors, we can visualize the scalp
distribution
Step19: Now we repeat the plot from above (with empty room and ECG projectors) and
compare it to a plot with empty room, ECG, and EOG projectors, to see how
well the ocular artifacts have been repaired
Step20: Notice that the small peaks in the first two magnetometer channels (MEG
1411 and MEG 1421) that occur at the same time as the large EEG
deflections have also been removed.
Choosing the number of projectors
In the examples above, we used 3 projectors (all magnetometer) to capture
empty room noise, and saw how projectors computed for the gradiometers failed
to capture global patterns (and thus we discarded the gradiometer
projectors). Then we computed 3 projectors (1 for each channel type) to
capture the heartbeat artifact, and 3 more to capture the ocular artifact.
How did we choose these numbers? The short answer is "based on experience" —
knowing how heartbeat artifacts typically manifest across the sensor array
allows us to recognize them when we see them, and recognize when additional
projectors are capturing something other than a heartbeat artifact (and
thus may be removing brain signal and should be discarded).
Visualizing SSP sensor-space bias via signal reconstruction
.. sidebar | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.preprocessing import (create_eog_epochs, create_ecg_epochs,
compute_proj_ecg, compute_proj_eog)
Explanation: Repairing artifacts with SSP
This tutorial covers the basics of signal-space projection (SSP) and shows
how SSP can be used for artifact repair; extended examples illustrate use
of SSP for environmental noise reduction, and for repair of ocular and
heartbeat artifacts.
We begin as always by importing the necessary Python modules. To save ourselves
from repeatedly typing mne.preprocessing we'll directly import a handful of
functions from that submodule:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Before applying SSP (or any artifact repair strategy), be sure to observe
the artifacts in your data to make sure you choose the right repair tool.
Sometimes the right tool is no tool at all — if the artifacts are small
enough you may not even need to repair them to get good analysis results.
See `tut-artifact-overview` for guidance on detecting and
visualizing various types of artifact.</p></div>
What is SSP?
Signal-space projection (SSP) :footcite:UusitaloIlmoniemi1997 is a
technique for removing noise from EEG
and MEG signals by :term:projecting <projector> the signal onto a
lower-dimensional subspace. The subspace is chosen by calculating the average
pattern across sensors when the noise is present, treating that pattern as
a "direction" in the sensor space, and constructing the subspace to be
orthogonal to the noise direction (for a detailed walk-through of projection
see tut-projectors-background).
The most common use of SSP is to remove noise from MEG signals when the noise
comes from environmental sources (sources outside the subject's body and the
MEG system, such as the electromagnetic fields from nearby electrical
equipment) and when that noise is stationary (doesn't change much over the
duration of the recording). However, SSP can also be used to remove
biological artifacts such as heartbeat (ECG) and eye movement (EOG)
artifacts. Examples of each of these are given below.
Example: Environmental noise reduction from empty-room recordings
The example data <sample-dataset> was recorded on a Neuromag system,
which stores SSP projectors for environmental noise removal in the system
configuration (so that reasonably clean raw data can be viewed in real-time
during acquisition). For this reason, all the ~mne.io.Raw data in
the example dataset already includes SSP projectors, which are noted in the
output when loading the data:
End of explanation
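As a side note, the projection itself is plain linear algebra: given a unit-norm noise pattern across the sensors, the projection matrix is the identity minus the outer product of that pattern with itself. The tiny NumPy sketch below uses a made-up 3-sensor pattern purely for illustration; it is not part of the MNE tutorial.
u = np.array([1., 1., 1.]) / np.sqrt(3)   # hypothetical noise pattern across 3 sensors
P = np.eye(3) - np.outer(u, u)            # projects onto the subspace orthogonal to u
sample = np.array([2., -1., 0.5])         # one time sample across the 3 sensors
cleaned = P @ sample                      # the component along u has been removed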
system_projs = raw.info['projs']
raw.del_proj()
empty_room_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'ernoise_raw.fif')
empty_room_raw = mne.io.read_raw_fif(empty_room_file)
Explanation: The example data <sample-dataset> also includes an "empty room"
recording taken the same day as the recording of the subject. This will
provide a more accurate estimate of environmental noise than the projectors
stored with the system (which are typically generated during annual
maintenance and tuning). Since we have this subject-specific empty-room
recording, we'll create our own projectors from it and discard the
system-provided SSP projectors (saving them first, for later comparison with
the custom ones):
End of explanation
empty_room_raw.del_proj()
Explanation: Notice that the empty room recording itself has the system-provided SSP
projectors in it — we'll remove those from the empty room file too.
End of explanation
for average in (False, True):
empty_room_raw.plot_psd(average=average, dB=False, xscale='log')
Explanation: Visualizing the empty-room noise
Let's take a look at the spectrum of the empty room noise. We can view an
individual spectrum for each sensor, or an average (with confidence band)
across sensors:
End of explanation
empty_room_projs = mne.compute_proj_raw(empty_room_raw, n_grad=3, n_mag=3)
mne.viz.plot_projs_topomap(empty_room_projs, colorbar=True, vlim='joint',
info=empty_room_raw.info)
Explanation: Creating the empty-room projectors
We create the SSP vectors using ~mne.compute_proj_raw, and control
the number of projectors with parameters n_grad and n_mag. Once
created, the field pattern of the projectors can be easily visualized with
~mne.viz.plot_projs_topomap. We include the parameter
vlim='joint' so that the colormap is computed jointly for all projectors
of a given channel type; this makes it easier to compare their relative
smoothness. Note that for the function to know the types of channels in a
projector, you must also provide the corresponding ~mne.Info object:
End of explanation
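As an aside not covered in the tutorial, projectors like these can be written to disk for later reuse; the filename below is only an example (MNE conventionally expects names ending in -proj.fif).
# optional: save the subject-specific empty-room projectors for reuse later
mne.write_proj('sample-empty-room-proj.fif', empty_room_projs)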
fig, axs = plt.subplots(2, 3)
for idx, _projs in enumerate([system_projs, empty_room_projs[3:]]):
mne.viz.plot_projs_topomap(_projs, axes=axs[idx], colorbar=True,
vlim='joint', info=empty_room_raw.info)
Explanation: Notice that the gradiometer-based projectors seem to reflect problems with
individual sensor units rather than a global noise source (indeed, planar
gradiometers are much less sensitive to distant sources). This is the reason
that the system-provided noise projectors are computed only for
magnetometers. Comparing the system-provided projectors to the
subject-specific ones, we can see they are reasonably similar (though in a
different order) and the left-right component seems to have changed
polarity.
End of explanation
mags = mne.pick_types(raw.info, meg='mag')
for title, projs in [('system', system_projs),
('subject-specific', empty_room_projs[3:])]:
raw.add_proj(projs, remove_existing=True)
fig = raw.plot(proj=True, order=mags, duration=1, n_channels=2)
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} projectors'.format(title), size='xx-large', weight='bold')
Explanation: Visualizing how projectors affect the signal
We could visualize the different effects these have on the data by applying
each set of projectors to different copies of the ~mne.io.Raw object
using ~mne.io.Raw.apply_proj. However, the ~mne.io.Raw.plot
method has a proj parameter that allows us to temporarily apply
projectors while plotting, so we can use this to visualize the difference
without needing to copy the data. Because the projectors are so similar, we
need to zoom in pretty close on the data to see any differences:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1}
# NOTE: appropriate rejection criteria are highly data-dependent
reject = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
# time range where we expect to see the auditory N100: 50-150 ms post-stimulus
times = np.linspace(0.05, 0.15, 5)
epochs = mne.Epochs(raw, events, event_id, proj='delayed', reject=reject)
fig = epochs.average().plot_topomap(times, proj='interactive')
Explanation: The effect is sometimes easier to see on averaged data. Here we use an
interactive feature of mne.Evoked.plot_topomap to turn projectors on
and off to see the effect on the data. Of course, the interactivity won't
work on the tutorial website, but you can download the tutorial and try it
locally:
End of explanation
# pick some channels that clearly show heartbeats and blinks
regexp = r'(MEG [12][45][123]1|EEG 00.)'
artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
Explanation: Plotting the ERP/F using evoked.plot() or evoked.plot_joint() with
and without projectors applied can also be informative, as can plotting with
proj='reconstruct', which can reduce the signal bias introduced by
projections (see tut-artifact-ssp-reconstruction below).
Example: EOG and ECG artifact repair
Visualizing the artifacts
As mentioned in the ICA tutorial <tut-artifact-ica>, an important
first step is visualizing the artifacts you want to repair. Here they are in
the raw data:
End of explanation
ecg_evoked = create_ecg_epochs(raw).average()
ecg_evoked.plot_joint()
Explanation: Repairing ECG artifacts with SSP
MNE-Python provides several functions for detecting and removing heartbeats
from EEG and MEG data. As we saw in tut-artifact-overview,
~mne.preprocessing.create_ecg_epochs can be used to both detect and
extract heartbeat artifacts into an ~mne.Epochs object, which can
be used to visualize how the heartbeat artifacts manifest across the sensors:
End of explanation
ecg_evoked.apply_baseline((None, None))
ecg_evoked.plot_joint()
Explanation: Looks like the EEG channels are pretty spread out; let's baseline-correct and
plot again:
End of explanation
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None)
Explanation: To compute SSP projectors for the heartbeat artifact, you can use
~mne.preprocessing.compute_proj_ecg, which takes a
~mne.io.Raw object as input and returns the requested number of
projectors for magnetometers, gradiometers, and EEG channels (default is two
projectors for each channel type).
~mne.preprocessing.compute_proj_ecg also returns an :term:events
array containing the sample numbers corresponding to the peak of the
R wave <https://en.wikipedia.org/wiki/QRS_complex>__ of each detected
heartbeat.
End of explanation
ecg_projs = projs[3:]
print(ecg_projs)
Explanation: The first line of output tells us that
~mne.preprocessing.compute_proj_ecg found three existing projectors
already in the ~mne.io.Raw object, and will include those in the
list of projectors that it returns (appending the new ECG projectors to the
end of the list). If you don't want that, you can change that behavior with
the boolean no_proj parameter. Since we've already run the computation,
we can just as easily separate out the ECG projectors by indexing the list of
projectors:
End of explanation
mne.viz.plot_projs_topomap(ecg_projs, info=raw.info)
Explanation: Just like with the empty-room projectors, we can visualize the scalp
distribution:
End of explanation
raw.del_proj()
for title, proj in [('Without', empty_room_projs), ('With', ecg_projs)]:
raw.add_proj(proj, remove_existing=False)
fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} ECG projectors'.format(title), size='xx-large',
weight='bold')
Explanation: Since no dedicated ECG sensor channel was detected in the
~mne.io.Raw object, by default
~mne.preprocessing.compute_proj_ecg used the magnetometers to
estimate the ECG signal (as stated on the third line of output, above). You
can also supply the ch_name parameter to restrict which channel to use
for ECG artifact detection; this is most useful when you had an ECG sensor
but it is not labeled as such in the ~mne.io.Raw file.
The next few lines of the output describe the filter used to isolate ECG
events. The default settings are usually adequate, but the filter can be
customized via the parameters ecg_l_freq, ecg_h_freq, and
filter_length (see the documentation of
~mne.preprocessing.compute_proj_ecg for details).
.. TODO what are the cases where you might need to customize the ECG filter?
infants? Heart murmur?
Once the ECG events have been identified,
~mne.preprocessing.compute_proj_ecg will also filter the data
channels before extracting epochs around each heartbeat, using the parameter
values given in l_freq, h_freq, filter_length, filter_method,
and iir_params. Here again, the default parameter values are usually
adequate.
.. TODO should advice for filtering here be the same as advice for filtering
raw data generally? (e.g., keep high-pass very low to avoid peak shifts?
what if your raw data is already filtered?)
By default, the filtered epochs will be averaged together
before the projection is computed; this can be controlled with the boolean
average parameter. In general this improves the signal-to-noise (where
"signal" here is our artifact!) ratio because the artifact temporal waveform
is fairly similar across epochs and well time locked to the detected events.
To get a sense of how the heartbeat affects the signal at each sensor, you
can plot the data with and without the ECG projectors:
End of explanation
eog_evoked = create_eog_epochs(raw).average()
eog_evoked.apply_baseline((None, None))
eog_evoked.plot_joint()
Explanation: Finally, note that above we passed reject=None to the
~mne.preprocessing.compute_proj_ecg function, meaning that all
detected ECG epochs would be used when computing the projectors (regardless
of signal quality in the data sensors during those epochs). The default
behavior is to reject epochs based on signal amplitude: epochs with
peak-to-peak amplitudes exceeding 50 µV in EEG channels, 250 µV in EOG
channels, 2000 fT/cm in gradiometer channels, or 3000 fT in magnetometer
channels. You can change these thresholds by passing a dictionary with keys
eeg, eog, mag, and grad (though be sure to pass the threshold
values in volts, teslas, or teslas/meter). Generally, it is a good idea to
reject such epochs when computing the ECG projectors (since presumably the
high-amplitude fluctuations in the channels are noise, not reflective of
brain activity); passing reject=None above was done simply to avoid the
dozens of extra lines of output (enumerating which sensor(s) were responsible
for each rejected epoch) from cluttering up the tutorial.
<div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.compute_proj_ecg` has a similar parameter
``flat`` for specifying the *minimum* acceptable peak-to-peak amplitude
for each channel type.</p></div>
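For example, a call with explicit thresholds might look like the following; the values are purely illustrative and should be tuned to your own data.
# illustrative custom rejection thresholds, in SI units (V, T, T/m)
custom_reject = dict(grad=2000e-13, mag=3000e-15, eeg=50e-6, eog=250e-6)
ecg_projs_strict, _ = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1,
                                       reject=custom_reject)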
While ~mne.preprocessing.compute_proj_ecg conveniently combines
several operations into a single function, MNE-Python also provides functions
for performing each part of the process. Specifically:
mne.preprocessing.find_ecg_events for detecting heartbeats in a
~mne.io.Raw object and returning a corresponding :term:events
array
mne.preprocessing.create_ecg_epochs for detecting heartbeats in a
~mne.io.Raw object and returning an ~mne.Epochs object
mne.compute_proj_epochs for creating projector(s) from any
~mne.Epochs object
See the documentation of each function for further details.
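A rough sketch of that step-by-step route is shown below; it is not executed in this tutorial and the projector counts are arbitrary.
# sketch: build ECG projectors manually instead of via compute_proj_ecg
ecg_epochs = create_ecg_epochs(raw, reject=None)
manual_ecg_projs = mne.compute_proj_epochs(ecg_epochs, n_grad=1, n_mag=1, n_eeg=1)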
Repairing EOG artifacts with SSP
Once again let's visualize our artifact before trying to repair it. We've
seen above the large deflections in frontal EEG channels in the raw data;
here is how the ocular artifact manifests across all the sensors:
End of explanation
eog_projs, _ = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None,
no_proj=True)
Explanation: Just like we did with the heartbeat artifact, we can compute SSP projectors
for the ocular artifact using ~mne.preprocessing.compute_proj_eog,
which again takes a ~mne.io.Raw object as input and returns the
requested number of projectors for magnetometers, gradiometers, and EEG
channels (default is two projectors for each channel type). This time, we'll
pass no_proj parameter (so we get back only the new EOG projectors, not
also the existing projectors in the ~mne.io.Raw object), and we'll
ignore the events array by assigning it to _ (the conventional way of
handling unwanted return elements in Python).
End of explanation
mne.viz.plot_projs_topomap(eog_projs, info=raw.info)
Explanation: Just like with the empty-room and ECG projectors, we can visualize the scalp
distribution:
End of explanation
for title in ('Without', 'With'):
if title == 'With':
raw.add_proj(eog_projs)
fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
fig.subplots_adjust(top=0.9) # make room for title
fig.suptitle('{} EOG projectors'.format(title), size='xx-large',
weight='bold')
Explanation: Now we repeat the plot from above (with empty room and ECG projectors) and
compare it to a plot with empty room, ECG, and EOG projectors, to see how
well the ocular artifacts have been repaired:
End of explanation
evoked = epochs.average()
# Apply the average ref first:
# It's how we typically view EEG data, and here we're really just interested
# in the effect of the EOG+ECG SSPs
evoked.del_proj().set_eeg_reference(projection=True).apply_proj()
evoked.add_proj(ecg_projs).add_proj(eog_projs)
fig, axes = plt.subplots(3, 3, figsize=(8, 6))
for ii in range(3):
axes[ii, 0].get_shared_y_axes().join(*axes[ii])
for pi, proj in enumerate((False, True, 'reconstruct')):
evoked.plot(proj=proj, axes=axes[:, pi], spatial_colors=True)
if pi == 0:
for ax in axes[:, pi]:
parts = ax.get_title().split('(')
ax.set(ylabel=f'{parts[0]} ({ax.get_ylabel()})\n'
f'{parts[1].replace(")", "")}')
axes[0, pi].set(title=f'proj={proj}')
axes[0, pi].texts = []
plt.setp(axes[1:, :].ravel(), title='')
plt.setp(axes[:, 1:].ravel(), ylabel='')
plt.setp(axes[:-1, :].ravel(), xlabel='')
mne.viz.tight_layout()
Explanation: Notice that the small peaks in the first two magnetometer channels (MEG
1411 and MEG 1421) that occur at the same time as the large EEG
deflections have also been removed.
Choosing the number of projectors
In the examples above, we used 3 projectors (all magnetometer) to capture
empty room noise, and saw how projectors computed for the gradiometers failed
to capture global patterns (and thus we discarded the gradiometer
projectors). Then we computed 3 projectors (1 for each channel type) to
capture the heartbeat artifact, and 3 more to capture the ocular artifact.
How did we choose these numbers? The short answer is "based on experience" —
knowing how heartbeat artifacts typically manifest across the sensor array
allows us to recognize them when we see them, and recognize when additional
projectors are capturing something other than a heartbeat artifact (and
thus may be removing brain signal and should be discarded).
Visualizing SSP sensor-space bias via signal reconstruction
.. sidebar:: SSP reconstruction
Internally, the reconstruction is performed by effectively using a
minimum-norm source localization to a spherical source space with the
projections accounted for, and then projecting the source-space data
back out to sensor space.
Because SSP performs an orthogonal projection, any spatial component in the
data that is not perfectly orthogonal to the SSP spatial direction(s) will
have its overall amplitude reduced by the projection operation. In other
words, SSP typically introduces some amount of amplitude reduction bias in
the sensor space data.
When performing source localization of M/EEG data, these projections are
properly taken into account by being applied not just to the M/EEG data
but also to the forward solution, and hence SSP should not bias the estimated
source amplitudes. However, for sensor space analyses, it can be useful to
visualize the extent to which SSP projection has biased the data. This can be
explored by using proj='reconstruct' in evoked plotting functions, for
example via evoked.plot() <mne.Evoked.plot>:
End of explanation |
6,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial setup
Jump_to lesson 9 video
Step1: Annealing
We define two new callbacks
Step2: Let's start with a simple linear schedule going from start to end. It returns a function that takes a pos argument (going from 0 to 1) such that this function goes from start (at pos=0) to end (at pos=1) in a linear fashion.
Jump_to lesson 9 video
Step3: We can refactor this with a decorator.
Jump_to lesson 9 video
Step4: And here are other scheduler functions
Step5: Jump_to lesson 9 video
Step6: In practice, we'll often want to combine different schedulers, the following function does that
Step7: Here is an example
Step8: We can use it for training quite easily...
Step9: ... then check with our recorder if the learning rate followed the right schedule.
Step10: Export | Python Code:
x_train,y_train,x_valid,y_valid = get_data()
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
#export
def create_learner(model_func, loss_func, data):
return Learner(*model_func(data), loss_func, data)
learn = create_learner(get_model, loss_func, data)
run = Runner([AvgStatsCallback([accuracy])])
run.fit(3, learn)
learn = create_learner(partial(get_model, lr=0.3), loss_func, data)
run = Runner([AvgStatsCallback([accuracy])])
run.fit(3, learn)
#export
def get_model_func(lr=0.5): return partial(get_model, lr=lr)
Explanation: Initial setup
Jump_to lesson 9 video
End of explanation
#export
class Recorder(Callback):
def begin_fit(self): self.lrs,self.losses = [],[]
def after_batch(self):
if not self.in_train: return
self.lrs.append(self.opt.param_groups[-1]['lr'])
self.losses.append(self.loss.detach().cpu())
def plot_lr (self): plt.plot(self.lrs)
def plot_loss(self): plt.plot(self.losses)
class ParamScheduler(Callback):
_order=1
def __init__(self, pname, sched_func): self.pname,self.sched_func = pname,sched_func
def set_param(self):
for pg in self.opt.param_groups:
pg[self.pname] = self.sched_func(self.n_epochs/self.epochs)
def begin_batch(self):
if self.in_train: self.set_param()
Explanation: Annealing
We define two new callbacks: the Recorder to keep track of the loss and our scheduled learning rate, and a ParamScheduler that can schedule any hyperparameter as long as it's registered in the state_dict of the optimizer.
Jump_to lesson 9 video
End of explanation
def sched_lin(start, end):
def _inner(start, end, pos): return start + pos*(end-start)
return partial(_inner, start, end)
Explanation: Let's start with a simple linear schedule going from start to end. It returns a function that takes a pos argument (going from 0 to 1) such that this function goes from start (at pos=0) to end (at pos=1) in a linear fashion.
Jump_to lesson 9 video
End of explanation
#export
def annealer(f):
def _inner(start, end): return partial(f, start, end)
return _inner
@annealer
def sched_lin(start, end, pos): return start + pos*(end-start)
# shift-tab works too, in Jupyter!
# sched_lin()
f = sched_lin(1,2)
f(0.3)
Explanation: We can refactor this with a decorator.
Jump_to lesson 9 video
End of explanation
#export
@annealer
def sched_cos(start, end, pos): return start + (1 + math.cos(math.pi*(1-pos))) * (end-start) / 2
@annealer
def sched_no(start, end, pos): return start
@annealer
def sched_exp(start, end, pos): return start * (end/start) ** pos
def cos_1cycle_anneal(start, high, end):
return [sched_cos(start, high), sched_cos(high, end)]
#This monkey-patch is there to be able to plot tensors
torch.Tensor.ndim = property(lambda x: len(x.shape))
Explanation: And here are other scheduler functions:
End of explanation
annealings = "NO LINEAR COS EXP".split()
a = torch.arange(0, 100)
p = torch.linspace(0.01,1,100)
fns = [sched_no, sched_lin, sched_cos, sched_exp]
for fn, t in zip(fns, annealings):
f = fn(2, 1e-2)
plt.plot(a, [f(o) for o in p], label=t)
plt.legend();
Explanation: Jump_to lesson 9 video
End of explanation
#export
def combine_scheds(pcts, scheds):
assert sum(pcts) == 1.
pcts = tensor([0] + listify(pcts))
assert torch.all(pcts >= 0)
pcts = torch.cumsum(pcts, 0)
def _inner(pos):
idx = (pos >= pcts).nonzero().max()
if idx == len(pcts) - 1: idx = idx - 1  # when pos == 1., fall back to the final scheduler
actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
return scheds[idx](actual_pos)
return _inner
Explanation: In practice, we'll often want to combine different schedulers, the following function does that: it uses scheds[i] for pcts[i] of the training.
End of explanation
sched = combine_scheds([0.3, 0.7], [sched_cos(0.3, 0.6), sched_cos(0.6, 0.2)])
plt.plot(a, [sched(o) for o in p])
Explanation: Here is an example: use 30% of the budget to go from 0.3 to 0.6 following a cosine, then the last 70% of the budget to go from 0.6 to 0.2, still following a cosine.
End of explanation
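The cos_1cycle_anneal helper defined earlier packages the classic 1cycle shape, so it combines naturally with combine_scheds; the values below are illustrative.
# sketch: a 1cycle-style schedule built from the helper above
one_cycle = combine_scheds([0.3, 0.7], cos_1cycle_anneal(0.1, 0.5, 0.05))
plt.plot(a, [one_cycle(o) for o in p])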
cbfs = [Recorder,
partial(AvgStatsCallback,accuracy),
partial(ParamScheduler, 'lr', sched)]
learn = create_learner(get_model_func(0.3), loss_func, data)
run = Runner(cb_funcs=cbfs)
run.fit(3, learn)
Explanation: We can use it for training quite easily...
End of explanation
run.recorder.plot_lr()
run.recorder.plot_loss()
Explanation: ... then check with our recorder if the learning rate followed the right schedule.
End of explanation
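As an aside, ParamScheduler is not tied to the learning rate: any key present in the optimizer's param_groups can be annealed the same way. The line below is a speculative variant, assuming the underlying optimizer is torch.optim.SGD, whose param groups include a 'momentum' entry.
# sketch: additionally anneal momentum with a cosine schedule
mom_cbfs = cbfs + [partial(ParamScheduler, 'momentum', sched_cos(0.95, 0.85))]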
!./notebook2script.py 05_anneal.ipynb
Explanation: Export
End of explanation |
6,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brax
Step1: Brax Config
Here's a brax config that defines a bouncy ball
Step2: We visualize this system config like so
Step3: Brax State
$\text{QP}$, brax's dynamic state, is a structure with the following fields
Step4: Brax Step Function
Let's observe $\text{step}(\text{config}, \text{qp}_t)$ with a few different variants of $\text{config}$ and $\text{qp}$
Step5: Joints
Joints constrain the motion of bodies so that they move in tandem
Step6: Here is our system at rest
Step7: Let's observe $\text{step}(\text{config}, \text{qp}_t)$ by smacking the bottom ball with an initial impulse, simulating a pendulum swing.
Step8: Actuators
Actuators provide dynamic input to the system during every physics step. They provide control parameters for users to manipulate the system interactively via the $\text{act}$ parameter.
Step9: Let's observe $\text{step}(\text{config}, \text{qp}_t, \text{act})$ by raising the middle ball to a desired target angle | Python Code:
#@title Colab setup and imports
from matplotlib.lines import Line2D
from matplotlib.patches import Circle
import matplotlib.pyplot as plt
import numpy as np
try:
import brax
except ImportError:
from IPython.display import clear_output
!pip install git+https://github.com/google/brax.git@main
clear_output()
import brax
Explanation: Brax: a differentiable physics engine
Brax simulates physical systems made up of rigid bodies, joints, and actuators. Brax provides the function:
$$
\text{qp}_{t+1} = \text{step}(\text{system}, \text{qp}_t, \text{act})
$$
where:
* $\text{system}$ is the static description of the physical system: each body in the world, its weight and size, and so on
* $\text{qp}_t$ is the dynamic state of the system at time $t$: each body's position, rotation, velocity, and angular velocity
* $\text{act}$ is dynamic input to the system in the form of motor actuation
Brax simulations are differentiable: the gradient $\nabla \text{step}$ can be used for efficient trajectory optimization. But Brax is also well-suited to derivative-free optimization methods such as evolutionary strategy or reinforcement learning.
Let's review how $\text{system}$, $\text{qp}_t$, and $\text{act}$ are used:
End of explanation
#@title A bouncy ball scene
bouncy_ball = brax.Config(dt=0.05, substeps=20, dynamics_mode='pbd')
# ground is a frozen (immovable) infinite plane
ground = bouncy_ball.bodies.add(name='ground')
ground.frozen.all = True
plane = ground.colliders.add().plane
plane.SetInParent() # for setting an empty oneof
# ball weighs 1kg, has equal rotational inertia along all axes, is 1m long, and
# has an initial rotation of identity (w=1,x=0,y=0,z=0) quaternion
ball = bouncy_ball.bodies.add(name='ball', mass=1)
cap = ball.colliders.add().capsule
cap.radius, cap.length = 0.5, 1
# gravity is -9.8 m/s^2 in z dimension
bouncy_ball.gravity.z = -9.8
Explanation: Brax Config
Here's a brax config that defines a bouncy ball:
End of explanation
def draw_system(ax, pos, alpha=1):
for i, p in enumerate(pos):
ax.add_patch(Circle(xy=(p[0], p[2]), radius=cap.radius, fill=False, color=(0, 0, 0, alpha)))
if i < len(pos) - 1:
pn = pos[i + 1]
ax.add_line(Line2D([p[0], pn[0]], [p[2], pn[2]], color=(1, 0, 0, alpha)))
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
draw_system(ax, [[0, 0, 0.5]])
plt.title('ball at rest')
plt.show()
Explanation: We visualize this system config like so:
End of explanation
qp = brax.QP(
# position of each body in 3d (z is up, right-hand coordinates)
pos = np.array([[0., 0., 0.], # ground
[0., 0., 3.]]), # ball is 3m up in the air
# velocity of each body in 3d
vel = np.array([[0., 0., 0.], # ground
[0., 0., 0.]]), # ball
# rotation about center of body, as a quaternion (w, x, y, z)
rot = np.array([[1., 0., 0., 0.], # ground
[1., 0., 0., 0.]]), # ball
# angular velocity about center of body in 3d
ang = np.array([[0., 0., 0.], # ground
[0., 0., 0.]]) # ball
)
Explanation: Brax State
$\text{QP}$, brax's dynamic state, is a structure with the following fields:
End of explanation
#@title Simulating the bouncy ball config { run: "auto"}
bouncy_ball.elasticity = 0.85 #@param { type:"slider", min: 0, max: 1.0, step:0.05 }
ball_velocity = 1 #@param { type:"slider", min:-5, max:5, step: 0.5 }
sys = brax.System(bouncy_ball)
# provide an initial velocity to the ball
qp.vel[1, 0] = ball_velocity
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
for i in range(100):
draw_system(ax, qp.pos[1:], i / 100.)
qp, _ = sys.step(qp, [])
plt.title('ball in motion')
plt.show()
Explanation: Brax Step Function
Let's observe $\text{step}(\text{config}, \text{qp}_t)$ with a few different variants of $\text{config}$ and $\text{qp}$:
End of explanation
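Because brax is built on JAX, the step above is also differentiable. The sketch below illustrates taking a gradient through a single step; it is not part of this tutorial and assumes the QP fields can be JAX arrays and that the empty act list is accepted under tracing (details may vary with the brax version).
import jax
import jax.numpy as jnp
def height_after_step(vz):
    # same bouncy-ball state as above, built from jax arrays so it can be traced
    qp0 = brax.QP(
        pos=jnp.array([[0., 0., 0.], [0., 0., 3.]]),
        vel=jnp.zeros((2, 3)).at[1, 2].set(vz),
        rot=jnp.array([[1., 0., 0., 0.], [1., 0., 0., 0.]]),
        ang=jnp.zeros((2, 3)))
    qp1, _ = sys.step(qp0, [])
    return qp1.pos[1, 2]
print(jax.grad(height_after_step)(0.0))  # d(height after one step) / d(initial vertical velocity)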
#@title A pendulum config for Brax
pendulum = brax.Config(dt=0.01, substeps=20, dynamics_mode='pbd')
# start with a frozen anchor at the root of the pendulum
anchor = pendulum.bodies.add(name='anchor', mass=1.0)
anchor.frozen.all = True
# now add a middle and bottom ball to the pendulum
pendulum.bodies.append(ball)
pendulum.bodies.append(ball)
pendulum.bodies[1].name = 'middle'
pendulum.bodies[2].name = 'bottom'
# connect anchor to middle
joint = pendulum.joints.add(name='joint1', parent='anchor',
child='middle', angular_damping=20)
joint.angle_limit.add(min = -180, max = 180)
joint.child_offset.z = 1.5
joint.rotation.z = 90
# connect middle to bottom
pendulum.joints.append(joint)
pendulum.joints[1].name = 'joint2'
pendulum.joints[1].parent = 'middle'
pendulum.joints[1].child = 'bottom'
# gravity is -9.8 m/s^2 in z dimension
pendulum.gravity.z = -9.8
Explanation: Joints
Joints constrain the motion of bodies so that they move in tandem:
End of explanation
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
# rather than building our own qp like last time, we ask brax.System to
# generate a default one for us, which is handy
qp = brax.System(pendulum).default_qp()
draw_system(ax, qp.pos)
plt.title('pendulum at rest')
plt.show()
Explanation: Here is our system at rest:
End of explanation
#@title Simulating the pendulum config { run: "auto"}
ball_impulse = 8 #@param { type:"slider", min:-15, max:15, step: 0.5 }
sys = brax.System(pendulum)
qp = sys.default_qp()
# provide an initial velocity to the ball
qp.vel[2, 0] = ball_impulse
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
for i in range(50):
draw_system(ax, qp.pos, i / 50.)
qp, _ = sys.step(qp, [])
plt.title('pendulum in motion')
plt.show()
Explanation: Let's observe $\text{step}(\text{config}, \text{qp}_t)$ by smacking the bottom ball with an initial impulse, simulating a pendulum swing.
End of explanation
#@title A single actuator on the pendulum
actuated_pendulum = brax.Config()
actuated_pendulum.CopyFrom(pendulum)
# actuating the joint connecting the anchor and middle
angle = actuated_pendulum.actuators.add(name='actuator', joint='joint1',
strength=100).angle
angle.SetInParent() # for setting an empty oneof
Explanation: Actuators
Actuators provide dynamic input to the system during every physics step. They provide control parameters for users to manipulate the system interactively via the $\text{act}$ parameter.
End of explanation
#@title Simulating the actuated pendulum config { run: "auto"}
target_angle = 45 #@param { type:"slider", min:-90, max:90, step: 1 }
sys = brax.System(actuated_pendulum)
qp = sys.default_qp()
act = np.array([target_angle])
_, ax = plt.subplots()
plt.xlim([-3, 3])
plt.ylim([0, 4])
for i in range(100):
draw_system(ax, qp.pos, i / 100.)
qp, _ = sys.step(qp, act)
plt.title('actuating a pendulum joint')
plt.show()
Explanation: Let's observe $\text{step}(\text{config}, \text{qp}_t, \text{act})$ by raising the middle ball to a desired target angle:
End of explanation |
6,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simplified ZZ analysis
This is based on the ZZ analysis in the ATLAS outreach paper, but including all possible pairs of muons rather than selecting the combination closest to the Z mass.
This time we will use ROOT histograms instead of Matplotlib
Step1: Use some Monte Carlo ZZ events for testing before running on real data
Step2: Define a class and some functions that we can use for extracting the information we want from the events
Step3: Now we can look for events with exactly four "good" leptons (those with a big enough pT) and combine them in pairs to make Z candidates | Python Code:
from ROOT import TChain, TH1F, TLorentzVector, TCanvas
Explanation: Simplified ZZ analysis
This is based on the ZZ analysis in the ATLAS outreach paper, but including all possible pairs of muons rather than selecting the combination closest to the Z mass.
This time we will use ROOT histograms instead of Matplotlib:
End of explanation
data = TChain("mini"); # "mini" is the name of the TTree stored in the data files
data.Add("http://atlas-opendata.web.cern.ch/atlas-opendata/release/samples/MC/mc_105986.ZZ.root")
#data.Add("http://atlas-opendata.web.cern.ch/atlas-opendata/release/samples/Data/DataMuons.root")
Explanation: Use some Monte Carlo ZZ events for testing before running on real data:
End of explanation
class Particle:
'''
Represents a particle with a known type, charge and four-momentum
'''
def __init__(self, four_momentum, pdg_code, charge):
self.four_momentum = four_momentum
self.typ = abs(pdg_code)
self.charge = charge
def leptons_from_event(event, pt_min=0.0):
'''
Gets list of leptons from an event, subject to an optional minimum pT cut.
'''
leptons = []
for i in range(event.lep_n):
pt = event.lep_pt[i]
if pt > pt_min: # only add lepton to output if it has enough pt
p = TLorentzVector()
p.SetPtEtaPhiE(pt, event.lep_eta[i], event.lep_phi[i], event.lep_E[i])
particle = Particle(p, event.lep_type[i], event.lep_charge[i])
leptons.append(particle)
return leptons
def pairs_from_leptons(leptons):
'''
Get list of four-momenta for all possible opposite-charge pairs.
'''
neg = []
pos = []
for lepton in leptons:
if lepton.charge > 0:
pos.append(lepton)
elif lepton.charge < 0:
neg.append(lepton)
else:
print("Warning: unexpected neutral particle")
pairs = []
for p in pos:
pp = p.four_momentum
for n in neg:
if p.typ == n.typ: # only combine if they are same type (e or mu)
pn = n.four_momentum
ptot = pp + pn
pairs.append(ptot)
return pairs
Explanation: Define a class and some functions that we can use for extracting the information we want from the events:
End of explanation
c1 = TCanvas("TheCanvas","Canvas for plotting histograms",800,600)
h1 = TH1F("h1","Dilepton mass",200,0,200)
num_events = data.GetEntries()
for event_num in range(1000): # loop over the events
data.GetEntry(event_num) # read the next event into memory
leptons = leptons_from_event(data,10000) # pt cut of 10 GeV
if len(leptons) == 4: # require exactly 4 "good" leptons
pairs = pairs_from_leptons(leptons)
for pair in pairs:
m = pair.M()/ 1000. # convert from MeV to GeV
h1.Fill(m)
h1.Draw('E')
c1.Draw()
Explanation: Now we can look for events with exactly four "good" leptons (those with a big enough pT) and combine them in pairs to make Z candidates:
End of explanation |
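For comparison with the outreach analysis mentioned at the top, which keeps only the combination closest to the Z mass, a possible selection helper is sketched below; it is illustrative and not taken from the original analysis.
Z_MASS = 91.1876  # GeV
def closest_pair_to_Z(pairs):
    '''Return the pair four-momentum whose invariant mass is closest to the Z mass, or None.'''
    if not pairs:
        return None
    return min(pairs, key=lambda pair: abs(pair.M()/1000. - Z_MASS))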
6,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CycleGAN, Image-to-Image Translation
In this notebook, we're going to define and train a CycleGAN to read in an image from a set $X$ and transform it so that it looks as if it belongs in set $Y$. Specifically, we'll look at a set of images of Yosemite national park taken either during the summer or winter. The seasons are our two domains!
The objective will be to train generators that learn to transform an image from domain $X$ into an image that looks like it came from domain $Y$ (and vice versa).
Some examples of image data in both sets are pictured below.
<img src='notebook_images/XY_season_images.png' width=50% />
Unpaired Training Data
These images do not come with labels, but CycleGANs give us a way to learn the mapping between one image domain and another using an unsupervised approach. A CycleGAN is designed for image-to-image translation and it learns from unpaired training data. This means that in order to train a generator to translate images from domain $X$ to domain $Y$, we do not have to have exact correspondences between individual images in those domains. For example, in the paper that introduced CycleGANs, the authors are able to translate between images of horses and zebras, even though there are no images of a zebra in exactly the same position as a horse or with exactly the same background, etc. Thus, CycleGANs enable learning a mapping from one domain $X$ to another domain $Y$ without having to find perfectly-matched, training pairs!
<img src='notebook_images/horse2zebra.jpg' width=50% />
CycleGAN and Notebook Structure
A CycleGAN is made of two types of networks
Step2: DataLoaders
The get_data_loader function returns training and test DataLoaders that can load data efficiently and in specified batches. The function has the following parameters
Step3: Display some Training Images
Below we provide a function imshow that reshape some given images and converts them to NumPy images so that they can be displayed by plt. This cell should display a grid that contains a batch of image data from set $X$.
Step4: Next, let's visualize a batch of images from set $Y$.
Step5: Pre-processing
Step7: Define the Model
A CycleGAN is made of two discriminator and two generator networks.
Discriminators
The discriminators, $D_X$ and $D_Y$, in this CycleGAN are convolutional neural networks that see an image and attempt to classify it as real or fake. In this case, real is indicated by an output close to 1 and fake as close to 0. The discriminators have the following architecture
Step8: Define the Discriminator Architecture
Your task is to fill in the __init__ function with the specified 5 layer conv net architecture. Both $D_X$ and $D_Y$ have the same architecture, so we only need to define one class, and later instantiate two discriminators.
It's recommended that you use a kernel size of 4x4 and use that to determine the correct stride and padding size for each layer. This Stanford resource may also help in determining stride and padding sizes.
Define your convolutional layers in __init__
Then fill in the forward behavior of the network
The forward function defines how an input image moves through the discriminator, and the most important thing is to pass it through your convolutional layers in order, with a ReLu activation function applied to all but the last layer.
You should not apply a sigmoid activation function to the output, here, and that is because we are planning on using a squared error loss for training. And you can read more about this loss function, later in the notebook.
Step10: Generators
The generators, G_XtoY and G_YtoX (sometimes called F), are made of an encoder, a conv net that is responsible for turning an image into a smaller feature representation, and a decoder, a transpose_conv net that is responsible for turning that representation into an transformed image. These generators, one from XtoY and one from YtoX, have the following architecture
Step12: Transpose Convolutional Helper Function
To define the generators, you're expected to use the above conv function, ResidualBlock class, and the below deconv helper function, which creates a transpose convolutional layer + an optional batchnorm layer.
Step14: Define the Generator Architecture
Complete the __init__ function with the specified 3 layer encoder convolutional net, a series of residual blocks (the number of which is given by n_res_blocks), and then a 3 layer decoder transpose convolutional net.
Then complete the forward function to define the forward behavior of the generators. Recall that the last layer has a tanh activation function.
Both $G_{XtoY}$ and $G_{YtoX}$ have the same architecture, so we only need to define one class, and later instantiate two generators.
Step16: Create the complete network
Using the classes you defined earlier, you can define the discriminators and generators necessary to create a complete CycleGAN. The given parameters should work for training.
First, create two discriminators, one for checking if $X$ sample images are real, and one for checking if $Y$ sample images are real. Then the generators. Instantiate two of them, one for transforming a painting into a realistic photo and one for transforming a photo into into a painting.
Step18: Check that you've implemented this correctly
The function create_model should return the two generator and two discriminator networks. After you've defined these discriminator and generator components, it's good practice to check your work. The easiest way to do this is to print out your model architecture and read through it to make sure the parameters are what you expected. The next cell will print out their architectures.
Step19: Discriminator and Generator Losses
Computing the discriminator and the generator losses are key to getting a CycleGAN to train.
<img src='notebook_images/CycleGAN_loss.png' width=90% height=90% />
Image from original paper by Jun-Yan Zhu et. al.
The CycleGAN contains two mapping functions $G
Step20: Define the Optimizers
Next, let's define how this model will update its weights. This, like the GANs you may have seen before, uses Adam optimizers for the discriminator and generator. It's again recommended that you take a look at the original, CycleGAN paper to get starting hyperparameter values.
Step21: Training a CycleGAN
When a CycleGAN trains, and sees one batch of real images from set $X$ and $Y$, it trains by performing the following steps
Step22: Tips on Training and Loss Patterns
A lot of experimentation goes into finding the best hyperparameters such that the generators and discriminators don't overpower each other. It's often a good starting point to look at existing papers to find what has worked in previous experiments, I'd recommend this DCGAN paper in addition to the original CycleGAN paper to see what worked for them. Then, you can try your own experiments based off of a good foundation.
Discriminator Losses
When you display the generator and discriminator losses you should see that there is always some discriminator loss; recall that we are trying to design a model that can generate good "fake" images. So, the ideal discriminator will not be able to tell the difference between real and fake images and, as such, will always have some loss. You should also see that $D_X$ and $D_Y$ are roughly at the same loss levels; if they are not, this indicates that your training is favoring one type of discriminator over the other, and you may need to look at biases in your models or data.
Generator Loss
The generator's loss should start significantly higher than the discriminator losses because it is accounting for the loss of both generators and weighted reconstruction errors. You should see this loss decrease a lot at the start of training because initial, generated images are often far-off from being good fakes. After some time it may level off; this is normal since the generator and discriminator are both improving as they train. If you see that the loss is jumping around a lot, over time, you may want to try decreasing your learning rates or changing your cycle consistency loss to be a little more/less weighted.
Step23: Evaluate the Result!
As you trained this model, you may have chosen to sample and save the results of your generated images after a certain number of training iterations. This gives you a way to see whether or not your Generators are creating good fake images. For example, the image below depicts real images in the $Y$ set, and the corresponding generated images during different points in the training process. You can see that the generator starts out creating very noisy, fake images, but begins to converge to better representations as it trains (though, not perfect).
<img src='notebook_images/sample-004000-summer2winter.png' width=50% />
Below, you've been given a helper function for displaying generated samples based on the passed in training iteration. | Python Code:
# loading in and transforming data
import os
import torch
from torch.utils.data import DataLoader
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
# visualizing data
import matplotlib.pyplot as plt
import numpy as np
import warnings
%matplotlib inline
Explanation: CycleGAN, Image-to-Image Translation
In this notebook, we're going to define and train a CycleGAN to read in an image from a set $X$ and transform it so that it looks as if it belongs in set $Y$. Specifically, we'll look at a set of images of Yosemite national park taken either during the summer or winter. The seasons are our two domains!
The objective will be to train generators that learn to transform an image from domain $X$ into an image that looks like it came from domain $Y$ (and vice versa).
Some examples of image data in both sets are pictured below.
<img src='notebook_images/XY_season_images.png' width=50% />
Unpaired Training Data
These images do not come with labels, but CycleGANs give us a way to learn the mapping between one image domain and another using an unsupervised approach. A CycleGAN is designed for image-to-image translation and it learns from unpaired training data. This means that in order to train a generator to translate images from domain $X$ to domain $Y$, we do not have to have exact correspondences between individual images in those domains. For example, in the paper that introduced CycleGANs, the authors are able to translate between images of horses and zebras, even though there are no images of a zebra in exactly the same position as a horse or with exactly the same background, etc. Thus, CycleGANs enable learning a mapping from one domain $X$ to another domain $Y$ without having to find perfectly-matched, training pairs!
<img src='notebook_images/horse2zebra.jpg' width=50% />
CycleGAN and Notebook Structure
A CycleGAN is made of two types of networks: discriminators, and generators. In this example, the discriminators are responsible for classifying images as real or fake (for both $X$ and $Y$ kinds of images). The generators are responsible for generating convincing, fake images for both kinds of images.
This notebook will detail the steps you should take to define and train such a CycleGAN.
You'll load in the image data using PyTorch's DataLoader class to efficiently read in images from a specified directory.
Then, you'll be tasked with defining the CycleGAN architecture according to provided specifications. You'll define the discriminator and the generator models.
You'll complete the training cycle by calculating the adversarial and cycle consistency losses for the generator and discriminator network and completing a number of training epochs. It's suggested that you enable GPU usage for training.
Finally, you'll evaluate your model by looking at the loss over time and looking at sample, generated images.
Load and Visualize the Data
We'll first load in and visualize the training data, importing the necessary libraries to do so.
If you are working locally, you'll need to download the data as a zip file by clicking here.
It may be named summer2winter-yosemite/ with a dash or an underscore, so take note, extract the data to your home directory and make sure the below image_dir matches. Then you can proceed with the following loading code.
End of explanation
def get_data_loader(image_type, image_dir='summer2winter-yosemite',
image_size=128, batch_size=16, num_workers=0):
"""Returns training and test data loaders for a given image type, either 'summer' or 'winter'.
These images will be resized to 128x128x3, by default, converted into Tensors, and normalized.
"""
# resize and normalize the images
transform = transforms.Compose([transforms.Resize(image_size), # resize to 128x128
transforms.ToTensor()])
# get training and test directories
image_path = './' + image_dir
train_path = os.path.join(image_path, image_type)
test_path = os.path.join(image_path, 'test_{}'.format(image_type))
# define datasets using ImageFolder
train_dataset = datasets.ImageFolder(train_path, transform)
test_dataset = datasets.ImageFolder(test_path, transform)
# create and return DataLoaders
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers)
return train_loader, test_loader
# Create train and test dataloaders for images from the two domains X and Y
# image_type = directory names for our data
dataloader_X, test_dataloader_X = get_data_loader(image_type='summer')
dataloader_Y, test_dataloader_Y = get_data_loader(image_type='winter')
Explanation: DataLoaders
The get_data_loader function returns training and test DataLoaders that can load data efficiently and in specified batches. The function has the following parameters:
* image_type: summer or winter, the names of the directories where the X and Y images are stored
* image_dir: name of the main image directory, which holds all training and test images
* image_size: resized, square image dimension (all images will be resized to this dim)
* batch_size: number of images in one batch of data
The test data is strictly for feeding to our generators, later on, so we can visualize some generated samples on fixed, test data.
You can see that this function is also responsible for making sure our images are of the right, square size (128x128x3) and converted into Tensor image types.
It's suggested that you use the default values of these parameters.
Note: If you are trying this code on a different set of data, you may get better results with larger image_size and batch_size parameters. If you change the batch_size, make sure that you create complete batches in the training loop otherwise you may get an error when trying to save sample data.
End of explanation
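If you do change those defaults, the call itself stays the same; the values below are only an example.
# example: larger images, smaller batches (illustrative settings)
big_dataloader_X, big_test_dataloader_X = get_data_loader(image_type='summer', image_size=256, batch_size=8)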
# helper imshow function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some images from X
dataiter = iter(dataloader_X)
# the "_" is a placeholder for no labels
images, _ = next(dataiter)
# show images
fig = plt.figure(figsize=(12, 8))
imshow(torchvision.utils.make_grid(images))
Explanation: Display some Training Images
Below we provide a function imshow that reshapes some given images and converts them to NumPy images so that they can be displayed by plt. This cell should display a grid that contains a batch of image data from set $X$.
End of explanation
# get some images from Y
dataiter = iter(dataloader_Y)
images, _ = next(dataiter)
# show images
fig = plt.figure(figsize=(12,8))
imshow(torchvision.utils.make_grid(images))
Explanation: Next, let's visualize a batch of images from set $Y$.
End of explanation
# current range
img = images[0]
print('Min: ', img.min())
print('Max: ', img.max())
# helper scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# scale from 0-1 to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
# scaled range
scaled_img = scale(img)
print('Scaled min: ', scaled_img.min())
print('Scaled max: ', scaled_img.max())
Explanation: Pre-processing: scaling from -1 to 1
We need to do a bit of pre-processing; we know that the output of our tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
End of explanation
import torch.nn as nn
import torch.nn.functional as F
# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
Creates a convolutional layer, with optional batch normalization.
layers = []
conv_layer = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
layers.append(conv_layer)
if batch_norm:
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
Explanation: Define the Model
A CycleGAN is made of two discriminator and two generator networks.
Discriminators
The discriminators, $D_X$ and $D_Y$, in this CycleGAN are convolutional neural networks that see an image and attempt to classify it as real or fake. In this case, real is indicated by an output close to 1 and fake as close to 0. The discriminators have the following architecture:
<img src='notebook_images/discriminator_layers.png' width=80% />
This network sees a 128x128x3 image, and passes it through 5 convolutional layers that downsample the image by a factor of 2. The first four convolutional layers have a BatchNorm and ReLu activation function applied to their output, and the last acts as a classification layer that outputs one value.
Convolutional Helper Function
To define the discriminators, you're expected to use the provided conv function, which creates a convolutional layer + an optional batch norm layer.
End of explanation
class Discriminator(nn.Module):
def __init__(self, conv_dim=64):
super(Discriminator, self).__init__()
# Define all convolutional layers
# Should accept an RGB image as input and output a single value
def forward(self, x):
# define feedforward behavior
return x
Explanation: Define the Discriminator Architecture
Your task is to fill in the __init__ function with the specified 5 layer conv net architecture. Both $D_X$ and $D_Y$ have the same architecture, so we only need to define one class, and later instantiate two discriminators.
It's recommended that you use a kernel size of 4x4 and use that to determine the correct stride and padding size for each layer. This Stanford resource may also help in determining stride and padding sizes.
Define your convolutional layers in __init__
Then fill in the forward behavior of the network
The forward function defines how an input image moves through the discriminator, and the most important thing is to pass it through your convolutional layers in order, with a ReLu activation function applied to all but the last layer.
You should not apply a sigmoid activation function to the output, here, and that is because we are planning on using a squared error loss for training. And you can read more about this loss function, later in the notebook.
End of explanation
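For reference, here is a hedged sketch of one filled-in discriminator that follows the 5-layer description above and reuses the conv helper; treat it as one reasonable solution rather than the reference one (conv_dim=64 and the 4x4 kernel are taken from the text, everything else is a choice).
# Sketch of one possible discriminator (illustrative only, not the official solution).
class DiscriminatorExample(nn.Module):
    def __init__(self, conv_dim=64):
        super(DiscriminatorExample, self).__init__()
        # 128x128x3 -> 64x64x(conv_dim), then keep halving the spatial size
        self.conv1 = conv(3, conv_dim, 4)
        self.conv2 = conv(conv_dim, conv_dim*2, 4)
        self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
        self.conv4 = conv(conv_dim*4, conv_dim*8, 4)
        # classification layer: outputs a single value, no batch norm
        self.conv5 = conv(conv_dim*8, 1, 4, stride=1, batch_norm=False)

    def forward(self, x):
        # ReLU on all but the last (classification) layer; no sigmoid, per the text
        out = F.relu(self.conv1(x))
        out = F.relu(self.conv2(out))
        out = F.relu(self.conv3(out))
        out = F.relu(self.conv4(out))
        return self.conv5(out)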
# residual block class
class ResidualBlock(nn.Module):
Defines a residual block.
This adds an input x to a convolutional layer (applied to x) with the same size input and output.
These blocks allow a model to learn an effective transformation from one domain to another.
def __init__(self, conv_dim):
super(ResidualBlock, self).__init__()
# conv_dim = number of inputs
# define two convolutional layers + batch normalization that will act as our residual function, F(x)
# layers should have the same shape input as output; I suggest a kernel_size of 3
def forward(self, x):
# apply a ReLU activation to the outputs of the first layer
# return a summed output, x + resnet_block(x)
return x
Explanation: Generators
The generators, G_XtoY and G_YtoX (sometimes called F), are made of an encoder, a conv net that is responsible for turning an image into a smaller feature representation, and a decoder, a transpose_conv net that is responsible for turning that representation into an transformed image. These generators, one from XtoY and one from YtoX, have the following architecture:
<img src='notebook_images/cyclegan_generator_ex.png' width=90% />
This network sees a 128x128x3 image, compresses it into a feature representation as it goes through three convolutional layers and reaches a series of residual blocks. It goes through a few (typically 6 or more) of these residual blocks, then it goes through three transpose convolutional layers (sometimes called de-conv layers) which upsample the output of the resnet blocks and create a new image!
Note that most of the convolutional and transpose-convolutional layers have BatchNorm and ReLu functions applied to their outputs with the exception of the final transpose convolutional layer, which has a tanh activation function applied to the output. Also, the residual blocks are made of convolutional and batch normalization layers, which we'll go over in more detail, next.
Residual Block Class
To define the generators, you're expected to define a ResidualBlock class which will help you connect the encoder and decoder portions of the generators. You might be wondering, what exactly is a Resnet block? It may sound familiar from something like ResNet50 for image classification, pictured below.
<img src='notebook_images/resnet_50.png' width=90%/>
ResNet blocks rely on connecting the output of one layer with the input of an earlier layer. The motivation for this structure is as follows: very deep neural networks can be difficult to train. Deeper networks are more likely to have vanishing or exploding gradients and, therefore, have trouble reaching convergence; batch normalization helps with this a bit. However, during training, we often see that deep networks respond with a kind of training degradation. Essentially, the training accuracy stops improving and gets saturated at some point during training. In the worst cases, deep models would see their training accuracy actually worsen over time!
One solution to this problem is to use Resnet blocks that allow us to learn so-called residual functions as they are applied to layer inputs. You can read more about this proposed architecture in the paper, Deep Residual Learning for Image Recognition by Kaiming He et. al, and the below image is from that paper.
<img src='notebook_images/resnet_block.png' width=40%/>
Residual Functions
Usually, when we create a deep learning model, the model (several layers with activations applied) is responsible for learning a mapping, M, from an input x to an output y.
M(x) = y (Equation 1)
Instead of learning a direct mapping from x to y, we can instead define a residual function
F(x) = M(x) - x
This looks at the difference between a mapping applied to x and the original input, x. F(x) is, typically, two convolutional layers + normalization layer and a ReLu in between. These convolutional layers should have the same number of inputs as outputs. This mapping can then be written as the following; a function of the residual function and the input x. The addition step creates a kind of loop that connects the input x to the output, y:
M(x) = F(x) + x (Equation 2) or
y = F(x) + x (Equation 3)
Optimizing a Residual Function
The idea is that it is easier to optimize this residual function F(x) than it is to optimize the original mapping M(x). Consider an example; what if we want y = x?
From our first, direct mapping equation, Equation 1, we could set M(x) = x but it is easier to solve the residual equation F(x) = 0, which, when plugged in to Equation 3, yields y = x.
Defining the ResidualBlock Class
To define the ResidualBlock class, we'll define residual functions (a series of layers), apply them to an input x and add them to that same input. This is defined just like any other neural network, with an __init__ function and the addition step in the forward function.
In our case, you'll want to define the residual block as:
* Two convolutional layers with the same size input and output
* Batch normalization applied to the outputs of the convolutional layers
* A ReLu function on the output of the first convolutional layer
Then, in the forward function, add the input x to this residual block. Feel free to use the helper conv function from above to create this block.
End of explanation
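As a hedged reference, one way the residual block could look, reusing the conv helper (kernel_size=3 with stride=1 and padding=1 is assumed so the spatial size is preserved):
# Sketch of one possible residual block (illustrative only).
class ResidualBlockExample(nn.Module):
    def __init__(self, conv_dim):
        super(ResidualBlockExample, self).__init__()
        # two conv + batch norm layers with the same input and output size
        self.conv_layer1 = conv(conv_dim, conv_dim, 3, stride=1, padding=1, batch_norm=True)
        self.conv_layer2 = conv(conv_dim, conv_dim, 3, stride=1, padding=1, batch_norm=True)

    def forward(self, x):
        # ReLU only on the first layer's output
        out = F.relu(self.conv_layer1(x))
        # skip connection: add the input back onto the residual function's output
        return x + self.conv_layer2(out)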
# helper deconv function
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
Creates a transpose convolutional layer, with optional batch normalization.
layers = []
# append transpose conv layer
layers.append(nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding, bias=False))
# optional batch norm layer
if batch_norm:
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
Explanation: Transpose Convolutional Helper Function
To define the generators, you're expected to use the above conv function, ResidualBlock class, and the below deconv helper function, which creates a transpose convolutional layer + an optional batchnorm layer.
End of explanation
class CycleGenerator(nn.Module):
def __init__(self, conv_dim=64, n_res_blocks=6):
super(CycleGenerator, self).__init__()
# 1. Define the encoder part of the generator
# 2. Define the resnet part of the generator
# 3. Define the decoder part of the generator
def forward(self, x):
Given an image x, returns a transformed image.
# define feedforward behavior, applying activations as necessary
return x
Explanation: Define the Generator Architecture
Complete the __init__ function with the specified 3 layer encoder convolutional net, a series of residual blocks (the number of which is given by n_res_blocks), and then a 3 layer decoder transpose convolutional net.
Then complete the forward function to define the forward behavior of the generators. Recall that the last layer has a tanh activation function.
Both $G_{XtoY}$ and $G_{YtoX}$ have the same architecture, so we only need to define one class, and later instantiate two generators.
End of explanation
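Again only as a hedged sketch, a generator that follows the encoder / residual-blocks / decoder description above (the layer depths and the final tanh come from the text; the rest is one reasonable choice, built on the example residual block sketched earlier):
# Sketch of one possible CycleGenerator (illustrative, not the reference solution).
class CycleGeneratorExample(nn.Module):
    def __init__(self, conv_dim=64, n_res_blocks=6):
        super(CycleGeneratorExample, self).__init__()
        # 1. encoder: downsample 128x128x3 -> 32x32x(conv_dim*4)
        self.conv1 = conv(3, conv_dim, 4)
        self.conv2 = conv(conv_dim, conv_dim*2, 4)
        self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
        # 2. residual blocks keep the feature map size fixed
        self.res_blocks = nn.Sequential(
            *[ResidualBlockExample(conv_dim*4) for _ in range(n_res_blocks)])
        # 3. decoder: upsample back to 128x128x3
        self.deconv1 = deconv(conv_dim*4, conv_dim*2, 4)
        self.deconv2 = deconv(conv_dim*2, conv_dim, 4)
        self.deconv3 = deconv(conv_dim, 3, 4, batch_norm=False)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = F.relu(self.conv2(out))
        out = F.relu(self.conv3(out))
        out = self.res_blocks(out)
        out = F.relu(self.deconv1(out))
        out = F.relu(self.deconv2(out))
        # tanh on the last layer so outputs land in [-1, 1]
        return torch.tanh(self.deconv3(out))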
def create_model(g_conv_dim=64, d_conv_dim=64, n_res_blocks=6):
Builds the generators and discriminators.
# Instantiate generators
G_XtoY =
G_YtoX =
# Instantiate discriminators
D_X =
D_Y =
# move models to GPU, if available
if torch.cuda.is_available():
device = torch.device("cuda:0")
G_XtoY.to(device)
G_YtoX.to(device)
D_X.to(device)
D_Y.to(device)
print('Models moved to GPU.')
else:
print('Only CPU available.')
return G_XtoY, G_YtoX, D_X, D_Y
# call the function to get models
G_XtoY, G_YtoX, D_X, D_Y = create_model()
Explanation: Create the complete network
Using the classes you defined earlier, you can define the discriminators and generators necessary to create a complete CycleGAN. The given parameters should work for training.
First, create two discriminators, one for checking if $X$ sample images are real, and one for checking if $Y$ sample images are real. Then the generators. Instantiate two of them, one for transforming a painting into a realistic photo and one for transforming a photo into a painting.
End of explanation
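For reference, a minimal sketch of how the four blanks inside create_model could be filled; it uses the example classes sketched above, while your own solution would instantiate your Discriminator and CycleGenerator.
# Sketch: one way to instantiate the four networks (mirrors create_model's signature).
def create_model_example(g_conv_dim=64, d_conv_dim=64, n_res_blocks=6):
    G_XtoY = CycleGeneratorExample(conv_dim=g_conv_dim, n_res_blocks=n_res_blocks)
    G_YtoX = CycleGeneratorExample(conv_dim=g_conv_dim, n_res_blocks=n_res_blocks)
    D_X = DiscriminatorExample(conv_dim=d_conv_dim)
    D_Y = DiscriminatorExample(conv_dim=d_conv_dim)
    return G_XtoY, G_YtoX, D_X, D_Y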
# helper function for printing the model architecture
def print_models(G_XtoY, G_YtoX, D_X, D_Y):
Prints model information for the generators and discriminators.
print(" G_XtoY ")
print("-----------------------------------------------")
print(G_XtoY)
print()
print(" G_YtoX ")
print("-----------------------------------------------")
print(G_YtoX)
print()
print(" D_X ")
print("-----------------------------------------------")
print(D_X)
print()
print(" D_Y ")
print("-----------------------------------------------")
print(D_Y)
print()
# print all of the models
print_models(G_XtoY, G_YtoX, D_X, D_Y)
Explanation: Check that you've implemented this correctly
The function create_model should return the two generator and two discriminator networks. After you've defined these discriminator and generator components, it's good practice to check your work. The easiest way to do this is to print out your model architecture and read through it to make sure the parameters are what you expected. The next cell will print out their architectures.
End of explanation
def real_mse_loss(D_out):
# how close is the produced output from being "real"?
def fake_mse_loss(D_out):
# how close is the produced output from being "false"?
def cycle_consistency_loss(real_im, reconstructed_im, lambda_weight):
# calculate reconstruction loss
# return weighted loss
Explanation: Discriminator and Generator Losses
Computing the discriminator and the generator losses are key to getting a CycleGAN to train.
<img src='notebook_images/CycleGAN_loss.png' width=90% height=90% />
Image from original paper by Jun-Yan Zhu et. al.
The CycleGAN contains two mapping functions $G: X \rightarrow Y$ and $F: Y \rightarrow X$, and associated adversarial discriminators $D_Y$ and $D_X$. (a) $D_Y$ encourages $G$ to translate $X$ into outputs indistinguishable from domain $Y$, and vice versa for $D_X$ and $F$.
To further regularize the mappings, we introduce two cycle consistency losses that capture the intuition that if
we translate from one domain to the other and back again we should arrive at where we started. (b) Forward cycle-consistency loss and (c) backward cycle-consistency loss.
Least Squares GANs
We've seen that regular GANs treat the discriminator as a classifier with the sigmoid cross entropy loss function. However, this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we'll use a least squares loss function for the discriminator. This structure is also referred to as a least squares GAN or LSGAN, and you can read the original paper on LSGANs, here. The authors show that LSGANs are able to generate higher quality images than regular GANs and that this loss type is a bit more stable during training!
Discriminator Losses
The discriminator losses will be mean squared errors between the output of the discriminator, given an image, and the target value, 0 or 1, depending on whether it should classify that image as fake or real. For example, for a real image, x, we can train $D_X$ by looking at how close it is to recognizing an image x as real using the mean squared error:
out_x = D_X(x)
real_err = torch.mean((out_x-1)**2)
Generator Losses
Calculating the generator losses will look somewhat similar to calculating the discriminator loss; there will still be steps in which you generate fake images that look like they belong to the set of $X$ images but are based on real images in set $Y$, and vice versa. You'll compute the "real loss" on those generated images by looking at the output of the discriminator as it's applied to these fake images; this time, your generator aims to make the discriminator classify these fake images as real images.
Cycle Consistency Loss
In addition to the adversarial losses, the generator loss terms will also include the cycle consistency loss. This loss is a measure of how good a reconstructed image is, when compared to an original image.
Say you have a fake, generated image, x_hat, and a real image, y. You can get a reconstructed y_hat by applying G_XtoY(x_hat) = y_hat and then check to see if this reconstruction y_hat and the original image y match. For this, we recommend calculating the L1 loss, which is an absolute difference, between reconstructed and real images. You may also choose to multiply this loss by some weight value lambda_weight to convey its importance.
<img src='notebook_images/reconstruction_error.png' width=40% height=40% />
The total generator loss will be the sum of the generator losses and the forward and backward cycle consistency losses.
Define Loss Functions
To help us calculate the discriminator and generator losses during training, let's define some helpful loss functions. Here, we'll define three.
1. real_mse_loss that looks at the output of a discriminator and returns the error based on how close that output is to being classified as real. This should be a mean squared error.
2. fake_mse_loss that looks at the output of a discriminator and returns the error based on how close that output is to being classified as fake. This should be a mean squared error.
3. cycle_consistency_loss that looks at a set of real image and a set of reconstructed/generated images, and returns the mean absolute error between them. This has a lambda_weight parameter that will weight the mean absolute error in a batch.
It's recommended that you take a look at the original, CycleGAN paper to get a starting value for lambda_weight.
End of explanation
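One hedged way to fill in the three helpers above, using the least squares targets and the weighted L1 reconstruction error described in this section:
# Sketch of the three loss helpers (least squares GAN losses + weighted L1 cycle loss).
def real_mse_loss_example(D_out):
    # how close is the discriminator output to the "real" target of 1?
    return torch.mean((D_out - 1)**2)

def fake_mse_loss_example(D_out):
    # how close is the discriminator output to the "fake" target of 0?
    return torch.mean(D_out**2)

def cycle_consistency_loss_example(real_im, reconstructed_im, lambda_weight):
    # mean absolute error between real and reconstructed images, weighted
    return lambda_weight * torch.mean(torch.abs(real_im - reconstructed_im))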
import torch.optim as optim
# hyperparams for Adam optimizers
lr=
beta1=
beta2=
g_params = list(G_XtoY.parameters()) + list(G_YtoX.parameters()) # Get generator parameters
# Create optimizers for the generators and discriminators
g_optimizer = optim.Adam(g_params, lr, [beta1, beta2])
d_x_optimizer = optim.Adam(D_X.parameters(), lr, [beta1, beta2])
d_y_optimizer = optim.Adam(D_Y.parameters(), lr, [beta1, beta2])
Explanation: Define the Optimizers
Next, let's define how this model will update its weights. This, like the GANs you may have seen before, uses Adam optimizers for the discriminator and generator. It's again recommended that you take a look at the original, CycleGAN paper to get starting hyperparameter values.
End of explanation
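As a hedged starting point, values in the spirit of the CycleGAN and DCGAN papers (a reasonable default, not the only workable choice):
# Sketch: typical starting hyperparameters for the Adam optimizers.
lr = 0.0002
beta1 = 0.5
beta2 = 0.999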
# import save code
from helpers import save_samples, checkpoint
# train the network
def training_loop(dataloader_X, dataloader_Y, test_dataloader_X, test_dataloader_Y,
n_epochs=1000):
print_every=10
# keep track of losses over time
losses = []
test_iter_X = iter(test_dataloader_X)
test_iter_Y = iter(test_dataloader_Y)
# Get some fixed data from domains X and Y for sampling. These are images that are held
# constant throughout training, that allow us to inspect the model's performance.
fixed_X = test_iter_X.next()[0]
fixed_Y = test_iter_Y.next()[0]
fixed_X = scale(fixed_X) # make sure to scale to a range -1 to 1
fixed_Y = scale(fixed_Y)
# batches per epoch
iter_X = iter(dataloader_X)
iter_Y = iter(dataloader_Y)
batches_per_epoch = min(len(iter_X), len(iter_Y))
for epoch in range(1, n_epochs+1):
# Reset iterators for each epoch
if epoch % batches_per_epoch == 0:
iter_X = iter(dataloader_X)
iter_Y = iter(dataloader_Y)
images_X, _ = iter_X.next()
images_X = scale(images_X) # make sure to scale to a range -1 to 1
images_Y, _ = iter_Y.next()
images_Y = scale(images_Y)
# move images to GPU if available (otherwise stay on CPU)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
images_X = images_X.to(device)
images_Y = images_Y.to(device)
# ============================================
# TRAIN THE DISCRIMINATORS
# ============================================
## First: D_X, real and fake loss components ##
# 1. Compute the discriminator losses on real images
# 2. Generate fake images that look like domain X based on real images in domain Y
# 3. Compute the fake loss for D_X
# 4. Compute the total loss and perform backprop
d_x_loss =
## Second: D_Y, real and fake loss components ##
d_y_loss =
# =========================================
# TRAIN THE GENERATORS
# =========================================
## First: generate fake X images and reconstructed Y images ##
# 1. Generate fake images that look like domain X based on real images in domain Y
# 2. Compute the generator loss based on domain X
# 3. Create a reconstructed y
# 4. Compute the cycle consistency loss (the reconstruction loss)
## Second: generate fake Y images and reconstructed X images ##
# 5. Add up all generator and reconstructed losses and perform backprop
g_total_loss =
# Print the log info
if epoch % print_every == 0:
# append real and fake discriminator losses and the generator loss
losses.append((d_x_loss.item(), d_y_loss.item(), g_total_loss.item()))
print('Epoch [{:5d}/{:5d}] | d_X_loss: {:6.4f} | d_Y_loss: {:6.4f} | g_total_loss: {:6.4f}'.format(
epoch, n_epochs, d_x_loss.item(), d_y_loss.item(), g_total_loss.item()))
sample_every=100
# Save the generated samples
if epoch % sample_every == 0:
G_YtoX.eval() # set generators to eval mode for sample generation
G_XtoY.eval()
save_samples(epoch, fixed_Y, fixed_X, G_YtoX, G_XtoY, batch_size=16)
G_YtoX.train()
G_XtoY.train()
# uncomment these lines, if you want to save your model
# checkpoint_every=1000
# # Save the model parameters
# if epoch % checkpoint_every == 0:
# checkpoint(epoch, G_XtoY, G_YtoX, D_X, D_Y)
return losses
n_epochs = 1000 # keep this small when testing if a model first works, then increase it to >=1000
losses = training_loop(dataloader_X, dataloader_Y, test_dataloader_X, test_dataloader_Y, n_epochs=n_epochs)
Explanation: Training a CycleGAN
When a CycleGAN trains, and sees one batch of real images from set $X$ and $Y$, it trains by performing the following steps:
Training the Discriminators
1. Compute the discriminator $D_X$ loss on real images
2. Generate fake images that look like domain $X$ based on real images in domain $Y$
3. Compute the fake loss for $D_X$
4. Compute the total loss and perform backpropagation and $D_X$ optimization
5. Repeat steps 1-4 only with $D_Y$ and your domains switched!
Training the Generators
1. Generate fake images that look like domain $X$ based on real images in domain $Y$
2. Compute the generator loss based on how $D_X$ responds to fake $X$
3. Generate reconstructed $\hat{Y}$ images based on the fake $X$ images generated in step 1
4. Compute the cycle consistency loss by comparing the reconstructions with real $Y$ images
5. Repeat steps 1-4 only swapping domains
6. Add up all the generator and reconstruction losses and perform backpropagation + optimization
<img src='notebook_images/cycle_consistency_ex.png' width=70% />
Saving Your Progress
A CycleGAN repeats its training process, alternating between training the discriminators and the generators, for a specified number of training iterations. You've been given code that will save some example generated images that the CycleGAN has learned to generate after a certain number of training iterations. Along with looking at the losses, these example generations should give you an idea of how well your network has trained.
Below, you may choose to keep all default parameters; your only task is to calculate the appropriate losses and complete the training cycle.
End of explanation
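To make the placeholder losses in the loop above concrete, here is a hedged sketch of one batch's discriminator and generator updates. It assumes loss helpers and optimizers like the ones defined earlier, uses lambda_weight=10 as a common choice from the CycleGAN paper, and mirrors the variable names in the skeleton.
# --- Train D_X: real loss on real X, fake loss on generated X ---
d_x_optimizer.zero_grad()
d_x_real_loss = real_mse_loss(D_X(images_X))
fake_X = G_YtoX(images_Y)
d_x_fake_loss = fake_mse_loss(D_X(fake_X))
d_x_loss = d_x_real_loss + d_x_fake_loss
d_x_loss.backward()
d_x_optimizer.step()

# --- Train D_Y: same pattern with the domains switched ---
d_y_optimizer.zero_grad()
d_y_real_loss = real_mse_loss(D_Y(images_Y))
fake_Y = G_XtoY(images_X)
d_y_fake_loss = fake_mse_loss(D_Y(fake_Y))
d_y_loss = d_y_real_loss + d_y_fake_loss
d_y_loss.backward()
d_y_optimizer.step()

# --- Train both generators together ---
g_optimizer.zero_grad()
fake_X = G_YtoX(images_Y)                    # Y -> fake X
g_YtoX_loss = real_mse_loss(D_X(fake_X))     # try to fool D_X
reconstructed_Y = G_XtoY(fake_X)             # fake X -> reconstructed Y
cycle_Y_loss = cycle_consistency_loss(images_Y, reconstructed_Y, lambda_weight=10)

fake_Y = G_XtoY(images_X)                    # X -> fake Y
g_XtoY_loss = real_mse_loss(D_Y(fake_Y))     # try to fool D_Y
reconstructed_X = G_YtoX(fake_Y)             # fake Y -> reconstructed X
cycle_X_loss = cycle_consistency_loss(images_X, reconstructed_X, lambda_weight=10)

g_total_loss = g_YtoX_loss + g_XtoY_loss + cycle_Y_loss + cycle_X_loss
g_total_loss.backward()
g_optimizer.step()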
fig, ax = plt.subplots(figsize=(12,8))
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator, X', alpha=0.5)
plt.plot(losses.T[1], label='Discriminator, Y', alpha=0.5)
plt.plot(losses.T[2], label='Generators', alpha=0.5)
plt.title("Training Losses")
plt.legend()
Explanation: Tips on Training and Loss Patterns
A lot of experimentation goes into finding the best hyperparameters such that the generators and discriminators don't overpower each other. It's often a good starting point to look at existing papers to find what has worked in previous experiments, I'd recommend this DCGAN paper in addition to the original CycleGAN paper to see what worked for them. Then, you can try your own experiments based off of a good foundation.
Discriminator Losses
When you display the generator and discriminator losses you should see that there is always some discriminator loss; recall that we are trying to design a model that can generate good "fake" images. So, the ideal discriminator will not be able to tell the difference between real and fake images and, as such, will always have some loss. You should also see that $D_X$ and $D_Y$ are roughly at the same loss levels; if they are not, this indicates that your training is favoring one type of discriminator over the other, and you may need to look at biases in your models or data.
Generator Loss
The generator's loss should start significantly higher than the discriminator losses because it is accounting for the loss of both generators and weighted reconstruction errors. You should see this loss decrease a lot at the start of training because initial, generated images are often far-off from being good fakes. After some time it may level off; this is normal since the generator and discriminator are both improving as they train. If you see that the loss is jumping around a lot, over time, you may want to try decreasing your learning rates or changing your cycle consistency loss to be a little more/less weighted.
End of explanation
import matplotlib.image as mpimg
# helper visualization code
def view_samples(iteration, sample_dir='samples_cyclegan'):
# samples are named by iteration
path_XtoY = os.path.join(sample_dir, 'sample-{:06d}-X-Y.png'.format(iteration))
path_YtoX = os.path.join(sample_dir, 'sample-{:06d}-Y-X.png'.format(iteration))
# read in those samples
try:
x2y = mpimg.imread(path_XtoY)
y2x = mpimg.imread(path_YtoX)
except:
print('Invalid number of iterations.')
fig, (ax1, ax2) = plt.subplots(figsize=(18,20), nrows=2, ncols=1, sharey=True, sharex=True)
ax1.imshow(x2y)
ax1.set_title('X to Y')
ax2.imshow(y2x)
ax2.set_title('Y to X')
# view samples at iteration 100
view_samples(100, 'samples_cyclegan')
# view samples at iteration 1000
view_samples(1000, 'samples_cyclegan')
Explanation: Evaluate the Result!
As you trained this model, you may have chosen to sample and save the results of your generated images after a certain number of training iterations. This gives you a way to see whether or not your Generators are creating good fake images. For example, the image below depicts real images in the $Y$ set, and the corresponding generated images during different points in the training process. You can see that the generator starts out creating very noisy, fake images, but begins to converge to better representations as it trains (though, not perfect).
<img src='notebook_images/sample-004000-summer2winter.png' width=50% />
Below, you've been given a helper function for displaying generated samples based on the passed in training iteration.
End of explanation |
6,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fetch GitHub Issues and Compute Embeddings
This notebook downloads GitHub Issues and then computes the embeddings using a trained model
issues_loader.ipynb is a very similar notebook
That notebook, however, just uses the IssuesLoader class as a way of hard-coding some paths.
Running this Notebook
This notebook was last run on [gcr.io/kubeflow-images-public/tensorflow-1.15.2-notebook-gpu
Step2: Get a list of Kubeflow REPOs
You will need to either set a GitHub token or use a GitHub App in order to call the API
TODO(jlewi)
Step3: Get The Data
Step4: Load Model Artifacts (Download from GC if not on local)
We need to load the model used to compute embeddings
Step6: Warning
Step7: Pull request comments also get included so we need to filter those out
Step8: We need to group the events by issue and then select the most recent event for each issue as that should have
the most up to date labels for each issue
TODO(jlewi)
Step9: We need to parse the labels which are json and get the names
Step10: We need to deserialize the json strings to remove escaping
Step11: Compute Embeddings
For each repo compute the embeddings and save to GCS
TODO(jlewi)
Step12: Sanity Check the embeddings
We want to make sure the embeddings are computed the same way as during inference time
During inference IssueLabelerPredict.predict_labels_for_issue calls embeddings.get_issue_text to fetch the body and title
We call embeddings.get_issue_text on one of the issues to make sure it matches the data in the dataframe from which we compute the embeddings
This calls the /text endpoint on the embeddings microservice
TODO(https
Step13: Compare the embeddings computed in this notebook to the embeddings computed using inference_wrapper
Step14: Save the issues and embeddings to an HDF5 file
Step15: Save Embeddings to GCS | Python Code:
import logging
import os
from pathlib import Path
import sys
logging.basicConfig(format='%(message)s')
logging.getLogger().setLevel(logging.INFO)
home = str(Path.home())
# Installing the python packages locally doesn't appear to have them automatically
# added the path so we need to manually add the directory
local_py_path = os.path.join(home, ".local/lib/python3.6/site-packages")
for p in [local_py_path, os.path.abspath("../../py")]:
if p not in sys.path:
logging.info("Adding %s to python path", p)
# Insert at front because we want to override any installed packages
sys.path.insert(0, p)
!pip3 install --user --upgrade -r ../requirements.txt
from bs4 import BeautifulSoup
import requests
from fastai.core import parallel, partial
from collections import Counter
from tqdm import tqdm_notebook
import torch
from code_intelligence import embeddings
from code_intelligence import graphql
from code_intelligence import gcs_util
from google.cloud import storage
Explanation: Fetch GitHub Issues and Compute Embeddings
This notebook downloads GitHub Issues and then computes the embeddings using a trained model
issues_loader.ipynb is a very similar notebook
That notebook, however, just uses the IssuesLoader class as a way of hard-coding some paths.
Running this Notebook
This notebook was last run on [gcr.io/kubeflow-images-public/tensorflow-1.15.2-notebook-gpu:1.0.0]
Resource specs
CPU 15
RAM 32Gi
If the kernel dies while computing embeddings, it could be because you ran out of memory
Compute: This notebook was run on a p3.8xlarge on AWS
Tesla V100 GPU, 32 vCPUs 244GB of Memory
End of explanation
if not os.getenv("GITHUB_TOKEN"):
logging.warning(f"No GitHub token set defaulting to hardcode list of Kubeflow repositories")
# The list of repos can be updated using the else block
repo_names = ['arena', 'batch-predict', 'caffe2-operator', 'chainer-operator', 'code-intelligence', 'common', 'community', 'crd-validation', 'example-seldon', 'examples', 'fairing', 'features', 'frontend', 'homebrew-cask', 'homebrew-core', 'internal-acls', 'katib', 'kfctl', 'kfp-tekton', 'kfserving', 'kubebench', 'kubeflow', 'manifests', 'marketing-materials', 'metadata', 'mpi-operator', 'mxnet-operator', 'pipelines', 'pytorch-operator', 'reporting', 'testing', 'tf-operator', 'triage-issues', 'website', 'xgboost-operator']
else:
gh_client = graphql.GraphQLClient()
repo_query=query repoQuery($org: String!) {
organization(login: $org) {
repositories(first:100) {
totalCount
edges {
node {
name
}
}
}
}
}
variables = {
"org": "kubeflow",
}
results = gh_client.run_query(repo_query, variables)
repo_nodes = graphql.unpack_and_split_nodes(results, ["data", "organization", "repositories", "edges"])
repo_names = [n["name"] for n in repo_nodes]
",".join([f"'{n}'" for n in sorted(repo_names)])
names_str = ", ".join([f"'{n}'" for n in sorted(repo_names)])
print(f"[{names_str}]")
Explanation: Get a list of Kubeflow REPOs
You will need to either set a GitHub token or use a GitHub App in order to call the API
TODO(jlewi): This is no longer really necessary since we are using BigQuery now to fetch the data we can query by org
End of explanation
import pandas as pd
from inference import InferenceWrapper
Explanation: Get The Data
End of explanation
from pathlib import Path
from urllib import request as request_url
def pass_through(x):
return x
model_url = 'https://storage.googleapis.com/issue_label_bot/model/lang_model/models_22zkdqlr/trained_model_22zkdqlr.hdf'
inference_wrapper = embeddings.load_model_artifact(model_url)
Explanation: Load Model Artifacts (Download from GC if not on local)
We need to load the model used to compute embeddings
End of explanation
from pandas.io import gbq
import subprocess
# TODO(jlewi): Get the project using fairing?
PROJECT = subprocess.check_output(["gcloud", "config", "get-value", "project"]).strip().decode()
# TODO(jlewi): This code should now be a function in embeddings/github_bigquery.py
query = SELECT
JSON_EXTRACT(payload, '$.issue.html_url') as html_url,
JSON_EXTRACT(payload, '$.issue.title') as title,
JSON_EXTRACT(payload, '$.issue.body') as body,
JSON_EXTRACT(payload, "$.issue.labels") as labels,
JSON_EXTRACT(payload, "$.issue.updated_at") as updated_at,
org.login,
type,
FROM `githubarchive.month.20*`
WHERE (type="IssuesEvent" or type="IssueCommentEvent") and org.login = 'kubeflow'
issues_and_pulls=gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
Explanation: Warning: The below cell benefits tremendously from parallelism, the more cores your machine has the better
The code will fail if you aren't running with a GPU
Get the Data Using BigQuery
We can use BigQuery to fetch the data from the GitHub Archive
Here is a list of GitHub Event Types
We need to consider both IssuesEvent and IssueCommentEvent
At the time of this writing 2020/04/08 there are approximately 137K events in Kubeflow and it takes O(30) seconds to fetch all of them.
TODO
It looks like when we transfer a repo (or maybe an issue) we end up with duplicate entries with different URLs (the original and the new one). We should look into de-duplicating those
End of explanation
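On that de-duplication TODO: as a rough, untested sketch, one way to surface the transferred duplicates is to look for rows that share a title and updated_at but have different URLs. The column names come from the query above; the heuristic itself is just an assumption.
# Rough sketch (assumption: a transferred issue keeps its title and updated_at
# but gets a new html_url). This only lists candidates for manual inspection.
dupe_candidates = issues_and_pulls[
    issues_and_pulls.duplicated(subset=["title", "updated_at"], keep=False)]
dupe_candidates.sort_values(["title", "updated_at"])[["html_url", "title", "updated_at"]].head(10)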
import re
pattern = re.compile(".*issues/[\d]+")
issues_index = issues_and_pulls["html_url"].apply(lambda x: pattern.match(x) is not None)
issues=issues_and_pulls[issues_index]
Explanation: Pull request comments also get included so we need to filter those out
End of explanation
latest_issues = issues.groupby("html_url", as_index=False).apply(lambda x: x.sort_values(["updated_at"]).iloc[-1])
# Example of fetching a specific issue
# This allows easy spot checking of the data
some_issue = "https://github.com/kubeflow/kubeflow/issues/4916"
test_issue = latest_issues.loc[latest_issues["html_url"]==f'"{some_issue}"']
test_issue
Explanation: We need to group the events by issue and then select the most recent event for each issue as that should have
the most up to date labels for each issue
TODO(jlewi): We should look for the most recent event in the dataset and then have some alert if the age exceeds some
limit as that indicates the data isn't up to date.
End of explanation
import json
def get_labels(x):
d = json.loads(x)
return [i["name"] for i in d]
latest_issues["parsed_labels"] = latest_issues["labels"].apply(get_labels)
Explanation: We need to parse the labels which are json and get the names
End of explanation
for f in ["html_url", "title", "body"]:
latest_issues[f] = latest_issues[f].apply(lambda x : json.loads(x))
Explanation: We need to deserialize the json strings to remove escaping
End of explanation
input_data = latest_issues[["title", "body"]]
issue_embeddings = inference_wrapper.df_to_embedding(input_data)
issue_embeddings.shape
Explanation: Compute Embeddings
For each repo compute the embeddings and save to GCS
TODO(jlewi): Can we use the metadata storage to keep track of artifacts?
End of explanation
from code_intelligence import util as code_intelligence_util
issue_index = 1020
logging.info(f"Fetching issue {latest_issues.iloc[issue_index]['html_url']}")
issue_owner, issue_repo, issue_num = code_intelligence_util.parse_issue_url(latest_issues.iloc[issue_index]["html_url"].strip("\""))
some_issue_data = embeddings.get_issue(latest_issues.iloc[issue_index]["html_url"], gh_client)
some_issue_data
print(latest_issues.iloc[issue_index]["title"])
print(some_issue_data["title"])
print(latest_issues.iloc[issue_index]["body"])
print(some_issue_data["body"])
some_issue_data["title"] == latest_issues.iloc[issue_index]["title"]
some_issue_data["body"] == latest_issues.iloc[issue_index]["body"]
Explanation: Sanity Check the embeddings
We want to make sure the embeddings are computed the same way as during inference time
During inference IssueLabelerPredict.predict_labels_for_issue calls embeddings.get_issue_text to fetch the body and title
We call embeddings.get_issue_text on one of the issues to make sure it matches the data in the dataframe from which we compute the embeddings
This calls the /text endpoint on the embeddings microservice
TODO(https://github.com/kubeflow/code-intelligence/issues/126) The label bot microservice needs to be updated to actually
use the GraphQL API to match this code. Hopefully, in the interim the model is robust to slight deviations caused
by the differences in whitespace
End of explanation
dict_for_embeddings = inference_wrapper.process_dict(some_issue_data)
inference_wrapper.get_pooled_features(dict_for_embeddings['text']).detach().cpu().numpy()
issue_embeddings[issue_index,:]
Explanation: Compare the embeddings computed in this notebook to the embeddings computed using inference_wrapper
End of explanation
import h5py
import datetime
now = code_intelligence_util.now().isoformat()
git_tag = subprocess.check_output(["git", "describe", "--tags", "--always", "--dirty"]).decode().strip()
file_name = f"kubeflow_issue_embeddings_{now}.hdf5"
local_file = os.path.join(home, file_name)
latest_issues.to_hdf(local_file, "issues", mode="a")
h5_file = h5py.File(local_file, mode="a")
h5_file.create_dataset("issue_embeddings", data=issue_embeddings)
# store some metadata
h5_file.attrs["file"] = "Get-GitHub-Issues.ipynb"
h5_file.attrs["git-tag"] = git_tag
h5_file.close()
Explanation: Save the issues and embeddings to an HDF5 file
End of explanation
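The next cell refers to embeddings_dir, which does not appear in the cells shown here and is presumably defined elsewhere in the original notebook; a hypothetical placeholder like the one below keeps the copy step runnable (the bucket name is made up).
# Hypothetical placeholder -- replace with your own GCS bucket/prefix.
embeddings_dir = "gs://your-bucket/issue_embeddings"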
embeddings_file = os.path.join(embeddings_dir, file_name)
if gcs_util.check_gcs_object(embeddings_file):
logging.info(f"File {embeddings_file} exists")
else:
logging.info(f"Copying {local_file} to {embeddings_file}")
gcs_util.copy_to_gcs(local_file, embeddings_file)
embeddings_file
Explanation: Save Embeddings to GCS
End of explanation |
6,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAP Zonal Queries (or Spaxel Queries)
Marvin allows you to perform queries on individual spaxels within and across the MaNGA dataset.
Step1: Let's grab all spaxels with an Ha-flux > 25 from MPL-5.
Step2: Spaxel queries are queries on individual spaxels, and thus will always return a spaxel x and y satisfying your input condition. There is the potential of returning a large number of results that span only a few actual galaxies. Let's see how many..
Step3: Optimize your query
Unless specified, spaxel queries will query across all bintypes and stellar templates. If you only want to search over a certain binning mode, this must be specified. If your query is taking too long, or returning too many results, consider filtering on a specific bintype and template.
Step4: Global+Local Queries
To combine global and local searches, simply combine them together in one filter condition. Let's look for all spaxels that have an H-alpha EW > 3 in galaxies with NSA redshift < 0.1 and a log sersic_mass > 9.5
Step5: Query Functions
Marvin also contains more advanced queries in the form of predefined functions.
For example, let's say you want to ask Marvin
"Give me all galaxies that have an H-alpha flux > 25 in more than 20% of their good spaxels"
you can do so using the query function npergood. npergood accepts as input a standard filter expression condition. E.g., the syntax for the above query would be input as
npergood(emline_gflux_ha_6564 > 25) >= 20
The syntax is
FUNCTION(Conditional Expression) Operator Value
Let's try it... | Python Code:
from marvin import config
from marvin.tools.query import Query
config.mode='remote'
Explanation: DAP Zonal Queries (or Spaxel Queries)
Marvin allows you to perform queries on individual spaxels within and across the MaNGA dataset.
End of explanation
config.setRelease('MPL-5')
f = 'emline_gflux_ha_6564 > 25'
q = Query(searchfilter=f)
print(q)
# let's run the query
r = q.run()
r.totalcount
r.results
Explanation: Let's grab all spaxels with an Ha-flux > 25 from MPL-5.
End of explanation
# get a list of the plate-ifus
plateifu = r.getListOf('plateifu')
# look at the unique values with Python set
print('unique galaxies', set(plateifu), len(set(plateifu)))
Explanation: Spaxel queries are queries on individual spaxels, and thus will always return a spaxel x and y satisfying your input condition. There is the potential of returning a large number of results that span only a few actual galaxies. Let's see how many..
End of explanation
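If you also want to see how those matching spaxels are distributed per galaxy, here is a quick sketch using Python's Counter on the plateifu list fetched above:
from collections import Counter
# count how many matching spaxels each galaxy contributes
spaxel_counts = Counter(plateifu)
spaxel_counts.most_common(5)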
f = 'emline_gflux_ha_6564 > 25 and bintype.name == SPX'
q = Query(searchfilter=f, returnparams=['template.name'])
print(q)
# run it
r = q.run()
r.results
Explanation: Optimize your query
Unless specified, spaxel queries will query across all bintypes and stellar templates. If you only want to search over a certain binning mode, this must be specified. If your query is taking too long, or returning too many results, consider filtering on a specific bintype and template.
End of explanation
f = 'nsa.sersic_logmass > 9.5 and nsa.z < 0.1 and emline_sew_ha_6564 > 3'
q = Query(searchfilter=f)
print(q)
r = q.run()
# Let's see how many spaxels we returned from how many galaxies
plateifu = r.getListOf('plateifu')
print('spaxels returned', r.totalcount)
print('from galaxies', len(set(plateifu)))
r.results[0:5]
Explanation: Global+Local Queries
To combine global and local searches, simply combine them together in one filter condition. Let's look for all spaxels that have an H-alpha EW > 3 in galaxies with NSA redshift < 0.1 and a log sersic_mass > 9.5
End of explanation
config.mode='remote'
config.setRelease('MPL-4')
f = 'npergood(emline_gflux_ha_6564 > 5) >= 20'
q = Query(searchfilter=f)
r = q.run()
r.results
Explanation: Query Functions
Marvin also contains more advanced queries in the form of predefined functions.
For example, let's say you want to ask Marvin
"Give me all galaxies that have an H-alpha flux > 25 in more than 20% of their good spaxels"
you can do so using the query function npergood. npergood accepts as input a standard filter expression condition. E.g., the syntax for the above query would be input as
npergood(emline_gflux_ha_6564 > 25) >= 20
The syntax is
FUNCTION(Conditional Expression) Operator Value
Let's try it...
End of explanation |
6,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 1 Dogbreeds CodeAlong
Step1: 2. Initial Exploration
Step2: 3. Initial Model
Start with small images and large batch sizes to train the model very fast in the beginning; increase the image size and decrease the batch size as you go along.
Step3: 3.1 Precompute
Step4: 3.2 Augment
Step5: 3.3 Increase Size
If you train something on a smaller size, you can call learn.set_data() and pass in a larger sized dataset. That'll take your model, however it's trained so far, and continue to train on larger images.
This is another way to get SotA results. Starting training on small images for a few epochs, then switching to larger images and continuing training, is an amazingly effective way to avoid overfitting.
J.Howard (paraphrased)
NOTE
Step6: 6. Individual Prediction | Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.torch_imports import *
from fastai.transforms import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
from fastai.conv_learner import *
PATH = "data/dogbreeds/"
sz = 224
arch = resnext101_64
bs = 64
label_csv = f'{PATH}labels.csv'
n = len(list(open(label_csv)))-1
val_idxs = get_cv_idxs(n)
val_idxs, n, len(val_idxs)
Explanation: Lesson 1 Dogbreeds CodeAlong
End of explanation
!ls {PATH}
label_df = pd.read_csv(label_csv)
label_df.head()
# use Pandas to create pivot table which shows how many of each label:
label_df.pivot_table(index='breed', aggfunc=len).sort_values('id', ascending=False)
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_csv(PATH, folder='train', csv_fname=f'{PATH}labels.csv',
test_name='test', val_idxs=val_idxs, suffix='.jpg',
tfms=tfms, bs=bs)
fn = PATH + data.trn_ds.fnames[0]; fn
img = PIL.Image.open(fn); img
img.size
size_d = {k: PIL.Image.open(PATH + k).size for k in data.trn_ds.fnames}
row_sz, col_sz = list(zip(*size_d.values()))
row_sz = np.array(row_sz); col_sz = np.array(col_sz)
row_sz[:5]
plt.hist(row_sz);
plt.hist(row_sz[row_sz < 1000])
plt.hist(col_sz);
plt.hist(col_sz[col_sz < 1000])
len(data.trn_ds), len(data.test_ds)
len(data.classes), data.classes[:5]
Explanation: 2. Initial Exploration
End of explanation
def get_data(sz, bs):
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv', test_name='test',
num_workers=4, val_idxs=val_idxs, suffix='.jpg',
tfms=tfms, bs=bs)
return data if sz > 300 else data.resize(340, 'tmp')
Explanation: 3. Initial Model
Start with small images and large batch sizes to train the model very fast in the beginning; increase the image size and decrease the batch size as you go along.
End of explanation
data = get_data(sz, bs)
learn = ConvLearner.pretrained(arch, data, precompute=True) # GTX870M;bs=64;sz=224;MEM:2431/3017
learn.fit(1e-2, 5)
Explanation: 3.1 Precompute
End of explanation
from sklearn import metrics
# data = get_data(sz, bs)
learn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.5)
learn.fit(1e-2, 2)
lrf = learn.lr_find()
learn.sched.plot()
# turn precompute off then use dataug
learn.precompute = False
learn.fit(1e-2, 5, cycle_len=1)
learn.save('224_pre')
learn.load('224_pre')
Explanation: 3.2 Augment
End of explanation
learn.set_data(get_data(299, bs=32))
learn.freeze() # make sure all but the last layer group are frozen
learn.fit(1e-2, 3, cycle_len=1) # precompute is off so DataAugmentation is back on
learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2)
log_preds, y = learn.TTA()
probs = np.exp(log_preds)
accuracy(log_preds, y), metrics.log_loss(y, probs)
learn.save('299_pre')
# learn.load('299_pre')
learn.fit(1e-2, 1, cycle_len=2)
learn.save('299_pre')
log_preds, y = learn.TTA()
probs = np.exp(log_preds)
accuracy(log_preds, y), metrics.log_loss(y, probs)
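# The submission cell below expects a DataFrame `df` that is never built in the
# lines shown here; this is a hedged sketch only, following the same TTA pattern
# used above (the id column assumes test filenames like 'test/<id>.jpg').
test_log_preds, _ = learn.TTA(is_test=True)
test_probs = np.exp(test_log_preds)
df = pd.DataFrame(test_probs, columns=data.classes)
df.insert(0, 'id', [os.path.basename(f)[:-4] for f in data.test_ds.fnames])
df.head()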
SUBM = f'{PATH}subm/'
os.makedirs(SUBM, exist_ok=True)
df.to_csv(f'{SUBM}subm.gz', compression='gzip', index=False)
FileLink(f'{SUBM}subm.gz')
Explanation: 3.3 Increase Size
If you train smth on a smaller size, you can call learn.set_data() and pass in a larger sized dataset. That'll take your model, however it's trained so far, and continue to train on larger images.
This is another way to get SotA results. Starting training on small images for a few epochs, then switching to larger images and continuing training is an amazing effective way to avoid overfitting.
J.Howard (paraphrased)
NOTE: Fully-Convolutional Architectures only.
End of explanation
fn = data.val_ds.fnames[0]
fn
Image.open(PATH+fn).resize((150,150))
trn_tfms, val_tfms = tfms_from_model(arch, sz)
learn = ConvLearner.pretrained(arch, data)
learn.load('299_pre')
# ds = FilesIndexArrayDataset([fn], np.array([0]), val_tfms, PATH)
# dl = DataLoader(ds)
# preds = learn.predict_dl(dl)
# np.argmax(preds)
im = trn_tfms(Image.open(PATH+fn))
preds = to_np(learn.model(V(T(im[None]).cuda())))
np.argmax(preds)
trn_tfms, val_tfms = tfms_from_model(arch, sz)
im = val_tfms(Image.open(PATH+fn)) # or could apply trn_tfms(.)
preds = learn.predict_array(im[None]) # index into image as[None] to create minibatch of 1 img
np.argmax(preds)
Explanation: 6. Individual Prediction
End of explanation |
6,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Alien Blaster problem
This notebook presents solutions to exercises in Think Bayes.
Copyright 2016 Allen B. Downey
MIT License
Step1: Part One
In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, $x$.
Based on previous tests, the distribution of $x$ in the population of designs is well-modeled by a beta distribution with parameters $\alpha=2$ and $\beta=3$. What is the average missile's probability of shooting down an alien?
Step2: In its first test, the new Alien Blaster 9000 takes 10 shots and hits 2 targets. Taking into account this data, what is the posterior distribution of $x$ for this missile? What is the value in the posterior with the highest probability, also known as the MAP?
Step4: Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were, but they report
Step5: If we start with a uniform prior, we can see what the likelihood function looks like
Step6: A tie is most likely if they are both terrible shots or both very good.
Is this data good or bad; that is, does it increase or decrease your estimate of $x$ for the Alien Blaster 10K?
Now let's run it with the specified prior and see what happens when we multiply the convex prior and the concave posterior
Step7: The posterior mean and MAP are lower than in the prior.
Step8: So if we learn that the new design is "consistent", it is more likely to be consistently bad (in this case).
Part Two
Suppose we
have a stockpile of 3 Alien
Blaster 10Ks. After extensive testing, we have concluded that
the AB9000 hits the target 30% of the time, precisely, and the
AB10K hits the target 40% of the time.
If I grab a random weapon from the stockpile and shoot at 10 targets,
what is the probability of hitting exactly 3? Again, you can write a
number, mathematical expression, or Python code.
Step9: The answer is a value drawn from the mixture of the two distributions.
Continuing the previous problem, let's estimate the distribution
of k, the number of successful shots out of 10.
Write a few lines of Python code to simulate choosing a random weapon and firing it.
Write a loop that simulates the scenario and generates random values of k 1000 times.
Store the values of k you generate and plot their distribution.
Step10: Here's what the distribution looks like.
Step11: The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs
Step12: Then for each x we generate a k
Step13: And the results look similar.
Step14: One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects
Step15: Here's how we can draw samples from the meta-Pmf
Step16: And here are the results, one more time
Step17: This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x.
We can compute the mixture analtically using thinkbayes2.MakeMixture | Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from thinkbayes2 import Hist, Pmf, Cdf, Suite, Beta
import thinkplot
Explanation: The Alien Blaster problem
This notebook presents solutions to exercises in Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
prior = Beta(2, 3)
thinkplot.Pdf(prior.MakePmf())
prior.Mean()
Explanation: Part One
In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, $x$.
Based on previous tests, the distribution of $x$ in the population of designs is well-modeled by a beta distribution with parameters $\alpha=2$ and $\beta=3$. What is the average missile's probability of shooting down an alien?
End of explanation
posterior = Beta(2, 3)
posterior.Update((2, 8))
posterior.MAP()
Explanation: In its first test, the new Alien Blaster 9000 takes 10 shots and hits 2 targets. Taking into account this data, what is the posterior distribution of $x$ for this missile? What is the value in the posterior with the highest probability, also known as the MAP?
End of explanation
from scipy import stats
class AlienBlaster(Suite):
def Likelihood(self, data, hypo):
Computes the likelihood of data under hypo.
data: number of shots they took
hypo: probability of a hit, p
n = data
x = hypo
# specific version for n=2 shots
likes = [x**4, (1-x)**4, (2*x*(1-x))**2]
# general version for any n shots
likes = [stats.binom.pmf(k, n, x)**2 for k in range(n+1)]
return np.sum(likes)
Explanation: Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."
Write a class called AlienBlaster that inherits from Suite and provides a likelihood function that takes this data -- two shots and a tie -- and computes the likelihood of the data for each hypothetical value of $x$. If you would like a challenge, write a version that works for any number of shots.
End of explanation
pmf = Beta(1, 1).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
Explanation: If we start with a uniform prior, we can see what the likelihood function looks like:
End of explanation
pmf = Beta(2, 3).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
Explanation: A tie is most likely if they are both terrible shots or both very good.
Is this data good or bad; that is, does it increase or decrease your estimate of $x$ for the Alien Blaster 10K?
Now let's run it with the specified prior and see what happens when we multiply the convex prior and the concave posterior:
End of explanation
prior.Mean(), blaster.Mean()
prior.MAP(), blaster.MAP()
Explanation: The posterior mean and MAP are lower than in the prior.
End of explanation
k = 3
n = 10
x1 = 0.3
x2 = 0.4
0.3 * stats.binom.pmf(k, n, x1) + 0.7 * stats.binom.pmf(k, n, x2)
Explanation: So if we learn that the new design is "consistent", it is more likely to be consistently bad (in this case).
Part Two
Suppose we
have a stockpile of 3 Alien Blaster 9000s and 7 Alien
Blaster 10Ks. After extensive testing, we have concluded that
the AB9000 hits the target 30% of the time, precisely, and the
AB10K hits the target 40% of the time.
If I grab a random weapon from the stockpile and shoot at 10 targets,
what is the probability of hitting exactly 3? Again, you can write a
number, mathematical expression, or Python code.
End of explanation
def flip(p):
return np.random.random() < p
def simulate_shots(n, p):
return np.random.binomial(n, p)
ks = []
for i in range(1000):
if flip(0.3):
k = simulate_shots(n, x1)
else:
k = simulate_shots(n, x2)
ks.append(k)
Explanation: The answer is a value drawn from the mixture of the two distributions.
Continuing the previous problem, let's estimate the distribution
of k, the number of successful shots out of 10.
Write a few lines of Python code to simulate choosing a random weapon and firing it.
Write a loop that simulates the scenario and generates random values of k 1000 times.
Store the values of k you generate and plot their distribution.
End of explanation
pmf = Pmf(ks)
thinkplot.Hist(pmf)
len(ks), np.mean(ks)
Explanation: Here's what the distribution looks like.
End of explanation
xs = np.random.choice(a=[x1, x2], p=[0.3, 0.7], size=1000)
Hist(xs)
Explanation: The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs:
End of explanation
ks = np.random.binomial(n, xs)
Explanation: Then for each x we generate a k:
End of explanation
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
Explanation: And the results look similar.
End of explanation
from thinkbayes2 import MakeBinomialPmf
pmf1 = MakeBinomialPmf(n, x1)
pmf2 = MakeBinomialPmf(n, x2)
metapmf = Pmf({pmf1:0.3, pmf2:0.7})
metapmf.Print()
Explanation: One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects:
End of explanation
ks = [metapmf.Random().Random() for _ in range(1000)]
Explanation: Here's how we can draw samples from the meta-Pmf:
End of explanation
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
Explanation: And here are the results, one more time:
End of explanation
from thinkbayes2 import MakeMixture
mix = MakeMixture(metapmf)
thinkplot.Hist(mix)
mix.Mean()
Explanation: This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x.
We can compute the mixture analytically using thinkbayes2.MakeMixture:
def MakeMixture(metapmf, label='mix'):
    """Make a mixture distribution.

    Args:
      metapmf: Pmf that maps from Pmfs to probs.
      label: string label for the new Pmf.

    Returns: Pmf object.
    """
    mix = Pmf(label=label)
    for pmf, p1 in metapmf.Items():
        for k, p2 in pmf.Items():
            mix[k] += p1 * p2
    return mix
The outer loop iterates through the Pmfs; the inner loop iterates through the items.
So p1 is the probability of choosing a particular Pmf; p2 is the probability of choosing a value from the Pmf.
In the example, each Pmf is associated with a value of x (probability of hitting a target). The inner loop enumerates the values of k (number of targets hit after 10 shots).
End of explanation |
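As a quick sanity check (a sketch reusing n, x1, x2 and mix from the cells above), the mixture mean should match the weighted analytic mean of the two binomials.
analytic_mean = 0.3 * n * x1 + 0.7 * n * x2  # 0.3*10*0.3 + 0.7*10*0.4 = 3.7
print(analytic_mean, mix.Mean())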
6,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Evoked data
In this tutorial we focus on plotting functions of
Step1: First we read the evoked object from a file. Check out
tut_epoching_and_averaging to get to this stage from raw data.
Step2: Notice that evoked is a list of
Step3: Let's start with a simple one. We plot event related potentials / fields
(ERP/ERF). The bad channels are not plotted by default. Here we explicitly
set the exclude parameter to show the bad channels in red. All plotting
functions of MNE-python return a handle to the figure instance. When we have
the handle, we can customise the plots to our liking.
Step4: All plotting functions of MNE-python return a handle to the figure instance.
When we have the handle, we can customise the plots to our liking. For
example, we can get rid of the empty space with a simple function call.
Step5: Now let's make it a bit fancier and only use MEG channels. Many of the
MNE-functions include a picks parameter to include a selection of
channels. picks is simply a list of channel indices that you can easily
construct with
Step6: Notice the legend on the left. The colors would suggest that there may be two
separate sources for the signals. This wasn't obvious from the first figure.
Try painting the slopes with left mouse button. It should open a new window
with topomaps (scalp plots) of the average over the painted area. There is
also a function for drawing topomaps separately.
Step7: By default the topomaps are drawn from evenly spread out points of time over
the evoked data. We can also define the times ourselves.
Step8: Or we can automatically select the peaks.
Step9: You can take a look at the documentation of
Step10: Notice that we created five axes, but had only four categories. The fifth
axes was used for drawing the colorbar. You must provide room for it when you
create this kind of custom plot or turn the colorbar off with
colorbar=False. That's what the warnings are trying to tell you. Also, we
used show=False for the first three function calls. This prevents the
figures from being shown prematurely. The behavior depends on the mode you are
using for your python session. See http
Step11: Sometimes, you may want to compare two or more conditions at a selection of
sensors, or e.g. for the Global Field Power. For this, you can use the
function
Step12: We can also plot the activations as images. The time runs along the x-axis
and the channels along the y-axis. The amplitudes are color coded so that
the amplitudes from negative to positive translates to shift from blue to
red. White means zero amplitude. You can use the cmap parameter to define
the color map yourself. The accepted values include all matplotlib colormaps.
Step13: Finally we plot the sensor data as a topographical view. In the simple case
we plot only left auditory responses, and then we plot them all in the same
figure for comparison. Click on the individual plots to open them bigger.
Step14: Visualizing field lines in 3D
We now compute the field maps to project MEG and EEG data to MEG helmet
and scalp surface.
To do this we'll need coregistration information. See
tut_forward for more details.
Here we just illustrate usage. | Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
# sphinx_gallery_thumbnail_number = 9
Explanation: Visualize Evoked data
In this tutorial we focus on plotting functions of :class:mne.Evoked.
End of explanation
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
Explanation: First we read the evoked object from a file. Check out
tut_epoching_and_averaging to get to this stage from raw data.
End of explanation
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
Explanation: Notice that evoked is a list of :class:evoked <mne.Evoked> instances.
You can read only one of the categories by passing the argument condition
to :func:mne.read_evokeds. To make things more simple for this tutorial, we
read each instance to a variable.
End of explanation
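For example, a single category can be read directly instead of indexing the list; a sketch using one of the condition names stored in this file:
evoked_left_auditory = mne.read_evokeds(fname, condition='Left Auditory',
                                        baseline=(None, 0), proj=True)
print(evoked_left_auditory)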
fig = evoked_l_aud.plot(exclude=())
Explanation: Let's start with a simple one. We plot event related potentials / fields
(ERP/ERF). The bad channels are not plotted by default. Here we explicitly
set the exclude parameter to show the bad channels in red. All plotting
functions of MNE-python return a handle to the figure instance. When we have
the handle, we can customise the plots to our liking.
End of explanation
fig.tight_layout()
Explanation: All plotting functions of MNE-python return a handle to the figure instance.
When we have the handle, we can customise the plots to our liking. For
example, we can get rid of the empty space with a simple function call.
End of explanation
picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks)
Explanation: Now let's make it a bit fancier and only use MEG channels. Many of the
MNE-functions include a picks parameter to include a selection of
channels. picks is simply a list of channel indices that you can easily
construct with :func:mne.pick_types. See also :func:mne.pick_channels and
:func:mne.pick_channels_regexp.
Using spatial_colors=True, the individual channel lines are color coded
to show the sensor positions - specifically, the x, y, and z locations of
the sensors are transformed into R, G and B values.
End of explanation
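The name-based helpers mentioned above can be sketched like this (the channel name is taken from this dataset; the regular expression is only an illustrative pattern):
picks_by_name = mne.pick_channels(evoked_l_aud.ch_names, include=['MEG 1811'])
picks_by_regexp = mne.pick_channels_regexp(evoked_l_aud.ch_names, 'MEG 18..')
print(picks_by_name, picks_by_regexp)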
evoked_l_aud.plot_topomap()
Explanation: Notice the legend on the left. The colors would suggest that there may be two
separate sources for the signals. This wasn't obvious from the first figure.
Try painting the slopes with left mouse button. It should open a new window
with topomaps (scalp plots) of the average over the painted area. There is
also a function for drawing topomaps separately.
End of explanation
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag')
Explanation: By default the topomaps are drawn from evenly spread out points of time over
the evoked data. We can also define the times ourselves.
End of explanation
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag')
Explanation: Or we can automatically select the peaks.
End of explanation
fig, ax = plt.subplots(1, 5, figsize=(8, 2))
kwargs = dict(times=0.1, show=False, vmin=-300, vmax=300)
evoked_l_aud.plot_topomap(axes=ax[0], colorbar=True, **kwargs)
evoked_r_aud.plot_topomap(axes=ax[1], colorbar=False, **kwargs)
evoked_l_vis.plot_topomap(axes=ax[2], colorbar=False, **kwargs)
evoked_r_vis.plot_topomap(axes=ax[3], colorbar=False, **kwargs)
for ax, title in zip(ax[:4], ['Aud/L', 'Aud/R', 'Vis/L', 'Vis/R']):
ax.set_title(title)
plt.show()
Explanation: You can take a look at the documentation of :func:mne.Evoked.plot_topomap
or simply write evoked_r_aud.plot_topomap? in your python console to
see the different parameters you can pass to this function. Most of the
plotting functions also accept axes parameter. With that, you can
customise your plots even further. First we create a set of matplotlib
axes in a single figure and plot all of our evoked categories next to each
other.
End of explanation
ts_args = dict(gfp=True)
topomap_args = dict(sensors=False)
evoked_r_aud.plot_joint(title='right auditory', times=[.09, .20],
ts_args=ts_args, topomap_args=topomap_args)
Explanation: Notice that we created five axes, but had only four categories. The fifth
axes was used for drawing the colorbar. You must provide room for it when you
create this kind of custom plot or turn the colorbar off with
colorbar=False. That's what the warnings are trying to tell you. Also, we
used show=False for the first three function calls. This prevents the
figures from being shown prematurely. The behavior depends on the mode you are
using for your python session. See http://matplotlib.org/users/shell.html for
more information.
We can combine the two kinds of plots in one figure using the
:func:mne.Evoked.plot_joint method of Evoked objects. Called as-is
(evoked.plot_joint()), this function should give an informative display
of spatio-temporal dynamics.
You can directly style the time series part and the topomap part of the plot
using the topomap_args and ts_args parameters. You can pass key-value
pairs as a python dictionary. These are then passed as parameters to the
topomaps (:func:mne.Evoked.plot_topomap) and time series
(:func:mne.Evoked.plot) of the joint plot.
For an example of specific styling using these topomap_args and
ts_args arguments, here, topomaps at specific time points
(90 and 200 ms) are shown, sensors are not plotted (via an argument
forwarded to plot_topomap), and the Global Field Power is shown:
End of explanation
conditions = ["Left Auditory", "Right Auditory", "Left visual", "Right visual"]
evoked_dict = dict()
for condition in conditions:
evoked_dict[condition.replace(" ", "/")] = mne.read_evokeds(
fname, baseline=(None, 0), proj=True, condition=condition)
print(evoked_dict)
colors = dict(Left="Crimson", Right="CornFlowerBlue")
linestyles = dict(Auditory='-', visual='--')
pick = evoked_dict["Left/Auditory"].ch_names.index('MEG 1811')
mne.viz.plot_compare_evokeds(evoked_dict, picks=pick, colors=colors,
linestyles=linestyles)
Explanation: Sometimes, you may want to compare two or more conditions at a selection of
sensors, or e.g. for the Global Field Power. For this, you can use the
function :func:mne.viz.plot_compare_evokeds. The easiest way is to create
a Python dictionary, where the keys are condition names and the values are
:class:mne.Evoked objects. If you provide lists of :class:mne.Evoked
objects, such as those for multiple subjects, the grand average is plotted,
along with a confidence interval band - this can be used to contrast
conditions for a whole experiment.
First, we load in the evoked objects into a dictionary, setting the keys to
'/'-separated tags (as we can do with event_ids for epochs). Then, we plot
with :func:mne.viz.plot_compare_evokeds.
The plot is styled with dictionary arguments, again using "/"-separated tags.
We plot a MEG channel with a strong auditory response.
End of explanation
evoked_r_aud.plot_image(picks=picks)
Explanation: We can also plot the activations as images. The time runs along the x-axis
and the channels along the y-axis. The amplitudes are color coded so that
the amplitudes from negative to positive translates to shift from blue to
red. White means zero amplitude. You can use the cmap parameter to define
the color map yourself. The accepted values include all matplotlib colormaps.
End of explanation
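For instance, any matplotlib colormap name can be passed; a sketch:
evoked_r_aud.plot_image(picks=picks, cmap='viridis')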
title = 'MNE sample data\n(condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment,
background_color='k', color=['white'])
mne.viz.plot_evoked_topo(evoked, title=title % 'Left/Right Auditory/Visual',
background_color='w')
Explanation: Finally we plot the sensor data as a topographical view. In the simple case
we plot only left auditory responses, and then we plot them all in the same
figure for comparison. Click on the individual plots to open them bigger.
End of explanation
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
Explanation: Visualizing field lines in 3D
We now compute the field maps to project MEG and EEG data to MEG helmet
and scalp surface.
To do this we'll need coregistration information. See
tut_forward for more details.
Here we just illustrate usage.
End of explanation |
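A sketch of exploring a few more time points, as the comment above suggests (the times here are arbitrary examples):
for t in [0.05, 0.1, 0.15]:
    evoked_l_aud.plot_field(maps, time=t)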
6,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal Contact Binary System
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Here we'll initialize a default binary, but ask for it to be created as a contact system.
For more details see the contact binary hierarchy tutorial.
Step3: Adding Datasets
Step4: Running Compute
Step5: Synthetics
To ensure compatibility with computing synthetics in detached and semi-detached systems in Phoebe, the synthetic meshes for our overcontact system are attached to each component separately, instead of the contact envelope.
Step6: Plotting
Meshes
Step7: Orbits
Step8: Light Curves
Step9: RVs | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: Minimal Contact Binary System
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b = phoebe.default_binary(contact_binary=True)
Explanation: Here we'll initialize a default binary, but ask for it to be created as a contact system.
For more details see the contact binary hierarchy tutorial.
End of explanation
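To see how the two stars and their common envelope are arranged, the hierarchy can be printed; a sketch (the exact output depends on the PHOEBE version):
print(b.hierarchy)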
b.add_dataset('mesh', compute_times=[0], dataset='mesh01')
b.add_dataset('orb', compute_times=np.linspace(0,1,201), dataset='orb01')
b.add_dataset('lc', times=np.linspace(0,1,21), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,21), dataset='rv01')
Explanation: Adding Datasets
End of explanation
b.run_compute(irrad_method='none')
Explanation: Running Compute
End of explanation
print(b['mesh01@model'].components)
Explanation: Synthetics
To ensure compatibility with computing synthetics in detached and semi-detached systems in Phoebe, the synthetic meshes for our overcontact system are attached to each component separately, instead of the contact envelope.
End of explanation
afig, mplfig = b['mesh01@model'].plot(x='ws', show=True)
Explanation: Plotting
Meshes
End of explanation
afig, mplfig = b['orb01@model'].plot(x='ws',show=True)
Explanation: Orbits
End of explanation
afig, mplfig = b['lc01@model'].plot(show=True)
Explanation: Light Curves
End of explanation
afig, mplfig = b['rv01@model'].plot(show=True)
Explanation: RVs
End of explanation |
6,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Image Captioning with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step1: Download and prepare the MS-COCO dataset
We will use the MS-COCO dataset to train our model. This dataset contains >82,000 images, each of which has been annotated with at least 5 different captions. The code below will download and extract the dataset automatically.
Caution
Step2: Optionally, limit the size of the training set for faster training
For this example, we'll select a subset of 30,000 captions and use these and the corresponding images to train our model. As always, captioning quality will improve if you choose to use more data.
Step3: Preprocess the images using InceptionV3
Next, we will use InceptionV3 (pretrained on Imagenet) to classify each image. We will extract features from the last convolutional layer.
First, we will need to convert the images into the format inceptionV3 expects by
Step4: Initialize InceptionV3 and load the pretrained Imagenet weights
To do so, we'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture.
* Each image is forwarded through the network and the vector that we get at the end is stored in a dictionary (image_name --> feature_vector).
* We use the last convolutional layer because we are using attention in this example. The shape of the output of this layer is 8x8x2048.
* We avoid doing this during training so it does not become a bottleneck.
* After all the images are passed through the network, we pickle the dictionary and save it to disk.
Step5: Caching the features extracted from InceptionV3
We will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this would exceed the memory limitations of Colab (although these may change, an instance appears to have about 12GB of memory currently).
Performance could be improved with a more sophisticated caching strategy (e.g., by sharding the images to reduce random access disk I/O) at the cost of more code.
This will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you could
Step6: Preprocess and tokenize the captions
First, we'll tokenize the captions (e.g., by splitting on spaces). This will give us a vocabulary of all the unique words in the data (e.g., "surfing", "football", etc).
Next, we'll limit the vocabulary size to the top 5,000 words to save memory. We'll replace all other words with the token "UNK" (for unknown).
Finally, we create a word --> index mapping and vice-versa.
We will then pad all sequences to be the same length as the longest one.
Step7: Split the data into training and testing
Step8: Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.
Step9: Model
Fun fact, the decoder below is identical to the one in the example for Neural Machine Translation with Attention.
The model architecture is inspired by the Show, Attend and Tell paper.
In this example, we extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048).
We squash that to a shape of (64, 2048).
This vector is then passed through the CNN Encoder(which consists of a single Fully connected layer).
The RNN(here GRU) attends over the image to predict the next word.
Step10: Training
We extract the features stored in the respective .npy files and then pass those features through the encoder.
The encoder output, hidden state (initialized to 0) and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients, apply them to the optimizer and backpropagate.
Step11: Caption!
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Step12: Try it on your own images
For fun, below we've provided a method you can use to caption your own images with the model we've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!) | Python Code:
# Import TensorFlow and enable eager execution
# This code requires TensorFlow version >=1.9
import tensorflow as tf
tf.enable_eager_execution()
# We'll generate plots of attention in order to see which parts of an image
# our model focuses on during captioning
import matplotlib.pyplot as plt
# Scikit-learn includes many helpful utilities
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import re
import numpy as np
import os
import time
import json
from glob import glob
from PIL import Image
import pickle
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Image Captioning with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
Image captioning is the task of generating a caption for an image. Given an image like this:
Image Source, License: Public Domain
Our goal is to generate a caption, such as "a surfer riding on a wave". Here, we'll use an attention-based model. This enables us to see which parts of the image the model focuses on as it generates a caption.
This model architecture below is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.
The code uses tf.keras and eager execution, which you can learn more about in the linked guides.
This notebook is an end-to-end example. If you run it, it will download the MS-COCO dataset, preprocess and cache a subset of the images using Inception V3, train an encoder-decoder model, and use it to generate captions on new images.
The code requires TensorFlow version >=1.9. If you're running this in Colab
In this example, we're training on a relatively small amount of data as an example. On a single P100 GPU, this example will take about ~2 hours to train. We train on the first 30,000 captions (corresponding to about ~20,000 images depending on shuffling, as there are multiple captions per image in the dataset)
End of explanation
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
extract = True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'
name_of_zip = 'train2014.zip'
if not os.path.exists(os.path.abspath('.') + '/' + name_of_zip):
image_zip = tf.keras.utils.get_file(name_of_zip,
cache_subdir=os.path.abspath('.'),
origin = 'http://images.cocodataset.org/zips/train2014.zip',
extract = True)
PATH = os.path.dirname(image_zip)+'/train2014/'
else:
PATH = os.path.abspath('.')+'/train2014/'
Explanation: Download and prepare the MS-COCO dataset
We will use the MS-COCO dataset to train our model. This dataset contains >82,000 images, each of which has been annotated with at least 5 different captions. The code below will download and extract the dataset automatically.
Caution: large download ahead. We'll use the training set, it's a 13GB file.
End of explanation
# read the json file
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# storing the captions and the image name in vectors
all_captions = []
all_img_name_vector = []
for annot in annotations['annotations']:
caption = '<start> ' + annot['caption'] + ' <end>'
image_id = annot['image_id']
full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id)
all_img_name_vector.append(full_coco_image_path)
all_captions.append(caption)
# shuffling the captions and image_names together
# setting a random state
train_captions, img_name_vector = shuffle(all_captions,
all_img_name_vector,
random_state=1)
# selecting the first 30000 captions from the shuffled set
num_examples = 30000
train_captions = train_captions[:num_examples]
img_name_vector = img_name_vector[:num_examples]
len(train_captions), len(all_captions)
Explanation: Optionally, limit the size of the training set for faster training
For this example, we'll select a subset of 30,000 captions and use these and the corresponding images to train our model. As always, captioning quality will improve if you choose to use more data.
End of explanation
def load_image(image_path):
img = tf.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize_images(img, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
Explanation: Preprocess the images using InceptionV3
Next, we will use InceptionV3 (pretrained on Imagenet) to classify each image. We will extract features from the last convolutional layer.
First, we will need to convert the images into the format inceptionV3 expects by:
* Resizing the image to (299, 299)
* Using the preprocess_input method to place the pixels in the range of -1 to 1 (to match the format of the images used to train InceptionV3).
End of explanation
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
Explanation: Initialize InceptionV3 and load the pretrained Imagenet weights
To do so, we'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture.
* Each image is forwarded through the network and the vector that we get at the end is stored in a dictionary (image_name --> feature_vector).
* We use the last convolutional layer because we are using attention in this example. The shape of the output of this layer is 8x8x2048.
* We avoid doing this during training so it does not become a bottleneck.
* After all the images are passed through the network, we pickle the dictionary and save it to disk.
End of explanation
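A quick shape check (a sketch reusing load_image and the first image path from above) to confirm the 8x8x2048 output described here:
sample_img, _ = load_image(img_name_vector[0])
sample_features = image_features_extract_model(tf.expand_dims(sample_img, 0))
print(sample_features.shape)  # expected: (1, 8, 8, 2048)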
# getting the unique images
encode_train = sorted(set(img_name_vector))
# feel free to change the batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(
encode_train).map(load_image).batch(16)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
Explanation: Caching the features extracted from InceptionV3
We will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this would exceed the memory limitations of Colab (although these may change, an instance appears to have about 12GB of memory currently).
Performance could be improved with a more sophisticated caching strategy (e.g., by sharding the images to reduce random access disk I/O) at the cost of more code.
This will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you could: install tqdm (!pip install tqdm), then change this line:
for img, path in image_dataset:
to:
for img, path in tqdm(image_dataset):.
End of explanation
# This will find the maximum length of any caption in our dataset
def calc_max_length(tensor):
return max(len(t) for t in tensor)
# The steps above are a general process of dealing with text processing
# choosing the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
oov_token="<unk>",
filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~ ')
tokenizer.fit_on_texts(train_captions)
train_seqs = tokenizer.texts_to_sequences(train_captions)
tokenizer.word_index = {key:value for key, value in tokenizer.word_index.items() if value <= top_k}
# putting <unk> token in the word2idx dictionary
tokenizer.word_index[tokenizer.oov_token] = top_k + 1
tokenizer.word_index['<pad>'] = 0
# creating the tokenized vectors
train_seqs = tokenizer.texts_to_sequences(train_captions)
# creating a reverse mapping (index -> word)
index_word = {value:key for key, value in tokenizer.word_index.items()}
# padding each vector to the max_length of the captions
# if the max_length parameter is not provided, pad_sequences calculates that automatically
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')
# calculating the max_length
# used to store the attention weights
max_length = calc_max_length(train_seqs)
Explanation: Preprocess and tokenize the captions
First, we'll tokenize the captions (e.g., by splitting on spaces). This will give us a vocabulary of all the unique words in the data (e.g., "surfing", "football", etc).
Next, we'll limit the vocabulary size to the top 5,000 words to save memory. We'll replace all other words with the token "UNK" (for unknown).
Finally, we create a word --> index mapping and vice-versa.
We will then pad all sequences to be the same length as the longest one.
End of explanation
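A small illustrative check (a sketch): compare one raw caption with its padded index vector and the padded length.
print(train_captions[0])
print(cap_vector[0])
print(max_length)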
# Create training and validation sets using 80-20 split
img_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector,
cap_vector,
test_size=0.2,
random_state=0)
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
Explanation: Split the data into training and testing
End of explanation
# feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = len(tokenizer.word_index)
# shape of the vector extracted from InceptionV3 is (64, 2048)
# these two variables represent that
features_shape = 2048
attention_features_shape = 64
# loading the numpy files
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# using map to load the numpy files in parallel
# NOTE: Be sure to set num_parallel_calls to the number of CPU cores you have
# https://www.tensorflow.org/api_docs/python/tf/py_func
dataset = dataset.map(lambda item1, item2: tf.py_func(
map_func, [item1, item2], [tf.float32, tf.int32]), num_parallel_calls=8)
# shuffling and batching
dataset = dataset.shuffle(BUFFER_SIZE)
# https://www.tensorflow.org/api_docs/python/tf/contrib/data/batch_and_drop_remainder
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(1)
Explanation: Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.
End of explanation
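A quick peek at one batch (a sketch) to confirm the shapes the model will see: image features of (batch_size, 64, 2048) and padded captions of (batch_size, max_length).
for img_tensor, target in dataset.take(1):
    print(img_tensor.shape, target.shape)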
def gru(units):
# If you have a GPU, we recommend using the CuDNNGRU layer (it provides a
# significant speedup).
if tf.test.is_gpu_available():
return tf.keras.layers.CuDNNGRU(units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, 64, hidden_size)
score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))
# attention_weights shape == (batch_size, 64, 1)
# we get 1 at the last axis because we are applying score to self.V
attention_weights = tf.nn.softmax(self.V(score), axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class CNN_Encoder(tf.keras.Model):
# Since we have already extracted the features and dumped it using pickle
# This encoder passes those features through a Fully connected layer
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.units)
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, vocab_size)
optimizer = tf.train.AdamOptimizer()
# We are masking the loss calculated for padding
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
Explanation: Model
Fun fact, the decoder below is identical to the one in the example for Neural Machine Translation with Attention.
The model architecture is inspired by the Show, Attend and Tell paper.
In this example, we extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048).
We squash that to a shape of (64, 2048).
This vector is then passed through the CNN Encoder(which consists of a single Fully connected layer).
The RNN(here GRU) attends over the image to predict the next word.
End of explanation
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []
EPOCHS = 20
for epoch in range(EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * BATCH_SIZE, 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss += (loss / int(target.shape[1]))
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables), tf.train.get_or_create_global_step())
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
loss.numpy() / int(target.shape[1])))
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / len(cap_vector))
print ('Epoch {} Loss {:.6f}'.format(epoch + 1,
total_loss/len(cap_vector)))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
Explanation: Training
We extract the features stored in the respective .npy files and then pass those features through the encoder.
The encoder output, hidden state (initialized to 0) and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients, apply them to the optimizer and backpropagate.
End of explanation
def evaluate(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)
result = []
for i in range(max_length):
predictions, hidden, attention_weights = decoder(dec_input, features, hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result.append(index_word[predicted_id])
if index_word[predicted_id] == '<end>':
return result, attention_plot
dec_input = tf.expand_dims([predicted_id], 0)
attention_plot = attention_plot[:len(result), :]
return result, attention_plot
def plot_attention(image, result, attention_plot):
temp_image = np.array(Image.open(image))
fig = plt.figure(figsize=(10, 10))
len_result = len(result)
for l in range(len_result):
temp_att = np.resize(attention_plot[l], (8, 8))
ax = fig.add_subplot(len_result//2, len_result//2, l+1)
ax.set_title(result[l])
img = ax.imshow(temp_image)
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())
plt.tight_layout()
plt.show()
# captions on the validation set
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([index_word[i] for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image)
print ('Real Caption:', real_caption)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
# opening the image
Image.open(img_name_val[rid])
Explanation: Caption!
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
End of explanation
image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension,
origin=image_url)
result, attention_plot = evaluate(image_path)
print ('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
Explanation: Try it on your own images
For fun, below we've provided a method you can use to caption your own images with the model we've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!)
End of explanation |
6,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Images</h1>
<table width="100%">
<tr style="background-color
Step1: Load your first image and display it
Step2: Image Construction
There are a variety of ways to create an image.
The following components are required for a complete definition of an image
Step3: Basic Image Attributes
You can change the image origin, spacing and direction. Making such changes to an image already containing data should be done cautiously.
Step4: Image dimension queries
Step5: What is the depth of a 2D image?
Step6: Pixel/voxel type queries
Step7: What is the dimension and size of a Vector image and its data?
Step8: Accessing Pixels and Slicing
The Image class's member functions GetPixel and SetPixel provide an ITK-like interface for pixel access.
Step9: Slicing of SimpleITK images returns a copy of the image data.
This is similar to slicing Python lists and differs from the "view" returned by slicing numpy arrays.
Step10: Draw a square on top of the logo image
Step11: Conversion between numpy and SimpleITK
SimpleITK and numpy indexing access is in opposite order!
SimpleITK
Step12: From numpy to SimpleITK
Remember to set the image's origin, spacing, and possibly direction cosine matrix. The default values may not match the physical dimensions of your image.
Step13: Image operations
SimpleITK supports basic arithmetic operations between images, <b>taking into account their physical space</b>.
Repeatedly run this cell. Fix the error (comment out the SetDirection, then SetSpacing). Why doesn't the SetOrigin line cause a problem? How close do two physical attributes need to be in order to be considered equivalent?
Step14: Reading and Writing
SimpleITK can read and write images stored in a single file, or a set of files (e.g. DICOM series).
Images stored in the DICOM format have a meta-data dictionary associated with them, which is populated with the DICOM tags. When a DICOM series is read as a single image, the meta-data information is not available since DICOM tags are specific to a file. If you need the meta-data, access the dictionary for each file by reading them separately.
In the following cell, we read an image in JPEG format, and write it as PNG and BMP. File formats are deduced from the file extension. Appropriate pixel type is also set - you can override this and force a pixel type of your choice.
Step15: Read an image in JPEG format and cast the pixel type according to user selection.
Step16: Read a DICOM series and write it as a single mha file
Step17: Write an image series as JPEG. The WriteImage function receives a volume and a list of image names and writes the volume according to the z axis. For a displayable result we need to rescale the image intensities (default is [0,255]) since the JPEG format requires a cast to the UInt8 pixel type.
Step18: Select a specific DICOM series from a directory and only then load user selection.
Step19: Image Display
While SimpleITK does not do visualization, it does contain a built in Show method. This function writes the image out to disk and then launches a program for visualization. By default it is configured to use <a href="http
Step20: By converting into a numpy array, matplotlib can be used for visualization and for integration into the scientific python environment. This is good for illustrative purposes, but is problematic when working with images that have a high dynamic range or non-isotropic spacing - most 3D medical images.
When working with medical images it is recommended to visualize them using dedicated software such as the freely available 3D Slicer or ITK-SNAP.
Step21: So if you really want to look at your images, use the sitk.Show command
Step22: Use a different viewer by setting environment variable(s). Do this from within your ipython notebook using 'magic' functions, or set in a more permanent manner using your OS specific convention. | Python Code:
import SimpleITK as sitk
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from ipywidgets import interact, fixed
import os
OUTPUT_DIR = 'Output'
# Utility method that either downloads data from the MIDAS repository or
# if already downloaded returns the file name for reading from disk (cached data).
from downloaddata import fetch_data as fdata
Explanation: <h1 align="center">Images</h1>
<table width="100%">
<tr style="background-color: red;"><td><font color="white">SimpleITK conventions:</font></td></tr>
<tr><td>
<ul>
<li>Image access is in x,y,z order, image.GetPixel(x,y,z) or image[x,y,z], with zero based indexing.</li>
<li>If the output of an ITK filter has non-zero starting index, then the index will be set to 0, and the origin adjusted accordingly.</li>
</ul>
</td></tr>
</table>
The unique feature of SimpleITK (derived from ITK) as a toolkit for image manipulation and analysis is that it views <b>images as physical objects occupying a bounded region in physical space</b>. In addition images can have different spacing between pixels along each axis, and the axes are not necessarily orthogonal. The following figure illustrates these concepts.
<img src="ImageOriginAndSpacing.png" style="width:700px"/><br><br>
Pixel Types
The pixel type is represented as an enumerated type. The following is a table of the enumerated list.
<table>
<tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>
<tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>
<tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>
<tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>
<tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>
<tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>
<tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>
<tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>
<tr><td>sitkFloat32</td><td>32 bit float</td></tr>
<tr><td>sitkFloat64</td><td>64 bit float</td></tr>
<tr><td>sitkComplexFloat32</td><td>complex number of 32 bit float</td></tr>
<tr><td>sitkComplexFloat64</td><td>complex number of 64 bit float</td></tr>
<tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>
<tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>
<tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>
<tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>
<tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>
<tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>
<tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>
<tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>
<tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>
<tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>
<tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>
<tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>
<tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>
<tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>
</table>
There is also sitkUnknown, which is used for undefined or erroneous pixel ID's. It has a value of -1.
The 64-bit integer types are not available on all distributions. When not available the value is sitkUnknown.
End of explanation
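A small illustration (a sketch) of the physical-space view described above: with a non-trivial origin and spacing, and an identity direction matrix, an index maps to origin + index * spacing.
demo = sitk.Image(32, 32, sitk.sitkUInt8)
demo.SetOrigin((10.0, 20.0))
demo.SetSpacing((0.5, 2.0))
print(demo.TransformIndexToPhysicalPoint((1, 1)))  # (10.5, 22.0)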
logo = sitk.ReadImage(fdata('SimpleITK.jpg'))
plt.imshow(sitk.GetArrayFromImage(logo))
plt.axis('off');
Explanation: Load your first image and display it
End of explanation
image_3D = sitk.Image(256, 128, 64, sitk.sitkInt16)
image_2D = sitk.Image(64, 64, sitk.sitkFloat32)
image_2D = sitk.Image([32,32], sitk.sitkUInt32)
image_RGB = sitk.Image([128,64], sitk.sitkVectorUInt8, 3)
Explanation: Image Construction
There are a variety of ways to create an image.
The following components are required for a complete definition of an image:
<ol>
<li>Pixel type [fixed on creation, no default]: unsigned 32 bit integer, sitkVectorUInt8, etc., see list above.</li>
<li> Sizes [fixed on creation, no default]: number of pixels/voxels in each dimension. This quantity implicitly defines the image dimension.</li>
<li> Origin [default is zero]: coordinates of the pixel/voxel with index (0,0,0) in physical units (i.e. mm).</li>
<li> Spacing [default is one]: Distance between adjacent pixels/voxels in each dimension given in physical units.</li>
<li> Direction matrix [default is identity]: mapping, rotation, between direction of the pixel/voxel axes and physical directions.</li>
</ol>
Initial pixel/voxel values are set to zero.
End of explanation
image_3D.SetOrigin((78.0, 76.0, 77.0))
image_3D.SetSpacing([0.5,0.5,3.0])
print(image_3D.GetOrigin())
print(image_3D.GetSize())
print(image_3D.GetSpacing())
print(image_3D.GetDirection())
Explanation: Basic Image Attributes
You can change the image origin, spacing and direction. Making such changes to an image already containing data should be done cautiously.
End of explanation
print(image_3D.GetDimension())
print(image_3D.GetWidth())
print(image_3D.GetHeight())
print(image_3D.GetDepth())
Explanation: Image dimension queries:
End of explanation
print(image_2D.GetSize())
print(image_2D.GetDepth())
Explanation: What is the depth of a 2D image?
End of explanation
print(image_3D.GetPixelIDValue())
print(image_3D.GetPixelIDTypeAsString())
print(image_3D.GetNumberOfComponentsPerPixel())
Explanation: Pixel/voxel type queries:
End of explanation
print(image_RGB.GetDimension())
print(image_RGB.GetSize())
print(image_RGB.GetNumberOfComponentsPerPixel())
Explanation: What is the dimension and size of a Vector image and its data?
End of explanation
help(image_3D.GetPixel)
print(image_3D.GetPixel(0, 0, 0))
image_3D.SetPixel(0, 0, 0, 1)
print(image_3D.GetPixel(0, 0, 0))
# This can also be done using pythonic notation.
print(image_3D[0,0,1])
image_3D[0,0,1] = 2
print(image_3D[0,0,1])
Explanation: Accessing Pixels and Slicing
The Image class's member functions GetPixel and SetPixel provide an ITK-like interface for pixel access.
End of explanation
# Brute force subsampling
logo_subsampled = logo[::2,::2]
# Get the sub-image containing the word Simple
simple = logo[0:115,:]
# Get the sub-image containing the word Simple and flip it
simple_flipped = logo[115:0:-1,:]
n = 4
plt.subplot(n,1,1)
plt.imshow(sitk.GetArrayFromImage(logo))
plt.axis('off');
plt.subplot(n,1,2)
plt.imshow(sitk.GetArrayFromImage(logo_subsampled))
plt.axis('off');
plt.subplot(n,1,3)
plt.imshow(sitk.GetArrayFromImage(simple))
plt.axis('off')
plt.subplot(n,1,4)
plt.imshow(sitk.GetArrayFromImage(simple_flipped))
plt.axis('off');
Explanation: Slicing of SimpleITK images returns a copy of the image data.
This is similar to slicing Python lists and differs from the "view" returned by slicing numpy arrays.
End of explanation
# Version 0: get the numpy array and assign the value via broadcast - later on you will need to construct
# a new image from the array
logo_pixels = sitk.GetArrayFromImage(logo)
logo_pixels[0:10,0:10] = [0,255,0]
# Version 1: generates an error, the image slicing returns a new image and you cannot assign a value to an image
#logo[0:10,0:10] = [255,0,0]
# Version 2: image slicing returns a new image, so all assignments here will not have any effect on the original
# 'logo' image
logo_subimage = logo[0:10, 0:10]
for x in range(0,10):
for y in range(0,10):
logo_subimage[x,y] = [255,0,0]
# Version 3: modify the original image, iterate and assing a value to each pixel
#for x in range(0,10):
# for y in range(0,10):
# logo[x,y] = [255,0,0]
plt.subplot(2,1,1)
plt.imshow(sitk.GetArrayFromImage(logo))
plt.axis('off')
plt.subplot(2,1,2)
plt.imshow(logo_pixels)
plt.axis('off');
Explanation: Draw a square on top of the logo image:
After running this cell, uncomment "Version 3" and see its effect.
End of explanation
nda = sitk.GetArrayFromImage(image_3D)
print(image_3D.GetSize())
print(nda.shape)
nda = sitk.GetArrayFromImage(image_RGB)
print(image_RGB.GetSize())
print(nda.shape)
Explanation: Conversion between numpy and SimpleITK
SimpleITK and numpy indexing access is in opposite order!
SimpleITK: image[x,y,z]<br>
numpy: image_numpy_array[z,y,x]
From SimpleITK to numpy
End of explanation
nda = np.zeros((10,20,3))
#if this is supposed to be a 3D gray scale image [x=3, y=20, z=10]
img = sitk.GetImageFromArray(nda)
print(img.GetSize())
#if this is supposed to be a 2D color image [x=20,y=10]
img = sitk.GetImageFromArray(nda, isVector=True)
print(img.GetSize())
Explanation: From numpy to SimpleITK
Remember to set the image's origin, spacing, and possibly direction cosine matrix. The default values may not match the physical dimensions of your image.
End of explanation
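A sketch of carrying the physical meta-data over when converting back from numpy, using an existing image as the reference:
arr = sitk.GetArrayFromImage(image_3D)
img_from_arr = sitk.GetImageFromArray(arr)
img_from_arr.CopyInformation(image_3D)  # copies origin, spacing and direction
print(img_from_arr.GetOrigin(), img_from_arr.GetSpacing())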
img1 = sitk.Image(24,24, sitk.sitkUInt8)
img1[0,0] = 0
img2 = sitk.Image(img1.GetSize(), sitk.sitkUInt8)
img2.SetDirection([0,1,0.5,0.5])
img2.SetSpacing([0.5,0.8])
img2.SetOrigin([0.000001,0.000001])
img2[0,0] = 255
img3 = img1 + img2
print(img3[0,0])
Explanation: Image operations
SimpleITK supports basic arithmetic operations between images, <b>taking into account their physical space</b>.
Repeatedly run this cell. Fix the error (comment out the SetDirection, then SetSpacing). Why doesn't the SetOrigin line cause a problem? How close do two physical attributes need to be in order to be considered equivalent?
End of explanation
img = sitk.ReadImage(fdata('SimpleITK.jpg'))
print(img.GetPixelIDTypeAsString())
# write as PNG and BMP
sitk.WriteImage(img, os.path.join(OUTPUT_DIR, 'SimpleITK.png'))
sitk.WriteImage(img, os.path.join(OUTPUT_DIR, 'SimpleITK.bmp'))
Explanation: Reading and Writing
SimpleITK can read and write images stored in a single file, or a set of files (e.g. DICOM series).
Images stored in the DICOM format have a meta-data dictionary associated with them, which is populated with the DICOM tags. When a DICOM series is read as a single image, the meta-data information is not available since DICOM tags are specific to a file. If you need the meta-data, access the dictionary for each file by reading them separately.
In the following cell, we read an image in JPEG format, and write it as PNG and BMP. File formats are deduced from the file extension. Appropriate pixel type is also set - you can override this and force a pixel type of your choice.
End of explanation
# Several pixel types, some make sense in this case (vector types) and some are just show
# that the user's choice will force the pixel type even when it doesn't make sense.
pixel_types = { 'sitkUInt8': sitk.sitkUInt8,
'sitkUInt16' : sitk.sitkUInt16,
'sitkFloat64' : sitk.sitkFloat64,
'sitkVectorUInt8' : sitk.sitkVectorUInt8,
'sitkVectorUInt16' : sitk.sitkVectorUInt16,
'sitkVectorFloat64' : sitk.sitkVectorFloat64}
def pixel_type_dropdown_callback(pixel_type, pixel_types_dict):
#specify the file location and the pixel type we want
img = sitk.ReadImage(fdata('SimpleITK.jpg'), pixel_types_dict[pixel_type])
print(img.GetPixelIDTypeAsString())
print(img[0,0])
plt.imshow(sitk.GetArrayFromImage(img))
plt.axis('off')
interact(pixel_type_dropdown_callback, pixel_type=pixel_types.keys(), pixel_types_dict=fixed(pixel_types));
Explanation: Read an image in JPEG format and cast the pixel type according to user selection.
End of explanation
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
series_ID = '1.2.840.113619.2.290.3.3233817346.783.1399004564.515'
# Get the list of files belonging to a specific series ID.
reader = sitk.ImageSeriesReader()
# Use the functional interface to read the image series.
original_image = sitk.ReadImage(reader.GetGDCMSeriesFileNames(data_directory, series_ID))
# Write the image.
output_file_name_3D = os.path.join(OUTPUT_DIR, '3DImage.mha')
sitk.WriteImage(original_image, output_file_name_3D)
# Read it back again.
written_image = sitk.ReadImage(output_file_name_3D)
# Check that the original and written image are the same.
statistics_image_filter = sitk.StatisticsImageFilter()
statistics_image_filter.Execute(original_image - written_image)
# Check that the original and written files are the same
print('Max, Min differences are : {0}, {1}'.format(statistics_image_filter.GetMaximum(), statistics_image_filter.GetMinimum()))
Explanation: Read a DICOM series and write it as a single mha file
End of explanation
sitk.WriteImage(sitk.Cast(sitk.RescaleIntensity(written_image), sitk.sitkUInt8),
[os.path.join(OUTPUT_DIR, 'slice{0:03d}.jpg'.format(i)) for i in range(written_image.GetSize()[2])])
Explanation: Write an image series as JPEG. The WriteImage function receives a volume and a list of image names and writes the volume slice by slice along the z axis. For a displayable result we need to rescale the image intensities (default is [0,255]) since the JPEG format requires a cast to the UInt8 pixel type.
End of explanation
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
# Global variable 'selected_series' is updated by the interact function
selected_series = ''
def DICOM_series_dropdown_callback(series_to_load, series_dictionary):
global selected_series
# Print some information about the series from the meta-data dictionary
# DICOM standard part 6, Data Dictionary: http://medical.nema.org/medical/dicom/current/output/pdf/part06.pdf
img = sitk.ReadImage(series_dictionary[series_to_load][0])
tags_to_print = {'0010|0010': 'Patient name: ',
'0008|0060' : 'Modality: ',
'0008|0021' : 'Series date: ',
'0008|0080' : 'Institution name: ',
'0008|1050' : 'Performing physician\'s name: '}
for tag in tags_to_print:
try:
print(tags_to_print[tag] + img.GetMetaData(tag))
except: # Ignore if the tag isn't in the dictionary
pass
selected_series = series_to_load
# Directory contains multiple DICOM studies/series, store
# in dictionary with key being the seriesID
reader = sitk.ImageSeriesReader()
series_file_names = {}
series_IDs = reader.GetGDCMSeriesIDs(data_directory)
# Check that we have at least one series
if series_IDs:
for series in series_IDs:
series_file_names[series] = reader.GetGDCMSeriesFileNames(data_directory, series)
interact(DICOM_series_dropdown_callback, series_to_load=series_IDs, series_dictionary=fixed(series_file_names));
else:
print('Data directory does not contain any DICOM series.')
reader.SetFileNames(series_file_names[selected_series])
img = reader.Execute()
npa = sitk.GetArrayFromImage(img)
# Display the image slice from the middle of the stack, z axis
z = img.GetDepth()//2
plt.imshow(sitk.GetArrayFromImage(img)[z,:,:], cmap=plt.cm.Greys_r)
plt.axis('off');
Explanation: Select a specific DICOM series from a directory and only then load user selection.
End of explanation
sitk.Show?
Explanation: Image Display
While SimpleITK does not do visualization, it does contain a built-in Show method. This function writes the image out to disk and then launches a program for visualization. By default it is configured to use <a href="http://imagej.nih.gov/ij/">ImageJ</a>, because it readily supports many medical image formats and loads quickly. However, the Show visualization program is easily customizable via environment variables:
<ul>
<li>SITK_SHOW_COMMAND: Viewer to use (<a href="http://www.itksnap.org">ITK-SNAP</a>, <a href="www.slicer.org">3D Slicer</a>...) </li>
<li>SITK_SHOW_COLOR_COMMAND: Viewer to use when displaying color images.</li>
<li>SITK_SHOW_3D_COMMAND: Viewer to use for 3D images.</li>
</ul>
End of explanation
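# Added sketch: the viewer can also be selected from Python by setting the environment
# variable before calling Show; the application path below is only an example.
import os
os.environ['SITK_SHOW_COMMAND'] = '/Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP'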
mr_image = sitk.ReadImage(fdata('training_001_mr_T1.mha'))
npa = sitk.GetArrayFromImage(mr_image)
# Display the image slice from the middle of the stack, z axis
z = mr_image.GetDepth()//2
npa_zslice = sitk.GetArrayFromImage(mr_image)[z,:,:]
# Three plots displaying the same data, how do we deal with the high dynamic range?
fig = plt.figure()
fig.set_size_inches(15,30)
fig.add_subplot(1,3,1)
plt.imshow(npa_zslice)
plt.title('default colormap')
plt.axis('off')
fig.add_subplot(1,3,2)
plt.imshow(npa_zslice,cmap=plt.cm.Greys_r);
plt.title('grey colormap')
plt.axis('off')
fig.add_subplot(1,3,3)
plt.title('grey colormap,\n scaling based on volumetric min and max values')
plt.imshow(npa_zslice,cmap=plt.cm.Greys_r, vmin=npa.min(), vmax=npa.max())
plt.axis('off');
# Display the image slice in the middle of the stack, x axis
x = mr_image.GetWidth()//2
npa_xslice = npa[:,:,x]
plt.imshow(npa_xslice, cmap=plt.cm.Greys_r)
plt.axis('off')
print('Image spacing: {0}'.format(mr_image.GetSpacing()))
# Collapse along the x axis
extractSliceFilter = sitk.ExtractImageFilter()
size = list(mr_image.GetSize())
size[0] = 0
extractSliceFilter.SetSize( size )
index = (x, 0, 0)
extractSliceFilter.SetIndex(index)
sitk_xslice = extractSliceFilter.Execute(mr_image)
# Resample slice to isotropic
original_spacing = sitk_xslice.GetSpacing()
original_size = sitk_xslice.GetSize()
min_spacing = min(sitk_xslice.GetSpacing())
new_spacing = [min_spacing, min_spacing]
new_size = [int(round(original_size[0]*(original_spacing[0]/min_spacing))),
int(round(original_size[1]*(original_spacing[1]/min_spacing)))]
resampleSliceFilter = sitk.ResampleImageFilter()
# Why is the image pixelated?
sitk_isotropic_xslice = resampleSliceFilter.Execute(sitk_xslice, new_size, sitk.Transform(), sitk.sitkNearestNeighbor, sitk_xslice.GetOrigin(),
new_spacing, sitk_xslice.GetDirection(), 0, sitk_xslice.GetPixelIDValue())
plt.imshow(sitk.GetArrayFromImage(sitk_isotropic_xslice), cmap=plt.cm.Greys_r)
plt.axis('off')
print('Image spacing: {0}'.format(sitk_isotropic_xslice.GetSpacing()))
Explanation: By converting into a numpy array, matplotlib can be used for visualization and integration into the scientific Python environment. This is good for illustrative purposes, but is problematic when working with images that have a high dynamic range or non-isotropic spacing - most 3D medical images.
When working with medical images it is recommended to visualize them using dedicated software such as the freely available 3D Slicer or ITK-SNAP.
End of explanation
try:
sitk.Show(mr_image)
except RuntimeError:
print('SimpleITK Show method could not find the viewer (ImageJ not installed or ' +
          'environment variable pointing to a non-existent viewer).')
Explanation: So if you really want to look at your images, use the sitk.Show command:
End of explanation
%env SITK_SHOW_COMMAND /Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP
try:
sitk.Show(mr_image)
except RuntimeError:
print('SimpleITK Show method could not find the viewer (ImageJ not installed or ' +
          'environment variable pointing to a non-existent viewer).')
%env SITK_SHOW_COMMAND '/Applications/ImageJ/ImageJ.app/Contents/MacOS/JavaApplicationStub'
try:
sitk.Show(mr_image)
except RuntimeError:
print('SimpleITK Show method could not find the viewer (ImageJ not installed or ' +
          'environment variable pointing to a non-existent viewer).')
%env SITK_SHOW_COMMAND '/Applications/Slicer.app/Contents/MacOS/Slicer'
sitk.Show(mr_image)
Explanation: Use a different viewer by setting environment variable(s). Do this from within your IPython notebook using 'magic' functions, or set it in a more permanent manner using your OS-specific convention.
End of explanation |
6,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
Copyright 2018 Allen B. Downey
MIT License
Step1: The Space Shuttle problem
Here's a problem from Bayesian Methods for Hackers
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see 1)
Step4: Grid algorithm
We can solve the problem first using a grid algorithm, with parameters b0 and b1, and
$\mathrm{logit}(p) = b0 + b1 * T$
and each datum being a temperature T and a boolean outcome fail, which is true if there was damage and false otherwise.
Hint
Step5: According to the posterior distribution, what was the probability of damage when the shuttle launched at 31 degF?
Step6: MCMC
Implement this model using MCMC. As a starting place, you can use this example from the PyMC3 docs.
As a challenge, try writing the model more explicitly, rather than using the GLM module. | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
Explanation: Think Bayes
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
# !wget https://raw.githubusercontent.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter2_MorePyMC/data/challenger_data.csv
columns = ['Date', 'Temperature', 'Incident']
df = pd.read_csv('challenger_data.csv', parse_dates=[0])
df.drop(labels=[3, 24], inplace=True)
df
df['Incident'] = df['Damage Incident'].astype(float)
df
import matplotlib.pyplot as plt
plt.scatter(df.Temperature, df.Incident, s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
Explanation: The Space Shuttle problem
Here's a problem from Bayesian Methods for Hackers
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see 1):
End of explanation
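# Added check (a small sketch, not from the original notebook): how many of the
# remaining flights in the cleaned DataFrame had an O-ring damage incident?
df['Incident'].value_counts(dropna=False)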
from scipy.special import expit
class Logistic(Suite, Joint):
def Likelihood(self, data, hypo):
        """data: T, fail
        hypo: b0, b1
        """
return 1
# Solution
from scipy.special import expit
class Logistic(Suite, Joint):
def Likelihood(self, data, hypo):
        """data: T, fail
        hypo: b0, b1
        """
temp, fail = data
b0, b1 = hypo
log_odds = b0 + b1 * temp
p_fail = expit(log_odds)
if fail == 1:
return p_fail
elif fail == 0:
return 1-p_fail
else:
# NaN
return 1
b0 = np.linspace(0, 50, 101);
b1 = np.linspace(-1, 1, 101);
from itertools import product
hypos = product(b0, b1)
suite = Logistic(hypos);
for data in zip(df.Temperature, df.Incident):
print(data)
suite.Update(data)
thinkplot.Pdf(suite.Marginal(0))
thinkplot.decorate(xlabel='Intercept',
ylabel='PMF',
title='Posterior marginal distribution')
thinkplot.Pdf(suite.Marginal(1))
thinkplot.decorate(xlabel='Log odds ratio',
ylabel='PMF',
title='Posterior marginal distribution')
Explanation: Grid algorithm
We can solve the problem first using a grid algorithm, with parameters b0 and b1, and
$\mathrm{logit}(p) = b0 + b1 * T$
and each datum being a temperature T and a boolean outcome fail, which is true if there was damage and false otherwise.
Hint: the expit function from scipy.special computes the inverse of the logit function.
End of explanation
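# Quick sanity check (an added sketch): expit from scipy.special inverts the logit function.
from scipy.special import expit, logit
print(expit(0.0))            # 0.5
print(expit(logit(0.75)))    # 0.75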
# Solution
T = 31
total = 0
for hypo, p in suite.Items():
b0, b1 = hypo
log_odds = b0 + b1 * T
p_fail = expit(log_odds)
total += p * p_fail
total
# Solution
pred = suite.Copy()
pred.Update((31, True))
Explanation: According to the posterior distribution, what was the probability of damage when the shuttle launched at 31 degF?
End of explanation
from warnings import simplefilter
simplefilter('ignore', FutureWarning)
import pymc3 as pm
# Solution
with pm.Model() as model:
pm.glm.GLM.from_formula('Incident ~ Temperature', df,
family=pm.glm.families.Binomial())
start = pm.find_MAP()
trace = pm.sample(1000, start=start, tune=1000)
pm.traceplot(trace);
# Solution
with pm.Model() as model:
pm.glm.GLM.from_formula('Incident ~ Temperature', df,
family=pm.glm.families.Binomial())
trace = pm.sample(1000, tune=1000)
Explanation: MCMC
Implement this model using MCMC. As a starting place, you can use this example from the PyMC3 docs.
As a challege, try writing the model more explicitly, rather than using the GLM module.
End of explanation |
6,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading a file using CF module
The main difference with the previous example is the way we will read the data from the file.
Instead of the netCDF4 module, we will use the cf-python package, which implements the CF data model for the reading, writing and processing of data and metadata.
Step1: The data file is the same.
Step2: Read the file
We use the function read. Doing so, we easily obtain a nice summary of the file content.
Step3: We see that the file contains 4 variables
Step4: The number of variables which have a standard name corresponding to sea_water_temperature is
Step5: but in other cases (ex
Step6: We inspect the corresponding coordinates
Step7: To extract the time variable
Step8: and to get the values
Step9: A simple plot | Python Code:
%matplotlib inline
import cf
import netCDF4
import matplotlib.pyplot as plt
Explanation: Reading a file using CF module
The main difference with the previous example is the way we will read the data from the file.
Instead of the netCDF4 module, we will use the cf-python package, which implements the CF data model for the reading, writing and processing of data and metadata.
End of explanation
dataurl = "http://thredds.socib.es/thredds/dodsC/mooring/conductivity_and_temperature_recorder/buoy_canaldeibiza-scb_sbe37006/L1/dep0003_buoy-canaldeibiza_scb-sbe37006_L1_latest.nc"
Explanation: The data file is the same.
End of explanation
f = cf.read(dataurl)
print(f)
Explanation: Read the file
We use the function read. Doing so, we easily obtain a nice summary of the file content.
End of explanation
temperature = f.select('sea_water_temperature')
temperature
Explanation: We see that the file contains several variables, among them:
1. temperature
2. salinity
3. conductivity.
Each of them has 4 dimensions: longitude, latitude, time and depth.
Read variable, coordinates and units
From the previous commands we cannot know the name of the variables within the file. But that's not necessary. Temperature can be retrieved using its standard name:
End of explanation
print(len(temperature))
Explanation: The number of variables which have a standard name corresponding to sea_water_temperature is:
End of explanation
temperature_values = temperature[0].array
temperature_units = temperature[0].units
print(temperature_values[0:20])
print('Temperature units: ' + temperature_units)
Explanation: but in other cases (e.g. different sensors measuring temperature, with their data in a common file), one can obtain more than one variable.
To get the temperature values, we select the first element (index = 0 in Python, not 1) and convert it into an array.
End of explanation
temperature[0].coords()
Explanation: We inspect the corresponding coordinates:
End of explanation
time = temperature[0].coord('time')
time
Explanation: To extract the time variable:
End of explanation
time_values = temperature[0].coord('time').array
time_units = temperature[0].coord('time').units
print(time_values[0:20])
print(' ')
print('Time units: ' + time_units)
Explanation: and to get the values:
End of explanation
time2 = netCDF4.num2date(time_values, time_units)
plt.plot(time2, temperature_values)
plt.ylabel(temperature_units, fontsize=20)
plt.show()
Explanation: A simple plot
End of explanation |
6,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Natural language inference
Step1: Contents
Overview
Our version of the task
Primary resources
Set-up
SNLI
SNLI properties
Working with SNLI
MultiNLI
MultiNLI properties
Working with MultiNLI
Annotated MultiNLI subsets
Adversarial NLI
Adversarial NLI properties
Working with Adversarial NLI
Other NLI datasets
Overview
Natural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language
Step2: SNLI
SNLI properties
For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators.
All the premises are captions from the Flickr30K corpus.
Some of the sentences rather depressingly reflect stereotypes (Rudinger et al. 2017).
550,152 train examples; 10K dev; 10K test
Mean length in tokens
Step3: The dataset has three splits
Step4: The class nli.NLIReader is used by all the readers discussed here.
Because the datasets are so large, it is often useful to be able to randomly sample from them. This is supported with the keyword argument samp_percentage. For example, the following samples approximately 10% of the examples from the SNLI training set
Step5: The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.)
All of the readers have a read method that yields NLIExample example instances. For SNLI, these have the following attributes
Step6: Use filter_unlabeled=True (the default) to silently drop the examples for which gold_label is -.
Let's look at a specific example in some detail
Step7: MultiNLI
MultiNLI properties
Train premises drawn from five genres
Step8: For MultiNLI, we have the following splits
Step9: The MultiNLI test sets are available on Kaggle (matched version and mismatched version).
The interface to these is the same as for the SNLI readers
Step10: The NLIExample instances for MultiNLI have nearly all the attributes that SNLI is supposed to have!
promptID
Step11: No examples in the MultiNLI train set lack a gold label. The original corpus distribution does contain some unlabeled examples in its dev-sets, but those seem to have been removed in the Hugging Face distribution. As a result, the value of the filter_unlabeled parameter has no effect for mnli.
Let's look at a specific example
Step12: As you can see, there are three versions of the premise and hypothesis sentences
Step13: The binary parses lack node labels; so that we can use nltk.tree.Tree with them, the label X is added to all of them
Step14: Here's the full parse tree with syntactic categories
Step15: The leaves of either tree are tokenized versions of them
Step16: Annotated MultiNLI subsets
MultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
Step17: Adversarial NLI
Adversarial NLI properties
The ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this
Step18: For ANLI, we have a lot of options. Because it is distributed in three rounds, and the rounds can be used independently or pooled
Step19: Here is the fully pooled train setting
Step20: The above figures correspond to those in Table 2 of the paper.
Here is a summary of what NLIExample instances offer for this corpus | Python Code:
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2022"
Explanation: Natural language inference: task and datasets
End of explanation
import nli
import os
import pandas as pd
import random
from datasets import load_dataset
DATA_HOME = os.path.join("data", "nlidata")
ANNOTATIONS_HOME = os.path.join(DATA_HOME, "multinli_1.0_annotations")
Explanation: Contents
Overview
Our version of the task
Primary resources
Set-up
SNLI
SNLI properties
Working with SNLI
MultiNLI
MultiNLI properties
Working with MultiNLI
Annotated MultiNLI subsets
Adversarial NLI
Adversarial NLI properties
Working with Adversarial NLI
Other NLI datasets
Overview
Natural Language Inference (NLI) is the task of predicting the logical relationships between words, phrases, sentences, (paragraphs, documents, ...). Such relationships are crucial for all kinds of reasoning in natural language: arguing, debating, problem solving, summarization, and so forth.
Dagan et al. (2006), one of the foundational papers on NLI (also called Recognizing Textual Entailment; RTE), make a case for the generality of this task in NLU:
It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. For example, a QA system has to identify texts that entail a hypothesized answer. [...] Similarly, for certain Information Retrieval queries the combination of semantic concepts and relations denoted by the query should be entailed from relevant retrieved documents. [...] In multi-document summarization a redundant sentence, to be omitted from the summary, should be entailed from other sentences in the summary. And in MT evaluation a correct translation should be semantically equivalent to the gold standard translation, and thus both translations should entail each other. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition "engines" which may provide useful generic modules across applications.
Our version of the task
Our NLI data will look like this:
| Premise | Relation | Hypothesis |
|:--------|:---------------:|:------------|
| turtle | contradiction | linguist |
| A turtled danced | entails | A turtle moved |
| Every reptile danced | entails | Every turtle moved |
| Some turtles walk | contradicts | No turtles move |
| James Byron Dean refused to move without blue jeans | entails | James Dean didn't dance without pants |
In the word-entailment bakeoff, we study a special case of this where the premise and hypothesis are single words. This notebook begins to introduce the problem of NLI more fully.
Primary resources
We're going to focus on three NLI corpora:
The Stanford Natural Language Inference corpus (SNLI)
The Multi-Genre NLI Corpus (MultiNLI)
The Adversarial NLI Corpus (ANLI)
The first was collected by a group at Stanford, led by Sam Bowman, and the second was collected by a group at NYU, also led by Sam Bowman. Both have the same format and were crowdsourced using the same basic methods. However, SNLI is entirely focused on image captions, whereas MultiNLI includes a greater range of contexts.
The third corpus was collected by a group at Facebook AI and UNC Chapel Hill. The team's goal was to address the fact that datasets like SNLI and MultiNLI seem to be artificially easy – models trained on them can often surpass stated human performance levels but still fail on examples that are simple and intuitive for people. The dataset is "Adversarial" because the annotators were asked to try to construct examples that fooled strong models but still passed muster with other human readers.
This notebook presents tools for working with these corpora. The second notebook in the unit concerns models of NLI.
Set-up
As usual, you need to be fully set up to work with the CS224u repository.
If you haven't already, download the course data, unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change DATA_HOME below.)
End of explanation
snli = load_dataset("snli")
Explanation: SNLI
SNLI properties
For SNLI (and MultiNLI), MTurk annotators were presented with premise sentences and asked to produce new sentences that entailed, contradicted, or were neutral with respect to the premise. A subset of the examples were then validated by an additional four MTurk annotators.
All the premises are captions from the Flickr30K corpus.
Some of the sentences rather depressingly reflect stereotypes (Rudinger et al. 2017).
550,152 train examples; 10K dev; 10K test
Mean length in tokens:
Premise: 14.1
Hypothesis: 8.3
Clause-types
Premise S-rooted: 74%
Hypothesis S-rooted: 88.9%
Vocab size: 37,026
56,951 examples validated by four additional annotators
58.3% examples with unanimous gold label
91.2% of gold labels match the author's label
0.70 overall Fleiss kappa
Top scores currently around 90%.
Working with SNLI
End of explanation
snli.keys()
Explanation: The dataset has three splits:
End of explanation
nli.NLIReader(snli['train'], samp_percentage=0.10, random_state=42)
Explanation: The class nli.NLIReader is used by all the readers discussed here.
Because the datasets are so large, it is often useful to be able to randomly sample from them. This is supported with the keyword argument samp_percentage. For example, the following samples approximately 10% of the examples from the SNLI training set:
End of explanation
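# Added sketch: peek at a few examples from a small random sample of the training set,
# using the example attributes described below (label, premise, hypothesis).
sample_reader = nli.NLIReader(snli['train'], samp_percentage=0.01, random_state=42)
for i, ex in enumerate(sample_reader.read()):
    print(ex.label, '|', ex.premise, '=>', ex.hypothesis)
    if i >= 2:
        break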
snli_labels = pd.Series(
[ex.label for ex in nli.NLIReader(
snli['train'], filter_unlabeled=False).read()])
snli_labels.value_counts()
Explanation: The precise number of examples will vary somewhat because of the way the sampling is done. (Here, we choose efficiency over precision in the number of cases we return; see the implementation for details.)
All of the readers have a read method that yields NLIExample example instances. For SNLI, these have the following attributes:
label: str
premise: str
hypothesis: str
Note: the original SNLI distribution includes a number of other valuable fields, including identifiers for the original caption in the Flickr 30k corpus, parses for the examples, and annotation distributions for the validation set. Perhaps someone could update the dataset on Hugging Face to provide access to this information!
The following creates the label distribution for the training data:
End of explanation
snli_iterator = iter(nli.NLIReader(snli['train']).read())
snli_ex = next(snli_iterator)
print(snli_ex)
Explanation: Use filter_unlabeled=True (the default) to silently drop the examples for which gold_label is -.
Let's look at a specific example in some detail:
End of explanation
mnli = load_dataset("multi_nli")
Explanation: MultiNLI
MultiNLI properties
Train premises drawn from five genres:
Fiction: works from 1912–2010 spanning many genres
Government: reports, letters, speeches, etc., from government websites
The Slate website
Telephone: the Switchboard corpus
Travel: Berlitz travel guides
Additional genres just for dev and test (the mismatched condition):
The 9/11 report
Face-to-face: The Charlotte Narrative and Conversation Collection
Fundraising letters
Non-fiction from Oxford University Press
Verbatim articles about linguistics
392,702 train examples; 20K dev; 20K test
19,647 examples validated by four additional annotators
58.2% examples with unanimous gold label
92.6% of gold labels match the author's label
Test-set labels available as a Kaggle competition.
Top matched scores currently around 0.81.
Top mismatched scores currently around 0.83.
Working with MultiNLI
End of explanation
mnli.keys()
Explanation: For MultiNLI, we have the following splits:
train
validation_matched
validation_mismatched
End of explanation
nli.NLIReader(mnli['train'], samp_percentage=0.10, random_state=42)
Explanation: The MultiNLI test sets are available on Kaggle (matched version and mismatched version).
The interface to these is the same as for the SNLI readers:
End of explanation
multinli_labels = pd.Series(
[ex.label for ex in nli.NLIReader(
mnli['validation_mismatched'], filter_unlabeled=False).read()])
multinli_labels.value_counts()
Explanation: The NLIExample instances for MultiNLI have nearly all the attributes that SNLI is supposed to have!
promptID: str
label: str
pairID: str
premise: str
premise_binary_parse: nltk.tree.Tree
premise_parse: nltk.tree.Tree
hypothesis: str
hypothesis_binary_parse: nltk.tree.Tree
hypothesis_parse: nltk.tree.Tree
The only field that is unfortunately missing is annotator_labels, which gives all five labels chosen by annotators for the two dev splits. Perhaps someone could create a PR to bring these fields back in!
The full label distribution for the train split:
End of explanation
mnli_iterator = iter(nli.NLIReader(mnli['train']).read())
mnli_ex = next(mnli_iterator)
Explanation: No examples in the MultiNLI train set lack a gold label. The original corpus distribution does contain some unlabeled examples in its dev-sets, but those seem to have been removed in the Hugging Face distribution. As a result, the value of the filter_unlabeled parameter has no effect for mnli.
Let's look at a specific example:
End of explanation
mnli_ex.premise
Explanation: As you can see, there are three versions of the premise and hypothesis sentences:
Regular string representations of the data
Unlabeled binary parses
Labeled parses
End of explanation
mnli_ex.premise_binary_parse
Explanation: The binary parses lack node labels; so that we can use nltk.tree.Tree with them, the label X is added to all of them:
End of explanation
mnli_ex.premise_parse
Explanation: Here's the full parse tree with syntactic categories:
End of explanation
mnli_ex.premise_parse.leaves()
Explanation: The leaves of either tree are tokenized versions of them:
End of explanation
matched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_matched_annotations.txt")
mismatched_ann_filename = os.path.join(
ANNOTATIONS_HOME,
"multinli_1.0_mismatched_annotations.txt")
def view_random_example(annotations, random_state=42):
random.seed(random_state)
ann_ex = random.choice(list(annotations.items()))
pairid, ann_ex = ann_ex
ex = ann_ex['example']
print("pairID: {}".format(pairid))
print(ann_ex['annotations'])
print(ex.premise)
print(ex.label)
print(ex.hypothesis)
matched_ann = nli.read_annotated_subset(
matched_ann_filename,
mnli['validation_matched'])
view_random_example(matched_ann, random_state=23)
Explanation: Annotated MultiNLI subsets
MultiNLI includes additional annotations for a subset of the dev examples. The goal is to help people understand how well their models are doing on crucial NLI-related linguistic phenomena.
End of explanation
anli = load_dataset("anli")
Explanation: Adversarial NLI
Adversarial NLI properties
The ANLI dataset was created in response to evidence that datasets like SNLI and MultiNLI are artificially easy for modern machine learning models to solve. The team sought to tackle this weakness head-on, by designing a crowdsourcing task in which annotators were explicitly trying to confuse state-of-the-art models. In broad outline, the task worked like this:
The crowdworker is presented with a premise (context) text and asked to construct a hypothesis sentence that entails, contradicts, or is neutral with respect to that premise. (The actual wording is more informal, along the lines of the SNLI/MultiNLI task).
The crowdworker submits a hypothesis text.
The premise/hypothesis pair is fed to a trained model that makes a prediction about the correct NLI label.
If the model's prediction is correct, then the crowdworker loops back to step 2 to try again. If the model's prediction is incorrect, then the example is validated by different crowdworkers.
The dataset consists of three rounds, each involving a different model and a different set of sources for the premise texts:
| Round | Model | Training data | Context sources |
|:------:|:------------|:---------------------------|:-----------------|
| 1 | BERT-large | SNLI + MultiNLI | Wikipedia |
| 2 | RoBERTa | SNLI + MultiNLI + NLI-FEVER + Round 1 | Wikipedia |
| 3 | RoBERTa | SNLI + MultiNLI + NLI-FEVER + Round 2 | Various |
Each round has train/dev/test splits. The sizes of these splits and their label distributions are calculated just below.
The project README seeks to establish some rules for how the rounds can be used for training and evaluation.
Working with Adversarial NLI
End of explanation
anli.keys()
Explanation: For ANLI, we have a lot of options. Because it is distributed in three rounds, and the rounds can be used independently or pooled:
End of explanation
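# Added sketch: a single round can also be read on its own, with the same reader interface
# used above for SNLI and MultiNLI (the sampling keyword is assumed to work here as well).
nli.NLIReader(anli['train_r1'], samp_percentage=0.10, random_state=42)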
anli_pooled_reader = nli.NLIReader(
anli['train_r1'], anli['train_r2'], anli['train_r3'],
filter_unlabeled=False)
anli_pooled_labels = pd.Series([ex.label for ex in anli_pooled_reader.read()])
anli_pooled_labels.value_counts()
for rounds in ((1,), (2,), (3,), (1,2,3)):
splits = [anli['train_r{}'.format(i)] for i in rounds]
count = len(list(nli.NLIReader(*splits).read()))
print("R{0:}: {1:,}".format(rounds, count))
Explanation: Here is the fully pooled train setting:
End of explanation
anli_ex = next(iter(nli.NLIReader(anli['dev_r3']).read()))
anli_ex
Explanation: The above figures correspond to those in Table 2 of the paper.
Here is a summary of what NLIExample instances offer for this corpus:
uid: a unique identifier; akin to pairID in SNLI/MultiNLI
premise: the premise; corresponds to sentence1 in SNLI/MultiNLI
hypothesis: the hypothesis; corresponds to sentence2 in SNLI/MultiNLI
label: the gold label; corresponds to gold_label in SNLI/MultiNLI
reason: a crowdworker's free-text hypothesis about why the model made an incorrect prediction for the current context/hypothesis pair
The ANLI distribution contains additional fields that are unfortunately left out of the Hugging Face distribution:
model_label: the label predicted by the model used in the current round
emturk: for dev (and test), this is True if the annotator contributed only dev (test) examples, else False; in turn, it is False for all train examples.
genre: the source for the context text
tag: information about the round and train/dev/test classification
As with the other datasets, it would be a wonderful service to the field to improve the interface!
End of explanation |
6,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a CNN
In this notebook we'll rebuild the network presented in <a href="http
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
Step2: Implementing LeNet-5
Implement the LeNet-5 neural network architecture.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1
Step3: Training pipeline
Step4: Training the model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step5: Evaluation of the model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set. | Python Code:
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.layers import flatten
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: Building a CNN
In this notebook we'll rebuild the network presented in <a href="http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf">this paper</a> called LeNet developed by Yann LeCun.
<img src="images/lenet.png">
Using TensorFlow's built-in tutorial data, we'll do some preprocessing first, then define the network layers, and ultimately train and test a Convolutional Neural Network.
End of explanation
# basic preprocessing
# padding images to match expected input
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
# shuffling data
X_train, y_train = shuffle(X_train, y_train)
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
End of explanation
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
# TODO: Activation.
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
# TODO: Layer 2: Convolutional. Output = 10x10x16.
# TODO: Activation.
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
# TODO: Flatten. Input = 5x5x16. Output = 400.
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
# TODO: Activation.
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
# TODO: Activation.
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
return logits
Explanation: Implementing LeNet-5
Implement the LeNet-5 neural network architecture.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the final fully connected layer (the logits).
End of explanation
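# One possible completion of the TODO skeleton above (an added sketch, not the only valid
# solution): 5x5 valid-padding convolutions, 2x2 max pooling, and ReLU activations.
def LeNet(x):
    mu = 0
    sigma = 0.1
    # Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean=mu, stddev=sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
    # Activation.
    conv1 = tf.nn.relu(conv1)
    # Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    # Activation.
    conv2 = tf.nn.relu(conv2)
    # Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    # Flatten. Input = 5x5x16. Output = 400.
    fc0 = flatten(conv2)
    # Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean=mu, stddev=sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1 = tf.nn.relu(tf.matmul(fc0, fc1_W) + fc1_b)
    # Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean=mu, stddev=sigma))
    fc2_b = tf.Variable(tf.zeros(84))
    fc2 = tf.nn.relu(tf.matmul(fc1, fc2_W) + fc2_b)
    # Layer 5: Fully Connected (Logits). Input = 84. Output = 10.
    fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean=mu, stddev=sigma))
    fc3_b = tf.Variable(tf.zeros(10))
    logits = tf.matmul(fc2, fc3_W) + fc3_b
    return logits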
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
learning_rate = 0.001
epochs = 10
batch_size = 128
# the model you defined
logits = LeNet(x)
# define the evaluation functions
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, batch_size):
batch_x, batch_y = X_data[offset:offset + batch_size], y_data[offset:offset + batch_size]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Training pipeline
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(epochs):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, batch_size):
end = offset + batch_size
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
Explanation: Training the model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluation of the model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
End of explanation |
6,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecting Changes in Sentinel-1 Imagery (Part 2)
Author
Step1: Datasets and Python modules
One dataset will be used in the tutorial
Step2: And to make use of interactive graphics, we import the folium package
Step3: Part 2. Hypothesis testing
We continue from Part 1 of the Tutorial with the area of interest aoi covering the Frankfurt International Airport and a subset aoi_sub consisting of uniform pixels within a forested region.
Step4: This time we filter the S1 archive to get an image collection consisting of two images acquired in the month of August, 2020. Because we are interested in change detection, it is essential that the local incidence angles be the same in both images. So now we specify both the orbit pass (ASCENDING) as well the relative orbit number (15)
Step5: Here are the acquisition times in the collection, formatted with Python's time module
Step6: A ratio image
Let's select the first two images and extract the VV bands, clipping them to aoi_sub,
Step7: Now we'll build the ratio of the VV bands and display it
Step8: As in the first part of the Tutorial, standard GEE reducers can be used to calculate a histogram, mean and variance of the ratio image
Step9: Here is a plot of the (normalized) histogram using numpy and matplotlib
Step10: This looks a bit like the gamma distribution we met in Part 1 but is in fact an F probability distribution. The F distribution is defined as the ratio of two chi square distributions, see Eq. (1.12), with $m_1$ and $m_2$ degrees of freedom. The above histogram is an $F$ distribution with $m_1=2m$ and $m_2=2m$ degrees of freedom and is given by
$$
p_{f;2m,2m}(x) = {\Gamma(2m)\over \Gamma(m)^2} x^{m-1}(1+x)^{-2m},
$$
$$
\quad {\rm mean}(x) = {m\over m-1},\tag{2.1}
$$
$$
\quad {\rm var}(x) = {m(2m-1)\over (m-1)^2 (m-2)}
$$
with parameter $m = 5$. We can see this empirically by overlaying the distribution onto the histogram with the help of scipy.stats.f. The histogram bucket widths are 0.01 so we have to divide by 100
Step11: Checking the mean and variance, we get approximate agreement
Step12: So what is so special about this distribution? When looking for changes between two co-registered Sentinel-1 images acquired at different times, it might seem natural to subtract one from the other and then examine the difference, much as we would do for instance with visual/infrared ground reflectance images. In the case of SAR intensity images this is not a good idea. In the difference of two uncorrelated multilook images $\langle s_1\rangle$ and $\langle s_2\rangle$ the variances add together and, from Eq. (1.21) in the first part of the Tutorial,
$$
{\rm var}(\langle s_1\rangle-\langle s_2\rangle) = {a_1^2+a_2^2\over m}, \tag{2.4}
$$
where $a_1$ and $a_2$ are mean intensities. So difference pixels in bright areas will have a higher variance than difference pixels in darker areas. It is not possible to set a reliable threshold to determine with a given confidence where change has occurred.
It turns out that the F distributed ratio of the two images which we looked at above is much more informative. For each pixel position in the two images, the quotient $\langle s_1\rangle / \langle s_2\rangle$ is a likelihood ratio test statistic for deciding whether or not a change has occurred between the two acquisition dates at that position. We will explain what this means below. Here for now is the ratio of the two Frankfurt Airport images, this time within the complete aoi
Step13: We might guess that the bright pixels here are significant changes, for instance due to aircraft movements on the tarmac or vehicles moving on the highway. Of course ''significant'' doesn't necessarily imply ''interesting''. We already know Frankfurt has a busy airport and that a German Autobahn is always crowded. The question is, how significant are the changes in the statistical sense? Let's now try to answer that question.
Statistical testing
A statistical hypothesis is a conjecture about the distributions of one or more measured variables. It might, for instance, be an assertion about the mean of a distribution, or about the equivalence of the variances of two different distributions. We distinguish between simple hypotheses, for which the distributions are completely specified, for example
Step14: Most changes are within the airport or on the Autobahn. Barge movements on the Main River (upper left hand corner) are also signaled as significant changes. Note that the 'red' changes (significant increases in intensity) do not show up in the 'ratio' overlay, which displays $s_1/s_2$.
Bivariate change detection
Rather than analyzing the VV and VH bands individually, it would make more sense to treat them together, and that is what we will now do. It is convenient to work with the covariance matrix form for measured intensities that we introduced in Part 1, see Eq. (1.6a). Again with the aim of keeping the notation simple, define
$$
\pmatrix{ s_i & 0\cr 0 & r_i} = \pmatrix{\langle|S_{vv}|^2\rangle_i & 0 \cr 0 & \langle|S_{vh}|^2\rangle_i}, \quad {\rm with\ means}\quad a_i = \langle|S^{a_i}_{vv}|^2\rangle, \quad b_i = \langle|S^{b_i}_{vh}|^2\rangle \tag{2.13}
$$
for the two acquisition times $t_i,\ i=1,2$.
Under $H_0$ we have $a_1=a_2=a$ and $b_1=b_2=b$. Assuming independence of $s_i$ and $r_i$, the likelihood function is the product of the four gamma distributions
$$
L_0(a,b) = p(s_1\mid a)p(r_1\mid b)p(s_2\mid a)p(r_2\mid b).
$$
Under $H_1$,
$$
L_1(a_1,b_1,a_2,b_2) = p(s_1\mid a_1)p(r_1\mid b_1)p(s_2\mid a_2)p(r_2\mid b_2).
$$
With maximum likelihood estimates under $H_0$
$$
\hat a = (s_1+s_2)/2\quad {\rm and}\quad \hat b = (r_1+r_2)/2
$$
for the parameters and some simple algebra, we get
$$
L_0(\hat a,\hat b) = {(2m)^{4m}\over (s_1+s_2)^{2m}(r_1+r_2)^{2m}\Gamma(m)^4}s_1r_1s_2r_2e^{-4m}. \tag{2.14}
$$
Similarly with $\hat a_1=s_1,\ \hat b_1=r_1,\ \hat a_2=s_2,\ \hat b_2=r_2$, we calculate
$$
L_1(\hat a_1,\hat b_1,\hat a_2,\hat b_2) = {m^{4m}\over s_1r_1s_2r_2}e^{-4m}.
$$
The likelihood ratio test statistic is then
$$
Q = {L_0(\hat a,\hat b)\over L_1(\hat a_1,\hat b_1,\hat a_2,\hat b_2)}={2^4(s_1r_1s_2r_2)^m\over (s_1+s_2)^{2m}(r_1+r_2)^{2m}}.
$$
Writing this in terms of the covariance matrix representation,
$$
c_i = \pmatrix{s_i & 0\cr 0 & r_i},\quad i=1,2,
$$
we derive, finally, the likelihood ratio test
$$
Q = \left[2^4\,{|c_1|\, |c_2|\over |c_1+c_2|^2 }\right]^m \le k, \tag{2.15}
$$
where $|\cdot|$ indicates the matrix determinant, $|c_i|=s_ir_i$.
So far so good. But in order to determine P values, we need the probability distribution of $Q$. This time we have no idea how to obtain it. Here again, statistical theory comes to our rescue.
Let $\Theta$ be the parameter space for the LRT. In our example it is
$$
\Theta = \{ a_1,b_1,a_2,b_2\}
$$
and has $d=4$ dimensions. Under the null hypothesis the parameter space is restricted by the conditions $a=a_1=a_2$ and $b=b_1=b_2$ to
$$
\Theta_0 = \{ a,b\}
$$
with $d_0=2$ dimensions. According to Wilks' Theorem, as the number of measurements determining the LRT statistic $Q$ approaches $\infty$, the test statistic $-2\log Q$ approaches a chi square distribution with $d-d_0=2$ degrees of freedom. (Recall that, in order to determine the matrices $c_1$ and $c_2$, five individual measurements were averaged or multi-looked.) So rather than working with $Q$ directly, we use $-2\log Q$ instead and hope that Wilks' Theorem is a good enough approximation for our case.
In order to check if this is so, we just have to program
$$
-2\log Q = (\log{|c_1|}+\log{|c_2|}-2\log{|c_1+c_2|}+4\log{2})(-2m)
$$
in GEE-ese
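One way this might look is sketched here (a sketch only; it assumes im1 and im2 hold the two-band (VV, VH) multilook intensity images for the two acquisitions):
def det(im):
    # determinant of the diagonal 2x2 covariance matrix image: |c| = VV * VH
    return im.expression('b(0) * b(1)')
m = 5
m2logQ = (det(im1).log().add(det(im2).log())
          .subtract(det(im1.add(im2)).log().multiply(2))
          .add(ee.Number(2).log().multiply(4))
          .multiply(-2 * m))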
Step15: and then plot its histogram, comparing it with the chi square distribution scipy.stats.chi2.pdf() with two degrees of freedom
Step16: Looks pretty good. Note now that a small value of the LRT $Q$ in Eq. (2.15) corresponds to a large value of $-2\log{Q}$. Therefore the P value for a measurement $q$ is now the probability of getting the value $-2\log{q}$
or higher,
$$
P = {\rm Prob}(-2\log{Q} \ge -2\log{q}) = 1 - {\rm Prob}(-2\log{Q} < -2\log{q}).
$$
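As a quick numerical illustration (added here; the observed value is made up), the P value of a hypothetical measurement $-2\log q = 9.2$ is obtained directly from scipy:
from scipy.stats import chi2
print(1 - chi2.cdf(9.2, df=2))   # approximately 0.01, i.e. significant at the 1% level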
So let's try out our bivariate change detection procedure, this time on an agricultural scene where we expect to see larger regions of change.
Step17: This is a mixed agricultural/forest area in southern Manitoba, Canada. We'll gather two images, one from the beginning of August and one from the beginning of September, 2018. A lot of harvesting takes place in this interval, so we expect some extensive changes.
Step18: Here are the acquisition times
Step19: Fortunately it is possible to map the chi square cumulative distribution function over an ee.Image() so that a P value image can be calculated directly. This wasn't possible in the single band case, as the F cumulative distribution is not available on the GEE. Here are the P values
Step20: The uniformly dark areas correspond to small or vanishing P values and signify change. The bright areas correspond to no change. Why they are not uniformly bright will be explained below. Now we set a significance threshold of $\alpha=0.01$ and display the significant changes, whereby 1% of them will be false positives. For reference we also show the 2018 Canada AAFC Annual Crop Inventory map, which is available as a GEE collection
Step21: The major crops in the scene are soybeans (dark brown), oats (light brown), canola (light green), corn (light yellow) and winter wheat (dark gray). The wooded areas exhibit little change, while canola has evidently been extensively harvested in the interval.
A note on P values
Because small P values are indicative of change, it is tempting to say that, the larger the P value, the higher the probability of no change. Or more explicitly, the P value is itself the no change probability. Let's see why this is false. Below we choose a wooded area of the agricultural scene where few significant changes are to be expected and use it to subset the P value image. Then we plot the histogram of the subset
Step22: So the P values of no-change measurements are uniformly distributed over $[0, 1]$ (the excess of small P values at the left can be ascribed to genuine changes within the polygon). A large P value is no more indicative of no change than a small one. Of course it has to be this way. When, for example, we set a significance level of 5%, then the fraction of false positives, i.e., the fraction of P values smaller than 0.05 given $H_0$, must also be 5%. This accounts for the noisy appearance of the P value image in the no-change regions.
Change direction
Step23: Now we display the changes, with positive definite red, negative definite blue, and indefinite yellow | Python Code:
import ee
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
Explanation: Detecting Changes in Sentinel-1 Imagery (Part 2)
Author: mortcanty
Run me first
Run the following cell to initialize the API. The output will contain instructions on how to grant this notebook access to Earth Engine using your account.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import norm, gamma, f, chi2
import IPython.display as disp
%matplotlib inline
Explanation: Datasets and Python modules
One dataset will be used in the tutorial:
COPERNICUS/S1_GRD_FLOAT
Sentinel-1 ground range detected images
The following cell imports some python modules which we will be using as we go along and enables inline graphics.
End of explanation
# Import the Folium library.
import folium
# Define a method for displaying Earth Engine image tiles to folium map.
def add_ee_layer(self, ee_image_object, vis_params, name):
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles = map_id_dict['tile_fetcher'].url_format,
attr = 'Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
name = name,
overlay = True,
control = True
).add_to(self)
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
Explanation: And to make use of interactive graphics, we import the folium package:
End of explanation
geoJSON = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
8.473892211914062,
49.98081240937428
],
[
8.658599853515625,
49.98081240937428
],
[
8.658599853515625,
50.06066538593667
],
[
8.473892211914062,
50.06066538593667
],
[
8.473892211914062,
49.98081240937428
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi = ee.Geometry.Polygon(coords)
geoJSON = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
8.534317016601562,
50.021637833966786
],
[
8.530540466308594,
49.99780882512238
],
[
8.564186096191406,
50.00663576154257
],
[
8.578605651855469,
50.019431940583104
],
[
8.534317016601562,
50.021637833966786
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi_sub = ee.Geometry.Polygon(coords)
Explanation: Part 2. Hypothesis testing
We continue from Part 1 of the Tutorial with the area of interest aoi covering the Frankfurt International Airport and a subset aoi_sub consisting of uniform pixels within a forested region.
End of explanation
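# Added sketch: a quick visual check that the polygons are where we expect them
# (geoJSON currently holds the aoi_sub feature collection defined above).
check_location = aoi.centroid().coordinates().getInfo()[::-1]
check_map = folium.Map(location=check_location, zoom_start=11)
folium.GeoJson(geoJSON, name='aoi_sub').add_to(check_map)
display(check_map)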
im_coll = (ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')
.filterBounds(aoi)
.filterDate(ee.Date('2020-08-01'),ee.Date('2020-08-31'))
.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
.filter(ee.Filter.eq('relativeOrbitNumber_start', 15))
.sort('system:time_start'))
Explanation: This time we filter the S1 archive to get an image collection consisting of two images acquired in the month of August, 2020. Because we are interested in change detection, it is essential that the local incidence angles be the same in both images. So now we specify both the orbit pass (ASCENDING) as well the relative orbit number (15):
End of explanation
import time
acq_times = im_coll.aggregate_array('system:time_start').getInfo()
[time.strftime('%x', time.gmtime(acq_time/1000)) for acq_time in acq_times]
Explanation: Here are the acquisition times in the collection, formatted with Python's time module:
End of explanation
im_list = im_coll.toList(im_coll.size())
im1 = ee.Image(im_list.get(0)).select('VV').clip(aoi_sub)
im2 = ee.Image(im_list.get(1)).select('VV').clip(aoi_sub)
Explanation: A ratio image
Let's select the first two images and extract the VV bands, clipping them to aoi_sub,
End of explanation
ratio = im1.divide(im2)
url = ratio.getThumbURL({'min': 0, 'max': 10})
disp.Image(url=url, width=800)
Explanation: Now we'll build the ratio of the VV bands and display it
End of explanation
hist = ratio.reduceRegion(ee.Reducer.fixedHistogram(0, 5, 500), aoi_sub).get('VV').getInfo()
mean = ratio.reduceRegion(ee.Reducer.mean(), aoi_sub).get('VV').getInfo()
variance = ratio.reduceRegion(ee.Reducer.variance(), aoi_sub).get('VV').getInfo()
Explanation: As in the first part of the Tutorial, standard GEE reducers can be used to calculate a histogram, mean and variance of the ratio image:
End of explanation
a = np.array(hist)
x = a[:, 0]
y = a[:, 1] / np.sum(a[:, 1])
plt.grid()
plt.plot(x, y, '.')
plt.show()
Explanation: Here is a plot of the (normalized) histogram using numpy and matplotlib:
End of explanation
m = 5
plt.grid()
plt.plot(x, y, '.', label='data')
plt.plot(x, f.pdf(x, 2*m, 2*m) / 100, '-r', label='F-dist')
plt.legend()
plt.show()
Explanation: This looks a bit like the gamma distribution we met in Part 1 but is in fact an F probability distribution. The F distribution is defined as the ratio of two chi square distributions, see Eq. (1.12), with $m_1$ and $m_2$ degrees of freedom. The above histogram is an $F$ distribution with $m_1=2m$ and $m_2=2m$ degrees of freedom and is given by
$$
p_{f;2m,2m}(x) = {\Gamma(2m)\over \Gamma(m)^2} x^{m-1}(1+x)^{-2m},
$$
$$
\quad {\rm mean}(x) = {m\over m-1},\tag{2.1}
$$
$$
\quad {\rm var}(x) = {m(2m-1)\over (m-1)^2 (m-2)}
$$
with parameter $m = 5$. We can see this empirically by overlaying the distribution onto the histogram with the help of scipy.stats.f. The histogram bucket widths are 0.01 so we have to divide by 100:
End of explanation
print(mean, m/(m-1))
print(variance, m*(2*m-1)/(m-1)**2/(m-2))
Explanation: Checking the mean and variance, we get approximate agreement
End of explanation
im1 = ee.Image(im_list.get(0)).select('VV').clip(aoi)
im2 = ee.Image(im_list.get(1)).select('VV').clip(aoi)
ratio = im1.divide(im2)
location = aoi.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(ratio,
{'min': 0, 'max': 20, 'palette': ['black', 'white']}, 'Ratio')
mp.add_child(folium.LayerControl())
display(mp)
Explanation: So what is so special about this distribution? When looking for changes between two co-registered Sentinel-1 images acquired at different times, it might seem natural to subtract one from the other and then examine the difference, much as we would do for instance with visual/infrared ground reflectance images. In the case of SAR intensity images this is not a good idea. In the difference of two uncorrelated multilook images $\langle s_1\rangle$ and $\langle s_2\rangle$ the variances add together and, from Eq. (1.21) in the first part of the Tutorial,
$$
{\rm var}(\langle s_1\rangle-\langle s_2\rangle) = {a_1^2+a_2^2\over m}, \tag{2.4}
$$
where $a_1$ and $a_2$ are mean intensities. So difference pixels in bright areas will have a higher variance than difference pixels in darker areas. It is not possible to set a reliable threshold to determine with a given confidence where change has occurred.
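A quick simulation makes this concrete (a sketch only, with arbitrary intensity values, relying on the numpy and scipy.stats imports above): the variance of the difference grows with the mean intensity, while the spread of the ratio is the same at every intensity level.
# Sketch: m-look intensities are gamma distributed with shape m and scale a/m.
# var(s1 - s2) scales like 2*a**2/m, whereas var(s1/s2) does not depend on a.
m = 5
for a in [0.01, 1.0, 100.0]:
    s1 = gamma.rvs(m, scale=a/m, size=100000)
    s2 = gamma.rvs(m, scale=a/m, size=100000)
    print(a, np.var(s1 - s2), np.var(s1/s2))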
It turns out that the F distributed ratio of the two images which we looked at above is much more informative. For each pixel position in the two images, the quotient $\langle s_1\rangle / \langle s_2\rangle$ is a likelihood ratio test statistic for deciding whether or not a change has occurred between the two acquisition dates at that position. We will explain what this means below. Here for now is the ratio of the two Frankfurt Airport images, this time within the complete aoi:
End of explanation
# Decision threshold alpha/2:
dt = f.ppf(0.0005, 2*m, 2*m)
# LRT statistics.
q1 = im1.divide(im2)
q2 = im2.divide(im1)
# Change map with 0 = no change, 1 = decrease, 2 = increase in intensity.
c_map = im1.multiply(0).where(q2.lt(dt), 1)
c_map = c_map.where(q1.lt(dt), 2)
# Mask no-change pixels.
c_map = c_map.updateMask(c_map.gt(0))
# Display map with red for increase and blue for decrease in intensity.
location = aoi.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(
location=location, tiles='Stamen Toner',
zoom_start=13)
folium.TileLayer('OpenStreetMap').add_to(mp)
mp.add_ee_layer(ratio,
{'min': 0, 'max': 20, 'palette': ['black', 'white']}, 'Ratio')
mp.add_ee_layer(c_map,
{'min': 0, 'max': 2, 'palette': ['black', 'blue', 'red']},
'Change Map')
mp.add_child(folium.LayerControl())
display(mp)
Explanation: We might guess that the bright pixels here are significant changes, for instance due to aircraft movements on the tarmac or vehicles moving on the highway. Of course ''significant'' doesn't necessarily imply ''interesting''. We already know Frankfurt has a busy airport and that a German Autobahn is always crowded. The question is, how significant are the changes in the statistical sense? Let's now try to answer that question.
Statistical testing
A statistical hypothesis is a conjecture about the distributions of one or more measured variables. It might, for instance, be an assertion about the mean of a distribution, or about the equivalence of the variances of two different distributions. We distinguish between simple hypotheses, for which the distributions are completely specified, for example: the mean of a normal distribution with variance $\sigma^2$ is $\mu=0$, and composite hypotheses, for which this is not the case, e.g., the mean is $\mu\ge 0$.
In order to test such assertions on the basis of measured values, it is also necessary to formulate alternative hypotheses. To distinguish these from the original assertions, the latter are traditionally called null hypotheses. Thus we might be interested in testing the simple null hypothesis $\mu = 0$ against the composite alternative hypothesis $\mu\ne 0$. An appropriate combination of measurements for deciding whether or not to reject the null hypothesis in favor of its alternative is referred to as a test statistic, often denoted by the symbol $Q$. An appropriate test procedure will partition the possible test statistics into two subsets: an acceptance region for the null hypothesis and a rejection region. The latter is customarily referred to as the critical region.
Referring to the null hypothesis as $H_0$, there are two kinds of errors which can arise from any test procedure:
$H_0$ may be rejected when in fact it is true. This is called an error of the first kind and the probability that it will occur is denoted $\alpha$.
$H_0$ may be accepted when in fact it is false, which is called an error of the second kind with probability of occurrence $\beta$.
The probability of obtaining a value of the test statistic within the critical region when $H_0$ is true is thus $\alpha$. The probability $\alpha$ is also referred to as the level of significance of the test or the probability of a false positive. It is generally the case that the lower the value of $\alpha$, the higher is the probability $\beta$ of making a second kind error, so there is always a trade-off. (Judge Roy Bean, from the film of the same name, didn't believe in trade-offs. He hanged all defendants regardless of the evidence. His $\beta$ was zero, but his $\alpha$ was rather large.)
At any rate, traditionally, significance levels of 0.01 or 0.05 are often used.
The P value
Suppose we determine the test statistic to have the value $q$. The P value is defined as the probability of getting a test statistic $Q$ that is at least as extreme as the one observed given the null hypothesis. What is meant by "extreme" depends on how we choose the test statistic. If this probability is small, then the null hypothesis is unlikely. If it is smaller than the prescribed significance level $\alpha$, then the null hypothesis is rejected.
Likelihood Functions
The $m$-look VV intensity bands of the two Sentinel-1 images that we took from the archive have pixel values
$$
\langle s\rangle=\langle|S_{vv}|^2\rangle, \quad {\rm with\ mean}\ a=|S^a_{vv}|^2,
$$
and are gamma distributed according to Eq. (1.1), with parameters $\alpha=m$ and $\beta = a/m$. To make the notation a bit simpler, let's write $s = \langle s \rangle$, so that the multi-look averaging is understood.
Using subscript $i=1,2$ to refer to the two images, the probability densities are
$$
p(s_i| a_i) = {1 \over (a_i/m)^m\Gamma(m)}s_i^{m-1}e^{-s_i m/a_i},\quad i=1,2. \tag{2.5}
$$
We've left out the number of looks $m$ on the left hand side, since it is the same for both images.
Now let's formulate a null hypothesis, namely that no change has taken place in the signal strength $a = |S^a_{vv}|^2$ between the two acquisitions, i.e.,
$$
H_0: \quad a_1=a_2 = a
$$
and test it against the alternative hypothesis that a change took place
$$
H_1: \quad a_1\ne a_2.
$$
If the null hypothesis is true, then the so-called likelihood for getting the measured pixel intensities $s_1$ and $s_2$ is defined as the product of the probability densities for that value of $a$,
$$
L_0(a) = p(s_1|a)p(s_2|a) = {1\over(a/m)^{2m}\Gamma(m)^2}(s_1s_2)^{m-1}e^{-(s_1+s_2)m/a}. \tag{2.6}
$$
Taking the product of the probability densities like this is justified by the fact that the measurements $s_1$ and $s_2$ are independent.
The maximum likelihood is obtained by maximizing $L_0(a)$ with respect to $a$,
$$
L_0(\hat a) = p(s_1|\hat a)p(s_2|\hat a), \quad \hat a = \arg\max_a L_0(a).
$$
We can get $\hat a$ simply by solving the equation
$$
{d L_0(a)\over da} = 0
$$
for which we derive the maximum likelihood estimate (an easy exercise)
$$
\hat a = {s_1 + s_2 \over 2}.
$$
Makes sense: the only information we have is $s_1$ and $s_2$, so, if there was no change, our best estimate of the intensity $a$ is to take the average. Thus, substituting this value into Eq. (2.6), the maximum likelihood under $H_0$ is
$$
L_0(\hat a) = {1\over ((s_1+s_2)/2m)^{2m}\Gamma(m)^2}(s_1s_2)^{m-1}e^{-2m}. \tag{2.7}
$$
Similarly, under the alternative hypothesis $H_1$, the maximum likelihood is
$$
L_1(\hat a_1,\hat a_2) = p(s_1|\hat a_1)p(s_2|\hat a_2)\quad \hat a_1, \hat a_2 = \arg\max_{a_1,a_2} L_1(a_1,a_2).
$$
Again, setting derivatives equal to zero, we get for $H_1$
$$
\hat a_1 = s_1, \quad \hat a_2 = s_2,
$$
and the maximum likelihood
$$
L_1(\hat a_1,\hat a_2) = {m^{2m}\over \Gamma(m)^2\, s_1s_2}\, e^{-2m}. \tag{2.8}
$$
The Likelihood Ratio Test
The theory of statistical testing specifies methods for
determining the most appropriate test procedure, one which minimizes the probability $\beta$ of an error of the second kind for a fixed level of significance $\alpha$. Rather than giving a general definition, we state the appropriate test for our case:
We should reject the null hypothesis if the ratio of the two likelihoods satisfies the inequality
$$
Q = {L_0(\hat a)\over L_1(\hat a_1,\hat a_2)} \le k \tag{2.9}
$$
for some appropriately small value of threshold $k$.
This definition simply reflects the fact that, if the null hypothesis is true, the maximum likelihood when $a_1=a_2$ should be close to the maximum likelihood without that restriction, given the measurements $s_1$ and $s_2$. Therefore, if the likelihood ratio is small, (less than or equal to some small value $k$), then $H_0$ should be rejected.
With some (very) simply algebra, Eq. (2.9) evaluates to
$$
Q = \left[2^2 \left( s_1s_2\over (s_1+s_2)^2\right)\right]^m \le k \tag{2.10}
$$
using (2.7) and (2.8). This is the same as saying
$$
{s_1s_2\over (s_1+s_2)^2} \le k'\quad {\rm or}\quad {(s_1+s_2)^2\over s_1s_2}\ge k''\quad {\rm or}\quad {s_1\over s_2}+{s_2\over s_1}\ge k''-2
$$
where $k',k''$ depend on $k$. The last inequality is satisfied if either term is small enough:
$$
{s_1\over s_2} < c_1 \quad {\rm or}\quad {s_2\over s_1} < c_2 \tag{2.11}
$$
again for some appropriate thresholds $c_1$ and $c_2$ which depend on $k''$.
So the ratio image $s_1/s_2$ that we generated above is indeed a Likelihood Ratio Test (LRT) statistic, one of two possible. We'll call it $Q_1 = s_1/s_2$ and the other one $Q_2 = s_2/s_1$. The former tests for a significant increase in intensity between times $t_1$ and $t_2$, the latter for a significant decrease.
Fine, but where does the F distribution come in?
Both $s_1$ and $s_2$ are gamma distributed
$$
p(s\mid a) = {1\over (a/m)^m\Gamma(m)}s^{m-1}e^{-sm/a}.
$$
Let $z = 2sm/a$. Then
$$
p(z\mid a) = p(s\mid a)\left |{ds\over dz}\right | = {1\over (a/m)^m\Gamma(m)}\left({za\over 2m}\right)^{m-1}e^{-z/2}\left({a\over 2m}\right) = {1\over 2^m\Gamma(m)}z^{m-1}e^{-z/2}.
$$
Comparing this with Eq. (1.12) from the first part of the Tutorial, we see that $z$ is chi square distributed with $2m$ degrees of freedom, and therefore so are the variables $2s_1m/a$ and $2s_2m/a$. The quotients $s_1/s_2$ and $s_2/s_1$ are thus ratios of two chi square distributed variables with $2m$ degrees of freedom. They therefore have the F distribution of Eq. (2.1).
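A quick numerical check of this claim (a sketch with an arbitrary intensity value $a$):
# Sketch: if s ~ gamma(shape=m, scale=a/m), then z = 2*s*m/a should be chi square
# with 2m degrees of freedom; compare the first two moments.
m, a = 5, 3.0
z = 2*m/a * gamma.rvs(m, scale=a/m, size=100000)
print(np.mean(z), np.var(z))             # empirical moments
print(chi2.mean(2*m), chi2.var(2*m))     # theoretical values 2m and 4m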
In order to decide the test for $Q_1$, we need the P value for a measurement $q_1$ of the statistic. Recall that this is the probability of getting a result at least as extreme as the one measured under the null hypothesis. So in this case
$$
P_1 = {\rm Prob}(Q_1\le q_1\mid H_0), \tag{2.12}
$$
which we can calculate from the percentiles of the F distribution, Eq. (2.1). Then if $P_1\le \alpha/2$ we reject $H_0$ and conclude with significance $\alpha/2$ that a change occurred. We do the same test for $Q_2$, so that the combined significance is $\alpha$.
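In plain scipy the per-pixel decision might look like the following sketch (the ratio value q1 below is an arbitrary illustrative number, not taken from the imagery):
# Sketch of the decision rule based on Eq. (2.12), for a single pixel with m = 5 looks.
m, alpha = 5, 0.001
q1 = 0.35                        # hypothetical measured ratio s1/s2
P1 = f.cdf(q1, 2*m, 2*m)         # Prob(Q1 <= q1 | H0)
P2 = f.cdf(1/q1, 2*m, 2*m)       # same test applied to Q2 = s2/s1
print(P1, P2, (P1 <= alpha/2) or (P2 <= alpha/2))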
Now we can make a change map for the Frankfurt Airport for the two acquisitions, August 5 and August 11, 2020. We want to see quite large changes associated primarily with airplane and vehicle movements, so we will set the significance generously low to $\alpha = 0.001$. We will also distinguish the direction of change and mask out the no-change pixels:
End of explanation
def det(im):
return im.expression('b(0) * b(1)')
# Number of looks.
m = 5
im1 = ee.Image(im_list.get(0)).select('VV', 'VH').clip(aoi)
im2 = ee.Image(im_list.get(1)).select('VV', 'VH').clip(aoi)
m2logQ = det(im1).log().add(det(im2).log()).subtract(
det(im1.add(im2)).log().multiply(2)).add(4*np.log(2)).multiply(-2*m)
Explanation: Most changes are within the airport or on the Autobahn. Barge movements on the Main River (upper left hand corner) are also signaled as significant changes. Note that the 'red' changes (significant increases in intensity) do not show up in the 'ratio' overlay, which displays $s_1/s_2$.
Bivariate change detection
Rather than analyzing the VV and VH bands individually, it would make more sense to treat them together, and that is what we will now do. It is convenient to work with the covariance matrix form for measured intensities that we introduced in Part 1, see Eq. (1.6a). Again with the aim of keeping the notation simple, define
$$
\pmatrix{ s_i & 0\cr 0 & r_i} = \pmatrix{\langle|S_{vv}|^2\rangle_i & 0 \cr 0 & \langle|S_{vh}|^2\rangle_i}, \quad {\rm with\ means}\quad a_i = \langle|S^{a_i}_{vv}|^2\rangle, \quad b_i = \langle|S^{b_i}_{vh}|^2\rangle \tag{2.13}
$$
for the two acquisition times $t_i,\ i=1,2$.
Under $H_0$ we have $a_1=a_2=a$ and $b_1=b_2=b$. Assuming independence of $s_i$ and $r_i$, the likelihood function is the product of the four gamma distributions
$$
L_0(a,b) = p(s_1\mid a)p(r_1\mid b)p(s_2\mid a)p(r_2\mid b).
$$
Under $H_1$,
$$
L_1(a_1,b_1,a_2,b_2) = p(s_1\mid a_1)p(r_1\mid b_1)p(s_2\mid a_2)p(r_2\mid b_2).
$$
With maximum likelihood estimates under $H_0$
$$
\hat a = (s_1+s_2)/2\quad {\rm and}\quad \hat b = (r_1+r_2)/2
$$
for the parameters and some simple algebra, we get
$$
L_0(\hat a,\hat b) = {(2m)^{4m}\over (s_1+s_2)^{2m}(r_1+r_2)^{2m}\Gamma(m)^4}(s_1r_1s_2r_2)^{m-1}e^{-4m}. \tag{2.14}
$$
Similarly with $\hat a_1=s_1,\ \hat b_1=r_1,\ \hat a_2=s_2,\ \hat b_2=r_2$, we calculate
$$
L_1(\hat a_1,\hat b_1,\hat a_2,\hat b_2) = {m^{4m}\over \Gamma(m)^4\, s_1r_1s_2r_2}e^{-4m}.
$$
The likelihood ratio test statistic is then
$$
Q = {L_0(\hat a,\hat b)\over L_1(\hat a_1,\hat b_1,\hat a_2,\hat b_2)}={2^{4m}(s_1r_1s_2r_2)^m\over (s_1+s_2)^{2m}(r_1+r_2)^{2m}}.
$$
Writing this in terms of the covariance matrix representation,
$$
c_i = \pmatrix{s_i & 0\cr 0 & r_i},\quad i=1,2,
$$
we derive, finally, the likelihood ratio test
$$
Q = \left[2^4\left({|c_1|\,|c_2|\over |c_1+c_2|^2}\right)\right]^m \le k, \tag{2.15}
$$
where $|\cdot|$ indicates the matrix determinant, $|c_i|=s_ir_i$.
So far so good. But in order to determine P values, we need the probability distribution of $Q$. This time we have no idea how to obtain it. Here again, statistical theory comes to our rescue.
Let $\Theta$ be the parameter space for the LRT. In our example it is
$$
\Theta = \{ a_1,b_1,a_2,b_2\}
$$
and has $d=4$ dimensions. Under the null hypothesis the parameter space is restricted by the conditions $a=a_1=a_2$ and $b=b_1=b_2$ to
$$
\Theta_0 = \{ a,b\}
$$
with $d_0=2$ dimensions. According to Wilks' Theorem, as the number of measurements determining the LRT statistic $Q$ approaches $\infty$, the test statistic $-2\log Q$ approaches a chi square distribution with $d-d_0=2$ degrees of freedom. (Recall that, in order to determine the matrices $c_1$ and $c_2$, five individual measurements were averaged or multi-looked.) So rather than working with $Q$ directly, we use $-2\log Q$ instead and hope that Wilks' theorem is a good enough approximation for our case.
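Before checking this on the imagery, a small simulation sketch (with arbitrary intensities and no change between the acquisitions) suggests the approximation is already reasonable for $m=5$ looks:
# Sketch: simulate no-change pixel pairs and compare -2 log Q with chi square(2).
m, n = 5, 100000
a, b = 2.0, 0.5
s1, s2 = gamma.rvs(m, scale=a/m, size=n), gamma.rvs(m, scale=a/m, size=n)
r1, r2 = gamma.rvs(m, scale=b/m, size=n), gamma.rvs(m, scale=b/m, size=n)
m2logQ_sim = -2*m*(np.log(s1*r1) + np.log(s2*r2) - 2*np.log((s1+s2)*(r1+r2)) + 4*np.log(2))
print(np.mean(m2logQ_sim), np.var(m2logQ_sim))   # chi square(2) has mean 2 and variance 4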
In order to check if this is so, we just have to program
$$
-2\log Q = (\log{|c_1|}+\log{|c_2|}-2\log{|c_1+c_2|}+4\log{2})(-2m)
$$
in GEE-ese:
End of explanation
hist = m2logQ.reduceRegion(
ee.Reducer.fixedHistogram(0, 20, 200), aoi).get('VV').getInfo()
a = np.array(hist)
x = a[:, 0]
y = a[:, 1] / np.sum(a[:, 1])
plt.plot(x, y, '.', label='data')
plt.plot(x, chi2.pdf(x, 2)/10, '-r', label='chi square')
plt.legend()
plt.grid()
plt.show()
Explanation: and then plot its histogram, comparing it with the chi square distribution scipy.stats.chi2.pdf() with two degrees of freedom:
End of explanation
geoJSON ={
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
-98.2122802734375,
49.769291532628515
],
[
-98.00559997558594,
49.769291532628515
],
[
-98.00559997558594,
49.88578690918283
],
[
-98.2122802734375,
49.88578690918283
],
[
-98.2122802734375,
49.769291532628515
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi1 = ee.Geometry.Polygon(coords)
Explanation: Looks pretty good. Note now that a small value of the LRT $Q$ in Eq. (2.15) corresponds to a large value of $-2\log{Q}$. Therefore the P value for a measurement $q$ is now the probability of getting the value $-2\log{q}$
or higher,
$$
P = {\rm Prob}(-2\log{Q} \ge -2\log{q}) = 1 - {\rm Prob}(-2\log{Q} < -2\log{q}).
$$
So let's try out our bivariate change detection procedure, this time on an agricultural scene where we expect to see larger regions of change.
End of explanation
im1 = ee.Image(ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')
.filterBounds(aoi1)
.filterDate(ee.Date('2018-08-01'), ee.Date('2018-08-31'))
.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
.filter(ee.Filter.eq('relativeOrbitNumber_start', 136))
.first()
.clip(aoi1))
im2 = ee.Image(ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT').filterBounds(aoi1)
.filterDate(ee.Date('2018-09-01'), ee.Date('2018-09-30'))
.filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))
.filter(ee.Filter.eq('relativeOrbitNumber_start', 136))
.first()
.clip(aoi1))
Explanation: This is a mixed agricultural/forest area in southern Manitoba, Canada. We'll gather two images, one from the beginning of August and one from the beginning of September, 2018. A lot of harvesting takes place in this interval, so we expect some extensive changes.
End of explanation
acq_time = im1.get('system:time_start').getInfo()
print( time.strftime('%x', time.gmtime(acq_time/1000)) )
acq_time = im2.get('system:time_start').getInfo()
print( time.strftime('%x', time.gmtime(acq_time/1000)) )
Explanation: Here are the acquisition times:
End of explanation
def chi2cdf(chi2, df):
''' Chi square cumulative distribution function for df degrees of freedom
using the built-in incomplete gamma function gammainc() '''
return ee.Image(chi2.divide(2)).gammainc(ee.Number(df).divide(2))
# The observed test statistic image -2logq.
m2logq = det(im1).log().add(det(im2).log()).subtract(
det(im1.add(im2)).log().multiply(2)).add(4*np.log(2)).multiply(-2*m)
# The P value image prob(m2logQ > m2logq) = 1 - prob(m2logQ < m2logq).
p_value = ee.Image.constant(1).subtract(chi2cdf(m2logq, 2))
# Project onto map.
location = aoi1.centroid().coordinates().getInfo()[::-1]
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(p_value,
{'min': 0,'max': 1, 'palette': ['black', 'white']}, 'P-value')
mp.add_child(folium.LayerControl())
Explanation: Fortunately it is possible to map the chi square cumulative distribution function over an ee.Image() so that a P value image can be calculated directly. This wasn't possible in the single band case, as the F cumulative distribution is not available on the GEE. Here are the P values:
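As an aside, the identity behind the chi2cdf helper is easy to verify with scipy (a sketch; it assumes GEE's gammainc is, like scipy's, the regularized lower incomplete gamma function):
# Sketch: chi2.cdf(x, df) equals the regularized lower incomplete gamma at (df/2, x/2).
from scipy.special import gammainc
xs = np.linspace(0.1, 20, 5)
print(chi2.cdf(xs, 2))
print(gammainc(2/2, xs/2))   # same values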
End of explanation
c_map = p_value.multiply(0).where(p_value.lt(0.01), 1)
crop2018 = (ee.ImageCollection('AAFC/ACI')
.filter(ee.Filter.date('2018-01-01', '2018-12-01'))
.first()
.clip(aoi1))
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(crop2018, {'min': 0, 'max': 255}, 'crop2018')
mp.add_ee_layer(c_map.updateMask(
c_map.gt(0)), {'min': 0, 'max': 1, 'palette': ['black', 'red']}, 'c_map')
mp.add_child(folium.LayerControl())
Explanation: The uniformly dark areas correspond to small or vanishing P values and signify change. The bright areas correspond to no change. Why they are not uniformly bright will be explained below. Now we set a significance threshold of $\alpha=0.01$ and display the significant changes, so that about 1% of the truly unchanged pixels will show up as false positives. For reference we also show the 2018 Canada AAFC Annual Crop Inventory map, which is available as a GEE collection:
End of explanation
geoJSON ={
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
-98.18550109863281,
49.769735012247885
],
[
-98.13949584960938,
49.769735012247885
],
[
-98.13949584960938,
49.798109268622
],
[
-98.18550109863281,
49.798109268622
],
[
-98.18550109863281,
49.769735012247885
]
]
]
}
}
]
}
coords = geoJSON['features'][0]['geometry']['coordinates']
aoi1_sub = ee.Geometry.Polygon(coords)
hist = p_value.reduceRegion(ee.Reducer.fixedHistogram(0, 1, 100), aoi1_sub).get('constant').getInfo()
a = np.array(hist)
x = a[:,0]
y = a[:,1]/np.sum(a[:,1])
plt.plot(x, y, '.b', label='p-value')
plt.ylim(0, 0.05)
plt.grid()
plt.legend()
plt.show()
Explanation: The major crops in the scene are soybeans (dark brown), oats (light brown), canola (light green), corn (light yellow) and winter wheat (dark gray). The wooded areas exhibit little change, while canola has evidently been extensively harvested in the interval.
A note on P values
Because small P values are indicative of change, it is tempting to say that, the larger the P value, the higher the probability of no change. Or more explicitly, the P value is itself the no change probability. Let's see why this is false. Below we choose a wooded area of the agricultural scene where few significant changes are to be expected and use it to subset the P value image. Then we plot the histogram of the subset:
End of explanation
c_map = p_value.multiply(0).where(p_value.lt(0.01), 1)
diff = im2.subtract(im1)
d_map = c_map.multiply(0) # Initialize the direction map to zero.
d_map = d_map.where(det(diff).gt(0), 2) # All pos or neg def diffs are now labeled 2.
d_map = d_map.where(diff.select(0).gt(0), 3) # Re-label pos def (and label some indef) to 3.
d_map = d_map.where(det(diff).lt(0), 1) # Label all indef to 1.
c_map = c_map.multiply(d_map) # Re-label the c_map, 0*X = 0, 1*1 = 1, 1*2= 2, 1*3 = 3.
Explanation: So the P values of no-change measurements are uniformly distributed over $[0, 1]$ (the excess of small P values at the left can be ascribed to genuine changes within the polygon). A large P value is no more indicative of no change than a small one. Of course it has to be this way. When, for example, we set a significance level of 5%, then the fraction of false positives, i.e., the fraction of P values smaller than 0.05 given $H_0$, must also be 5%. This accounts for the noisy appearance of the P value image in the no-change regions.
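The same behaviour is easy to reproduce synthetically (a sketch): if the test statistic really follows $\chi^2(2)$ under $H_0$, then one minus its CDF, i.e. the P value, is uniform on $[0, 1]$.
# Sketch: P values computed from chi square(2) draws (i.e. under H0) have a flat histogram.
stat = chi2.rvs(2, size=100000)
pvals = 1 - chi2.cdf(stat, 2)
plt.hist(pvals, bins=50, density=True)
plt.show()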
Change direction: the Loewner order
What about the direction of change in the bivariate case? This is less clear, as we can have the situation where the VV intensity gets larger and the VH smaller from time $t_1$ to $t_2$, or vice versa. When we are dealing with the C2 covariance matrix representation of SAR imagery, see Eq. (2.13), a characterization of change can be made as follows (Nielsen et al. (2019)): For each significantly changed pixel, we determine the difference $C2_{t_2}-C2_{t_1}$ and examine its so-called definiteness, also known as the Loewner order of the change. A matrix is said to be positive definite if all of its eigenvalues are positive, negative definite if they are all negative, otherwise indefinite. In the case of the $2\times 2$ diagonal matrices that we are concerned with the eigenvalues are just the two diagonal elements themselves, so determining the Loewner order is trivial. For full $2\times 2$ dual pol or $3\times 3$ quad pol SAR imagery, devising an efficient way to determine the Loewner order is more difficult, see Nielsen (2019).
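For the diagonal $2\times 2$ matrices used here the classification follows directly from the signs of the two diagonal entries of the difference; a small sketch with made-up band differences spells out the three cases:
# Sketch: Loewner order of a diagonal 2x2 difference matrix from its diagonal entries.
def loewner(d_vv, d_vh):
    if d_vv > 0 and d_vh > 0:
        return 'positive definite'    # both bands brighter at t2
    if d_vv < 0 and d_vh < 0:
        return 'negative definite'    # both bands darker at t2
    return 'indefinite'               # the bands changed in opposite directions
for d in [(0.3, 0.1), (-0.2, -0.05), (0.3, -0.1)]:
    print(d, loewner(*d))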
So let's include the Loewner order in our change map:
End of explanation
mp = folium.Map(location=location, zoom_start=12)
mp.add_ee_layer(crop2018, {'min': 0, 'max': 255}, 'crop2018')
mp.add_ee_layer(
c_map.updateMask(c_map.gt(0)), {
'min': 0,
'max': 3,
'palette': ['black', 'yellow', 'blue', 'red']
}, 'c_map')
mp.add_child(folium.LayerControl())
Explanation: Now we display the changes, with positive definite red, negative definite blue, and indefinite yellow:
End of explanation |
6,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining mock connections
Start with some setup.
Step1: Create a simple resource
Step2: Define a mock for the resource
Here we define an object with a method named document and assign it to the connection's mock attribute.
Note
Step3: Call the mocked resource
With a mock in place, we can make the same call as earlier, but instead of making a network connection,
the document method on the connection's mock attribute is called.
Step4: What is going on here?
The mock is not called until the arguments provided to the partial
are evaluated and prepared for the HTTP connection; this ensures that the
mock data matches the actual connection data.
The mock is called with | Python Code:
import sys
sys.path.append('/opt/rhc')
import rhc.micro as micro
import rhc.async as async
import logging
logging.basicConfig(level=logging.DEBUG)
Explanation: Defining mock connections
Start with some setup.
End of explanation
p=micro.load_connection([
'CONNECTION placeholder http://jsonplaceholder.typicode.com',
'RESOURCE document /posts/{id}',
])
async.wait(micro.connection.placeholder.document(1))
Explanation: Create a simple resource
End of explanation
class MyMock(object):
def document(self, method, path, headers, body):
print('method', method)
print('path', path)
print('headers', headers)
print('body', body)
return 'foo'
micro.connection.placeholder.mock = MyMock()
Explanation: Define a mock for the resource
Here we define an object with a method named document and assign it to the connection's mock attribute.
Note: the method name matches the RESOURCE name.
End of explanation
async.wait(micro.connection.placeholder.document(1))
Explanation: Call the mocked resource
With a mock in place, we can make the same call as earlier, but instead of making a network connection,
the document method on the connection's mock attribute is called.
End of explanation
async.wait(micro.connection.placeholder.document(1, test='value'))
Explanation: What is going on here?
The mock is not called until the arguments provided to the partial
are evaluated and prepared for the HTTP connection; this ensures that the
mock data matches the actual connection data.
The mock is called with:
the HTTP method
the path, with any substitutions
headers as a dict
content as a dict, or None if no content
Notes:
The return value from the mock will be used as the partial's response.
The final line, "foo", is the return from the mock document RESOURCE
as printed by the default async.wait callback handler.
If the mock throws an exception, the callback will be called with a non-zero result.
The handler, setup and wrapper functions are not called.
The example uses a class; it could also be a collection of functions in a module.
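To illustrate that last note (a hypothetical sketch; the module name and return value are made up), the mock only needs attributes named after the RESOURCEs, so a module works just as well as a class instance:
# Sketch: a "module-style" mock built at runtime; in practice the function would
# simply live in its own .py file and be imported.
import types
my_mocks = types.ModuleType('my_mocks')
def document(method, path, headers, body):
    return {'path': path, 'body': body}
my_mocks.document = document
micro.connection.placeholder.mock = my_mocks
async.wait(micro.connection.placeholder.document(1))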
Here is an example of content created from unused kwargs:
End of explanation |
6,127 | Given the following text description, write Python code to implement the functionality described.
Description:
Return True if list elements are monotonically increasing or decreasing.
This is how the function will work:
monotonic([1, 2, 4, 20])
True
This is how the function will work:
monotonic([1, 20, 4, 10])
False
This is how the function will work:
monotonic([4, 1, 0, -10])
True
| Python Code:
def monotonic(l: list):
if l == sorted(l) or l == sorted(l, reverse=True):
return True
return False |
6,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Partial Dependence Plots
While feature importance shows what variables most affect predictions, partial dependence plots show how a feature affects predictions.
This is useful to answer questions like
Step1: Our first example uses a decision tree, which you can see below. In practice, you'll use more sophistated models for real-world applications.
Step2: As guidance to read the tree
Step3: A few items are worth pointing out as you interpret this plot
- The y axis is interpreted as change in the prediction from what it would be predicted at the baseline or leftmost value.
- A blue shaded area indicates level of confidence
From this particular graph, we see that scoring a goal substantially increases your chances of winning "Man of The Match." But extra goals beyond that appear to have little impact on predictions.
Here is another example plot
Step4: This graph seems too simple to represent reality. But that's because the model is so simple. You should be able to see from the decision tree above that this is representing exactly the model's structure.
You can easily compare the structure or implications of different models. Here is the same plot with a Random Forest model.
Step5: This model thinks you are more likely to win Man of the Match if your players run a total of 100km over the course of the game. Though running much more causes lower predictions.
In general, the smooth shape of this curve seems more plausible than the step function from the Decision Tree model. Though this dataset is small enough that we would be careful in how we interpret any model.
2D Partial Dependence Plots
If you are curious about interactions between features, 2D partial dependence plots are also useful. An example may clarify this.
We will again use the Decision Tree model for this graph. It will create an extremely simple plot, but you should be able to match what you see in the plot to the tree itself. | Python Code:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
data = pd.read_csv('../input/fifa-2018-match-statistics/FIFA 2018 Statistics.csv')
y = (data['Man of the Match'] == "Yes") # Convert from string "Yes"/"No" to binary
feature_names = [i for i in data.columns if data[i].dtype in [np.int64]]
X = data[feature_names]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
tree_model = DecisionTreeClassifier(random_state=0, max_depth=5, min_samples_split=5).fit(train_X, train_y)
Explanation: Partial Dependence Plots
While feature importance shows what variables most affect predictions, partial dependence plots show how a feature affects predictions.
This is useful to answer questions like:
Controlling for all other house features, what impact do longitude and latitude have on home prices? To restate this, how would similarly sized houses be priced in different areas?
Are predicted health differences between two groups due to differences in their diets, or due to some other factor?
If you are familiar with linear or logistic regression models, partial dependence plots can be interpreted similarly to the coefficients in those models. Though, partial dependence plots on sophisticated models can capture more complex patterns than coefficients from simple models. If you aren't familiar with linear or logistic regressions, don't worry about this comparison.
We will show a couple examples, explain the interpretation of these plots, and then review the code to create these plots.
How it Works
Like permutation importance, partial dependence plots are calculated after a model has been fit. The model is fit on real data that has not been artificially manipulated in any way.
In our soccer example, teams may differ in many ways. How many passes they made, shots they took, goals they scored, etc. At first glance, it seems difficult to disentangle the effect of these features.
To see how partial plots separate out the effect of each feature, we start by considering a single row of data. For example, that row of data might represent a team that had the ball 50% of the time, made 100 passes, took 10 shots and scored 1 goal.
We will use the fitted model to predict our outcome (probability their player won "man of the match"). But we repeatedly alter the value for one variable to make a series of predictions. We could predict the outcome if the team had the ball only 40% of the time. We then predict with them having the ball 50% of the time. Then predict again for 60%. And so on. We trace out predicted outcomes (on the vertical axis) as we move from small values of ball possession to large values (on the horizontal axis).
In this description, we used only a single row of data. Interactions between features may cause the plot for a single row to be atypical. So, we repeat that mental experiment with multiple rows from the original dataset, and we plot the average predicted outcome on the vertical axis.
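That recipe is short enough to write out by hand. The sketch below (not part of the original tutorial) uses the tree_model and val_X objects built above to compute the partial dependence of one feature manually; the pdpbox calls that follow do the same job with nicer plotting.
# Manual partial dependence sketch: overwrite one feature with each grid value,
# predict for every row, and average the predicted probabilities.
import numpy as np
feature = 'Goal Scored'
grid = np.linspace(val_X[feature].min(), val_X[feature].max(), 10)
avg_preds = []
for value in grid:
    X_tmp = val_X.copy()
    X_tmp[feature] = value
    avg_preds.append(tree_model.predict_proba(X_tmp)[:, 1].mean())
print(list(zip(grid, avg_preds)))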
Code Example
Model building isn't our focus, so we won't dwell on the data exploration or model-building code.
End of explanation
from sklearn import tree
import graphviz
tree_graph = tree.export_graphviz(tree_model, out_file=None, feature_names=feature_names)
graphviz.Source(tree_graph)
Explanation: Our first example uses a decision tree, which you can see below. In practice, you'll use more sophistated models for real-world applications.
End of explanation
from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots
# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature='Goal Scored')
# plot it
pdp.pdp_plot(pdp_goals, 'Goal Scored')
plt.show()
Explanation: As guidance to read the tree:
- Nodes with children show their splitting criterion at the top
- The pair of values at the bottom shows the count of False values and True values for the target, respectively, for the data points in that node of the tree.
Here is the code to create the Partial Dependence Plot using the PDPBox library.
End of explanation
feature_to_plot = 'Distance Covered (Kms)'
pdp_dist = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot)
pdp.pdp_plot(pdp_dist, feature_to_plot)
plt.show()
Explanation: A few items are worth pointing out as you interpret this plot:
- The y axis is interpreted as the change in the prediction from what would be predicted at the baseline or leftmost value.
- A blue shaded area indicates the level of confidence.
From this particular graph, we see that scoring a goal substantially increases your chances of winning "Man of The Match." But extra goals beyond that appear to have little impact on predictions.
Here is another example plot:
End of explanation
# Build Random Forest model
rf_model = RandomForestClassifier(random_state=0).fit(train_X, train_y)
pdp_dist = pdp.pdp_isolate(model=rf_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot)
pdp.pdp_plot(pdp_dist, feature_to_plot)
plt.show()
Explanation: This graph seems too simple to represent reality. But that's because the model is so simple. You should be able to see from the decision tree above that this is representing exactly the model's structure.
You can easily compare the structure or implications of different models. Here is the same plot with a Random Forest model.
End of explanation
# Similar to previous PDP plot except we use pdp_interact instead of pdp_isolate and pdp_interact_plot instead of pdp_isolate_plot
features_to_plot = ['Goal Scored', 'Distance Covered (Kms)']
inter1 = pdp.pdp_interact(model=tree_model, dataset=val_X, model_features=feature_names, features=features_to_plot)
pdp.pdp_interact_plot(pdp_interact_out=inter1, feature_names=features_to_plot, plot_type='contour')
plt.show()
Explanation: This model thinks you are more likely to win Man of the Match if your players run a total of 100km over the course of the game. Though running much more causes lower predictions.
In general, the smooth shape of this curve seems more plausible than the step function from the Decision Tree model. Though this dataset is small enough that we would be careful in how we interpret any model.
2D Partial Dependence Plots
If you are curious about interactions between features, 2D partial dependence plots are also useful. An example may clarify this.
We will again use the Decision Tree model for this graph. It will create an extremely simple plot, but you should be able to match what you see in the plot to the tree itself.
End of explanation |
6,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Verily Life Sciences LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: This notebook demonstrates how one can dive deeper into QC results to explain some unexpected patterns. In this notebook, we will see that a few samples in the Platinum Genomes have a very low number of private variants, and we will figure out why.
Eberle, MA et al. (2017) A reference data set of 5.4 million phased human variants validated by genetic inheritance from sequencing a three-generation 17-member pedigree. Genome Research 27
Step2: Install additional Python dependencies plotnine for plotting and jinja2 for performing text replacements in the SQL templates.
Step3: Get a count of private variants
Compute the private variant counts in BigQuery
Running this query is optional as this has already been done and saved to Cloud Storage. See the next section for how to retrieve these results from Cloud Storage.
Step4: Retrieve the private variant counts from Cloud Storage
We can read these values from the CSV created via Sample-Level-QC.Rmd.
Step5: Examine results and outliers
This small cohort does not contain enough samples to estimate the expected number of private variants. It is used here for demonstration purposes only.
Step6: Let's take a look at the samples that are more than one standard deviation away from the mean.
Step8: Next let's see if the sample metadata can be used to help explain the explain the low number of private variants that we see.
Retrieve sample metadata
The platinum genomes samples are also members of the larger 1000 genomes dataset. We can retrieve the metadata for those samples from the 1000 genomes metadata.
Step9: Visualize results by ancestry
Step11: All individuals in this dataset are of the same ancestry, so that does not explain the pattern we see.
Visualize results by relationship
We know from the paper that all members of this cohort are from the same family. | Python Code:
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Verily Life Sciences LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!git clone https://github.com/verilylifesciences/variant-qc.git
Explanation: This notebook demonstrates how one can dive deeper into QC results to explain some unexpected patterns. In this notebook, we will see that a few samples in the Platinum Genomes have a very low number of private variants, and we will figure out why.
Eberle, MA et al. (2017) A reference data set of 5.4 million phased human variants validated by genetic inheritance from sequencing a three-generation 17-member pedigree. Genome Research 27: 157-164. doi:10.1101/gr.210500.116
Setup
Check out the code for the various QC methods to the current working directory. Further down in the notebook we will read the SQL templates from this clone.
End of explanation
!pip install --upgrade plotnine jinja2
import jinja2
import numpy as np
import os
import pandas as pd
import plotnine
from plotnine import *
plotnine.options.figure_size = (10, 6)
# Change this to be your project id.
PROJECT_ID = 'your-project-id' #@param
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
def run_query(sql_template, replacements={}):
if os.path.isfile(sql_template):
sql_template = open(sql_template, "r").read()
sql = jinja2.Template(sql_template).render(replacements)
print('SQL to be executed:\n', sql)
df = pd.io.gbq.read_gbq(sql, project_id=PROJECT_ID, dialect='standard')
print('\nResult shape:\t', df.shape)
return df
Explanation: Install additional Python dependencies plotnine for plotting and jinja2 for performing text replacements in the SQL templates.
End of explanation
# Read the SQL template from the cloned repository in the home directory, perform
# the variable replacements and execute the query.
df = run_query(
sql_template='variant-qc/sql/private_variants.sql',
replacements={
'GENOME_CALL_OR_MULTISAMPLE_VARIANT_TABLE': 'bigquery-public-data.human_genome_variants.platinum_genomes_deepvariant_variants_20180823',
'HIGH_QUALITY_CALLS_FILTER': 'NOT EXISTS (SELECT ft FROM UNNEST(c.FILTER) ft WHERE ft NOT IN ("PASS", "."))'
}
)
Explanation: Get a count of private variants
Compute the private variant counts in BigQuery
Running this query is optional as this has already been done and saved to Cloud Storage. See the next section for how to retrieve these results from Cloud Storage.
End of explanation
df = pd.read_csv("https://storage.googleapis.com/genomics-public-data/platinum-genomes/reports/DeepVariant_Platinum_Genomes_sample_results.csv")[["name", "private_variant_count"]]
df.shape
Explanation: Retrieve the private variant counts from Cloud Storage
We can read these values from the CSV created via Sample-Level-QC.Rmd.
End of explanation
df
Explanation: Examine results and outliers
This small cohort does not contain enough samples to estimate the expected number of private variants. It is used here for demonstration purposes only.
End of explanation
df.loc[abs(df.private_variant_count - df.private_variant_count.mean()) > df.private_variant_count.std(), :]
Explanation: Let's take a look at the samples that are more than one standard deviation away from the mean.
End of explanation
metadata_df = run_query(
    sql_template='''
SELECT
Sample AS name,
Gender AS sex,
Super_Population AS ancestry,
Relationship AS relationship
FROM
`bigquery-public-data.human_genome_variants.1000_genomes_sample_info`
WHERE
    Sample IN ('NA12877', 'NA12878', 'NA12889', 'NA12890', 'NA12891', 'NA12892')
  '''
)
Explanation: Next let's see if the sample metadata can be used to help explain the low number of private variants that we see.
Retrieve sample metadata
The platinum genomes samples are also members of the larger 1000 genomes dataset. We can retrieve the metadata for those samples from the 1000 genomes metadata.
End of explanation
joined_results = pd.merge(df, metadata_df, how='left', on='name')
joined_results.shape
assert(joined_results.shape == (6, 5))
p = (ggplot(joined_results) +
geom_boxplot(aes(x = 'ancestry', y = 'private_variant_count', fill = 'ancestry')) +
theme_minimal()
)
p
Explanation: Visualize results by ancestry
End of explanation
run_query('''
SELECT
*
FROM
`bigquery-public-data.human_genome_variants.1000_genomes_pedigree`
WHERE
Individual_ID IN ('NA12877', 'NA12878', 'NA12889', 'NA12890', 'NA12891', 'NA12892')
''')
p = (ggplot(joined_results) +
geom_text(aes(x = 'name', y = 'private_variant_count', label = 'relationship')) +
theme_minimal()
)
p
Explanation: All individuals in this dataset are of the same ancestry, so that does not explain the pattern we see.
Visualize results by relationship
We know from the paper that all members of this cohort are from the same family.
End of explanation |
6,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<br><br>
<font size=6>
First Steps with<br><br>
Numerical Computing in Python
</font>
</b>
<br><br>
<font size=3>
Paul M. Magwene
<br>
Spring 2016
</font>
</center>
How to use IPython notebooks
This document was written in an IPython notebook. IPython notebooks allow us to
weave together explanatory text, code, and figures.
Don't copy and paste!
Learning to program has similarities to learning a foreign language. You need to
practice writing your own code, not just copying and pasting it from another
document (that's why I'm providing this as a PDF rather than a notebook itself,
to make the copy-and-paste process less convenient).
Part of the practice of learning to program is making mistakes (bugs). Learning
to find and correct bugs in your own code is vital.
Code cells
Each of the grey boxes below that has In [n]
Step1: Gee-whiz!
Let's kick off our tour of Python with some nice visualizations. In this first
section I'm not going to explain any of the code in detail, I'm simply going to
generate some figures to show off some of what Python is capable of. However,
once you work your way through this notebook you should be able to come back to
this first section and understand most of the code written here.
Step2: Numeric data types
One of the simplest ways to use the Python interpreter is as a fancy
calculator. We'll illustrate this below and use this as an opportunity to
introduce the core numeric data types that Python supports.
Step3: Querying objects for their type
There is a built-in Python function called type that we can use to query a
variable for its data type.
Step4: Booleans
Python has a data type to represent True and False values (Boolean variables)
and supports standard Boolean operators like "and", "or", and "not"
Step5: Comparison operators
Python supports comparison operators on numeric data types. When you carry out a
comparison you get back a Boolean (True,False) value.
Step6: Variable assignment
A value or the result of a calculation can be given a name, and then reused in a
different context by referring to that name. This is called variable assignment.
Step7: Functions
A "function" is a named sequence of statements that performs a computation.
Functions allow us to encapsulate or abstract away the steps required to perform
a useful operation or calculation.
There are a number of Python functions that are always available to you
Step9: There are many other built-in functions, and we'll see more examples of these
below. See the Python documentation on "Built-in
Functions" for more
details.
Defining functions
You can write your own functions. The general form of a function definition in Python is
Step10: Importing Functions
Python has a mechanism to allow you to build libraries of code, which can then
be "imported" as needed. Python libraries are usually referred to as "modules".
Here's how we would make functions and various definitions from the math
module available for use.
Step11: If you get tired of writing the module name, you can import all the functions
from a module by writing from math import *. You have to be careful with this
though, as any functions or constants imported this way will overwrite any
variables/names in your current environment that already exist.
At the beginning of this notebook I imported a library for numerical computing
called NumPy as well as a library for plotting called
Matplotlib.
Step12: Numpy includes most of the functions defined in the math module so we didn't
really need to add the math. prefix.
Step13: Lists
Lists are the simplest "data structure". Data structures are computational
objects for storing, accessing, and operating on data.
Lists represent ordered collections of arbitrary objects. We'll begin by working
with lists of numbers.
Step14: Indexing lists
Accessing the elements of a list is called "indexing". In Python lists are
"zero-indexed" which means when you can access lists elements, the first element
has the index 0, the second element has the index 1, ..., and the last
element has the index len(x)-1.
Step15: You can use negative indexing to get elements from the end of a list.
Step16: Indexing can be used to get, set, and delete items in a list.
Step17: You can append and delete list elements as well as concatenate two lists
Step18: Slicing lists
Python lists support the notion of 'slices' - a continuous sublist of a larger
list. The following code illustrates this concept.
Step19: List slices support a "step" specified by a third colon
Step20: As with single indexing, the slice notation can be used to set elements of a
list.
Step21: Finally, there are a number of useful methods associated with list objects, such
as reverse() and sort().
Step22: NumPy arrays
NumPy is an extension package for Python that provides many facilities for numerical computing. There is also a related package called SciPy that provides even more facilities for scientific computing. Both NumPy and SciPy can be downloaded from http
Step23: Notice how all the arithmetic operations operate elementwise on arrays. You can also perform arithmetic operations between arrays, which also operate element wise
Step24: The last example above shows that the lengths of the two arrays have to be the same in order to do element-wise operations.
By default, most operations on arrays work element-wise. However, there are a variety of functions for doing array-wise operations such as matrix multiplication or matrix inversion. Here are a few examples of using NumPy arrays to represent matrices
Step25: Indexing and Slicing NumPy arrays
Like the built-in lists, NumPy arrays are zero-indexed.
Step26: Again, you can use negative indexing to get elements from the end of the vector and slicing to get subsets of the array.
Step27: Comparison operators on arrays
NumPy arrays support the comparison operators, returning arrays of Booleans.
Step28: Combining indexing and comparison on arrays
NumPy arrays allow us to combine the comparison operators with indexing. This facilitates data filtering and subsetting.
Step29: In the first example we retrieved all the elements of x that are larger than 5 (read "x where x is greater than 5"). In the second example we retrieved those elements of x that did not equal six. The third example is slightly more complicated. We combined the logical_or function with comparison and indexing. This allowed us to return those elements of the array x that are either less than four or greater than six. Combining indexing and comparison is a powerful concept. See the numpy documentation on logical functions for more information.
Generating Regular Sequences
Creating sequences of numbers that are separated by a specified value or that follow a particular pattern turns out to be a common task in programming. Python and NumPy have functions to simplify this task.
Step30: You can also do some fancy tricks on lists to generate repeating patterns.
Step31: Mathematical functions applied to arrays
Most of the standard mathematical functions can be applied to numpy arrays; however, you must use the functions defined in the NumPy module.
Step32: Plots with Matplotlib
Matplotlib is a Python library for making nice 2D and
3D plots. There are a number of other plotting libraries available for Python
but matplotlib has probably the most active developer community and is capable
of producing publication quality figures.
Matplotlib plots can be generated in a variety of ways but the easiest way to
get quick plots is to use the functions defined in the matplotlib.pyplot
module.
Step33: Commonly used functions from matplotlib.pyplot include plot, scatter, imshow, and savefig, among others. We explored a decent amount of plotting functionality at the beginning of this notebook. Here are a few more examples.
Step34: Strings
Strings aren't numerical data, but working with strings comes up often
enough in numerical computing that it's worth mentioning them here.
Strings represent textual information, data or input. Strings are an interesting
data type because they share properties with data structures like lists (data
structures will be introduced in the next handout). | Python Code:
help(min)
?min # this will pop-up a documentation window in the ipython notebook
Explanation: <center>
<br><br>
<font size=6>
First Steps with<br><br>
Numerical Computing in Python
</font>
</b>
<br><br>
<font size=3>
Paul M. Magwene
<br>
Spring 2016
</font>
</center>
How to use IPython notebooks
This document was written in an IPython notebook. IPython notebooks allow us to
weave together explanatory text, code, and figures.
Don't copy and paste!
Learning to program has similarities to learning a foreign language. You need to
practice writing your own code, not just copying and pasting it from another
document (that's why I'm providing this as a PDF rather than a notebook itself,
to make the copy-and-paste process less convenient).
Part of the practice of learning to program is making mistakes (bugs). Learning
to find and correct bugs in your own code is vital.
Code cells
Each of the grey boxes below that has In [n]: to the left shows a so-called
"code cell". The text in the code cells is what you should type into the code
cells of your own notebook. The regions starting with Out [n]: show you the
result of the code you type in the preceding input cell(s).
Evaluating code cells
After you type Python code into a code cell, hit Shift-Enter (hold down the
Shift key while you press the Enter (or Return) key) to evaluate the code
cell. If you type valid code you'll usually get some sort of output (at least in
these first few examples). If you make a mistake and get an error message, click
the input code cell and try and correct your mistake(s).
Try your own code
Test your understanding of the examples I've provided by writing additional code
to illustrate the same principle or concept. Don't be afraid to make mistakes.
Help and Documentation
A key skill for becoming an efficient programmer is learning to efficiently
navigate documentation resources. The Python standard library is very well
documented, and can be quickly accessed from the IPython notebook help menu or
online at the http://python.org website. Similar links to some of the more
commonly used scientific and numeric libraries are also found in the IPython
help menu.
In addition, there are several ways to access abbreviated versions of the
documentation from the interpreter itself.
End of explanation
%matplotlib inline
from numpy import *
from scipy import stats
from matplotlib.pyplot import *
# this is a comment
x = array([1,2,3,4,5,6,7,8,9,10])
plot(x,x**2, color='red', marker='o')
xlabel("Length")
ylabel("Area")
title("Length vs Area for Squares")
pass
x = linspace(0, 10, 100) # generate 100 evenly spaced points
# between 0 and 10
sinx = sin(x)
sinsqrx = sinx * sinx
plot(x, sinx, color='red', label='sin(x)')
plot(x, sinsqrx, color='blue', label='sin^2(x)')
legend(loc='best') # add optional legend to plot
pass
# draw 1000 random samples from a normal distribution
# with mean = 1000, sd = 15
mean = 1000
sd = 15
samples = random.normal(mean, sd, size=1000)
# draw a histogram
# normed means to make the total area under the
# histogram sum to 1 (i.e. a density histogram)
hist(samples, bins=50, normed=True, color='steelblue')
# draw probability density function for a normal
# distribution with the same parameters
x = linspace(940,1080,250)
y = stats.norm.pdf(x, loc=mean, scale=sd)
plot(x, y, color='firebrick', linestyle='dashed', linewidth=3)
# label axes
xlabel("x")
ylabel("density")
pass
# the function of 2 variables we want to plot
def f(x,y):
return cos(radians(x)) * sin(radians(y))
# generate a grid of x,y points at 10 step
# intervals from 0 to 360
x,y = meshgrid(arange(0, 361, 10), arange(0, 361, 10))
# calculate a function over the grid
z = f(x,y)
# draw a contour plot representing the function f(x,y)
contourf(x, y, z, cmap='inferno')
title("A contour plot\nof z = cos(x)*sin(y)")
pass
# function from previous plot, now represented in 3D
from mpl_toolkits.mplot3d import Axes3D
fig = figure()
ax = Axes3D(fig)
ax.plot_surface(x, y, z, rstride=2, cstride=2, cmap='inferno')
# setup axis labels
ax.set_xlabel("x (degrees)")
ax.set_ylabel("y (degrees)")
ax.set_zlabel("z")
# set elevation and azimuth for viewing
ax.view_init(68, -11)
title("A 3D representation\nof z = cos(x)*sin(y)")
pass
Explanation: Gee-whiz!
Let's kick off our tour of Python with some nice visualizations. In this first
section I'm not going to explain any of the code in detail, I'm simply going to
generate some figures to show off some of what Python is capable of. However,
once you work your way through this notebook you should be able to come back to
this first section and understand most of the code written here.
End of explanation
# this is a comment, the interpretter ignores it
# you can use comments to add short notes or explanation
2 + 10 # add two integers (whole numbers)
2.0 + 10.0 # add two floating point numbers (real (decimal) numbers)
2 + 10.0 # operations that mix integers and floats return floats
2 * 10 # multiplication of integers
2.0 * 10.0 # multiplication of floats
1.0/5.0 # division
2/5 # in Python 2 this used to default to integer division
# in Python 3 division always returns a float
10 % 3 # The % (modulo) operator yields the remainder after division
2**10 # exponentiation -- 2 raised to the power 10
2**0.5 # exponentiation with fractional powers
# **0.5 = square root, **(1/3.) = cube root
(10+2)/(4-5) # numerical operators differ in their precedence
# contrast the output of this line with the line below
(10+2)/4-5 # it is a good habit to use parentheses to disambiguate
# potentially confusing calculations
(1 + 1j) # complex numbers; we won't use these in the course
# but you might occasionally find the need for them
# in biological research
(1 + 1j) + (3 + 2j) # adding complex numbers
(0 + 1j) * (1j) # complex multiplication
Explanation: Numeric data types
One of the simplest ways to use the Python interpreter is as a fancy
calculator. We'll illustrate this below and use this as an opportunity to
introduce the core numeric data types that Python supports.
End of explanation
type(2)
type(2.0)
type(2 + 10.0) # when adding variables of two numeric types, the outcome
# is always the more general type
Explanation: Querying objects for their type
There is a built-in Python function called type that we can use to query a
variable for it's data type.
End of explanation
x = True
y = False
x
not x
y
not y # if True return False, if False return True
x and y # if both arguments are True return true, else return False
x and (not y)
x or y # if either argument is True, return True, else return False
Explanation: Booleans
Python has a data type to represent True and False values (Boolean variables)
and supports standard Boolean operators like "and", "or", and "not"
End of explanation
4 < 5 # less than
4 > 5 # greater than
5 <= 5.0 # less than or equal to
5 == 5 # tests equality
5 == (5**0.5)**2 # the results of this comparison might surprise you
(5**0.5)**2 # the problem is that sqrt(5) can not be represented
# exactly with floating point numbers. This is not a
# limitation of only Python but is generally true
# for all programming languages
# here's one way to test approximate equality when you suspect
# a floating point calculation might be imprecise
epsilon = 0.0000001
(5 - epsilon) <= ((5**0.5)**2) <= (5 + epsilon)
Explanation: Comparison operators
Python supports comparison operators on numeric data types. When you carry out a
comparison you get back a Boolean (True,False) value.
End of explanation
pi = 3.141592654
radius = 4.0
area_circ = pi * radius**2
# notice that you don't get any output from this code cell
# however, once you evaluate this code cell you will see
# the results of your calculation
area_circ
Explanation: Variable assignment
A value or the result of a calculation can be given a name, and then reused in a
different context by referring to that name. This is called variable assignment.
End of explanation
min(1,2) # find the minimum of its input
max(10, 9, 11) # find maximum of inputs
abs(-99) # return absolute value of numerical input
Explanation: Functions
A "function" is a named sequence of statements that performs a computation.
Functions allow us to encapsulate or abstract away the steps required to perform
a useful operation or calculation.
There are a number of Python funtions that are always available to you:
End of explanation
# a function that carries out a simple mathematical calculation
def area_of_circle(radius):
    """radius of circle --> area of circle"""
return 3.141592654 * radius**2
area_of_circle(1)
area_of_circle(8)
Explanation: There are many other built-in functions, and we'll see more examples of these
below. See the Python documentation on "Built-in
Functions" for more
details.
Defining functions
You can write your own functions. The general form of a function definition in Python is:
def func_name(arg1, arg2, ...):
body of function
return result
Note:
* Python is white space sensitive, body of a function must be indented
(idiomatic style is to indent by 4 spaces NOT tabs)
* Use a Python aware editor/environment to help get indenting correct. Jupyter
will help you get the indentation correct
End of explanation
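# A few more built-in functions, plus a second user-defined function with a
# docstring (an illustrative aside, not part of the original handout):
round(3.14159, 2)    # round to 2 decimal places
sum([1, 2, 3, 4])    # add up the elements of a list
sorted([3, 1, 2])    # return a new sorted list

def circumference_of_circle(radius):
    """radius of circle --> circumference of circle"""
    return 2 * 3.141592654 * radius

circumference_of_circle(1)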
import math
math.cos(2 * 3.141592654) # cosine
math.pi # a constant defined in math
pi = math.pi
math.cos(2 * pi)
Explanation: Importing Functions
Python has a mechanism to allow you to build libraries of code, which can then
be "imported" as needed. Python libraries are usually referred to as "modules".
Here’s how we would make functions and various definitions from the math
module available for use.
End of explanation
from numpy import *
from matplotlib.pyplot import *
Explanation: If you get tired of writing the module name, you can import all the functions
from a module by writing from math import *. You have to be careful with this
though, as any functions or constants imported this way will overwrite any
variables/names in your current environment that already exist.
At the beginning of this notebook I imported a library for numerical computing
called NumPy as well as a library for plotting called
Matplotlib.
End of explanation
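# A safer alternative to "from math import *", shown as an aside: import the
# module under a short alias so its names stay in their own namespace and
# don't overwrite anything already defined.
import math as m
m.sqrt(2)
m.cos(2 * m.pi)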
exp(1) # e^1
log(e) # natural logarithm of e
log10(100) # log base 10 of 100
Explanation: Numpy includes most of the functions defined in the math module so we didn't
really need to add the math. prefix.
End of explanation
x = [1,2,3,4,5] # a list with the numbers 1..5
x
# a list with floats and ints and complex numbers
y = [2.0, 4, 6, 8, 10.0, 11, (1+1j), 3.14159]
y
# lists have a length. We can use the `len` function to get it
len(x)
len(y)
Explanation: Lists
Lists are the simplest "data structure". Data structures are computational
objects for storing, accessing, and operating on data.
List represent ordered collections of arbitrary objects. We'll begin by working
with lists of numbers.
End of explanation
z = [2, 4, 6, 8, 10]
z[0] # first element
z[3] # fourth element
len(z)
z[5] ## this generates an error -- why?
z[4] # last element of z
Explanation: Indexing lists
Accessing the elements of a list is called "indexing". In Python lists are
"zero-indexed" which means when you can access lists elements, the first element
has the index 0, the second element has the index 1, ..., and the last
element has the index len(x)-1.
End of explanation
z[-1] # last element
z[-2] # second to last element
Explanation: You can use negative indexing to get elements from the end of a list.
End of explanation
m = [1, 2, 4, 6, 8, "hike"]
m[-1] = "learning python is so great!" # set the last element
m
del m[0]
m
Explanation: Indexing can be used to get, set, and delete items in a list.
End of explanation
x = [1,2,3]
y = ['a', 'b', 'c', 'd']
x.append(4)
x
x + y
Explanation: You can append and delete list elements as well as concatenate two lists
End of explanation
c = ['a','b','c','d','e','f']
c[0:3] # get the elements of from index 0 up to
# but not including the element at index 3
c[:3] # same as above, first index implied
c[2:5] # from element 2 up to 5
c[3:] # from index three to end (last index implied)
c[-1:0] # how come this returned an empty list?
Explanation: Slicing lists
Python lists support the notion of ‘slices’ - a continuous sublist of a larger
list. The following code illustrates this concept.
End of explanation
c[0:5:2] # c from 0 to 5, step by 2
# you can you a negative step to walk backward over a list
# note where the output stops (why didn't we get 'a'?)
c[-1:0:-1]
Explanation: List slices support a "step" specified by a third colon
End of explanation
c[2:4] = ['C', 'D']
c
Explanation: As with single indexing, the slice notation can be used to set elements of a
list.
End of explanation
d = [1, 5, 3, 4, 1, 11, 3]
d.sort() # sort in place
d
d.reverse() # reverse in place
d
Explanation: Finally, there are a number of useful methods associated with list objects, such
as reverse() and sort().
End of explanation
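# A few other handy list operations, shown as an aside. Unlike d.sort(),
# the built-in sorted() returns a new list and leaves the original untouched.
d = [1, 5, 3, 4, 1, 11, 3]
sorted(d)      # a new sorted list; d itself is unchanged
d.count(1)     # how many times the value 1 occurs in d
d.index(11)    # position of the first occurrence of 11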
from numpy import *
x = array([2,4,6,8,10])
x
type(x)
-x
x**2
x * pi
Explanation: NumPy arrays
NumPy is an extension package for Python that provides many facilities for numerical computing. There is also a related package called SciPy that provides even more facilities for scientific computing. Both NumPy and SciPy can be downloaded from http://www.scipy.org/. NumPy does not come with the standard Python distribution, but it does come as an included package if you use the Anaconda Python distribution. The NumPy package comes with documentation and a tutorial. You can access the documentation here: http://docs.scipy.org/doc/.
The basic data structure in NumPy is the array, which you've already seen in several examples above. As opposed to lists, all the elements in a NumPy array must be of the same type (but this type can differ between different arrays). Arrays are commonly used to represent matrices (2D-arrays) but can be used to represent arrays of arbitrary dimension ($n$-dimensional arrays).
Arithmetic operations on NumPy arrays
End of explanation
y = array([0, 1, 3, 5, 9])
x + y
x * y
z = array([1, 4, 7, 11])
x + z
Explanation: Notice how all the arithmetic operations operate elementwise on arrays. You can also perform arithmetic operations between arrays, which also operate element wise
End of explanation
m = array([[1,2],
[3,4]])
m
m.transpose()
linalg.inv(m)
Explanation: The last example above shows that the lengths of the two arrays have to be the same in order to do element-wise operations.
By default, most operations on arrays work element-wise. However, there are a variety of functions for doing array-wise operations, such as matrix multiplication or matrix inversion. Here are a few examples of using NumPy arrays to represent matrices:
End of explanation
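# Matrix multiplication, mentioned above but not shown: dot() multiplies
# matrices (rather than element-wise), so a matrix times its inverse is
# approximately the identity. Illustrative aside; expect tiny floating
# point errors off the diagonal.
m = array([[1, 2],
           [3, 4]])
dot(m, linalg.inv(m))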
x
x[0]
x[1]
x[4]
x[5]
Explanation: Indexing and Slicing NumPy arrays
Like the built-in lists, NumPy arrays are zero-indexed.
End of explanation
x[-1]
x[-2]
x[2:]
x[::3] # every third element of x
Explanation: Again, you can use negative indexing to get elements from the end of the vector and slicing to get subsets of the array.
End of explanation
x
x < 5
x >= 6
Explanation: Comparison operators on arrays
NumPy arrays support the comparison operators, returning arrays of Booleans.
End of explanation
x = array([2, 4, 6, 10, 8, 7, 9, 2, 11])
x[x > 5]
x[x != 2]
x[logical_or(x <4, x > 8)]
Explanation: Combining indexing and comparison on arrays
NumPy arrays allow us to combine the comparison operators with indexing. This facilitates data filtering and subsetting.
End of explanation
arange(10)
# generate numbers from 3 to 12 (non-inclusive) stepping by 4
arange(3, 12, 4)
arange(1,10,0.5)
Explanation: In the first example we retrieved all the elements of x that are larger than 5 (read "x where x is greater than 5"). In the second example we retrieved those elements of x that did not equal two. The third example is slightly more complicated. We combined the logical_or function with comparison and indexing. This allowed us to return those elements of the array x that are either less than four or greater than eight. Combining indexing and comparison is a powerful concept. See the numpy documentation on logical functions for more information.
Generating Regular Sequences
Creating sequences of numbers that are separated by a specified value or that follow a particular pattern turns out to be a common task in programming. Python and NumPy have functions to simplify this task.
End of explanation
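# A companion to logical_or above (illustrative aside): logical_and keeps the
# elements that satisfy both conditions, and where() returns their indices.
x = array([2, 4, 6, 10, 8, 7, 9, 2, 11])
x[logical_and(x > 4, x < 9)]   # elements strictly between 4 and 9
where(x > 8)                   # indices of the elements greater than 8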
[True,True,False]*3
Explanation: You can also do some fancy tricks on lists to generate repeating patterns.
End of explanation
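# The NumPy counterparts of that list trick (illustrative aside):
# tile() repeats a whole array, repeat() repeats each element in place.
tile(array([True, True, False]), 3)
repeat(array([1, 2, 3]), 3)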
x = array([2, 4, 6, 8])
cos(x)
sin(x)
log(x)
Explanation: Mathematical functions applied to arrays
Most of the standard mathematical functions can be applied to NumPy arrays; however, you must use the functions defined in the NumPy module.
End of explanation
# this tells Jupyter to draw plots in the notebook itself
%matplotlib inline
# import all the plotting functions from matplotlib.pyplot
from matplotlib.pyplot import *
Explanation: Plots with Matplotlib
Matplotlib is a Python library for making nice 2D and
3D plots. There are a number of other plotting libraries available for Python
but matplotlib has probably the most active developer community and is capable
of producing publication quality figures.
Matplotlib plots can be generated in a variety of ways but the easiest way to
get quick plots is to use the functions defined in the matplotlib.pyplot
module.
End of explanation
x = arange(1, 10, 0.25)
y = x + random.normal(size=len(x))
scatter(x,y,color='black')
pass
# see http://matplotlib.org/users/image_tutorial.html
import matplotlib.image as mpimg # required for loading images
img = mpimg.imread("http://matplotlib.org/_images/stinkbug.png")
imshow(img)
pass
# demonstrating subplots
fig, (ax1, ax2, ax3) = subplots(nrows=1, ncols=3)
fig.set_size_inches(15,5)
x = linspace(1, 100, 200)
y = log(x**2) - sqrt(x) + sin(x)
ax1.plot(x, y, color='blue')
ax1.set_xlabel("x")
ax1.set_ylabel("y")
z = sqrt(x) * sin(x) - exp(1/x**2)
ax2.plot(x, z, color='orange')
ax2.set_xlabel("x")
ax2.set_ylabel("z")
ax3.plot(x, y*z,color='purple')
ax3.set_xlabel("x")
ax3.set_ylabel("y * z")
pass
Explanation: Commonly used functions from matplotlib.pyplot include plot, scatter, imshow, and savefig, among others. We explored a decent number of plotting functions at the beginning of this notebook. Here are a few more examples.
End of explanation
# strings can be enclosed in double quotes
s1 = "Beware the Jabberwock, my son!"
print(s1)
type(s1) # what type are you, s1?
# OR in single quotes
s2 = 'The jaws that bite, the claws that catch!'
print(s2)
# If the string you want to write has a quote character
# you need to wrap it in the other type of quote
# note the single quote at the beginning of 'Twas
s3 = "'Twas brillig, and the slithy toves"
print(s3)
# Concatenating (adding) string
s4 = "abc"
s5 = "def"
print(s4 + s5)
Explanation: Strings
Strings aren't numerical data, but working with strings comes up often
enough in numerical computing that it's worth mentioning them here.
Strings represent textual information, data, or input. Strings are an interesting
data type because they share properties with data structures like lists (data
structures will be introduced in the next handout).
End of explanation |
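# A small aside illustrating the list-like behaviour mentioned above:
# strings support len(), indexing, and slicing just like lists do.
s6 = "Beware the Jabberwock, my son!"
len(s6)        # number of characters
s6[0]          # first character
s6[0:6]        # slice of the first six characters
s6.upper()     # strings also have their own methods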
6,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2.1.1. Question
Step1: 2.1.2. Question
Step2: 2.2. Exercise
Step3: 2.3. Exercise
Step4: 2.4. Exercise | Python Code:
from neurodynex.leaky_integrate_and_fire import LIF
print("resting potential: {}".format(LIF.V_REST))
Explanation: 2.1.1. Question: minimal current (calculation)
For the default neuron parameters (see above) compute the minimal amplitude i_min of a step current needed to elicit a spike. You can access these default values in your code and do the calculation with correct units:
End of explanation
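# A hedged sketch of the calculation asked for above: a step current just
# reaches the firing threshold when R * i equals (threshold - v_rest), so
# i_min = (FIRING_THRESHOLD - V_REST) / MEMBRANE_RESISTANCE.
# This assumes the LIF module exposes FIRING_THRESHOLD and MEMBRANE_RESISTANCE
# constants alongside V_REST, as used elsewhere in this exercise.
i_min_calc = (LIF.FIRING_THRESHOLD - LIF.V_REST) / LIF.MEMBRANE_RESISTANCE
print("minimal step current amplitude i_min: {}".format(i_min_calc))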
import brian2 as b2
from neurodynex.leaky_integrate_and_fire import LIF
from neurodynex.tools import input_factory, plot_tools
# create a step current with amplitude= i_min
i_min = 2.005 * b2.namp
step_current = input_factory.get_step_current(t_start=5, t_end=100, unit_time=b2.ms, amplitude= i_min)
# run the LIF model.
# Note: As we do not specify any model parameters, the simulation runs with the default values
(state_monitor,spike_monitor) = LIF.simulate_LIF_neuron(input_current=step_current, simulation_time = 100 * b2.ms)
# plot I and vm
plot_tools.plot_voltage_and_current_traces(
state_monitor, step_current, title="min input", firing_threshold=LIF.FIRING_THRESHOLD)
print("nr of spikes: {}".format(spike_monitor.count[0]))
Explanation: 2.1.2. Question: minimal current (simulation)
Use the value i_min you’ve computed and verify your result:
inject a step current of amplitude i_min for 100ms into the LIF neuron and plot the membrane voltage. Vm should approach the firing threshold but not fire. We have implemented a couple of helper functions to solve this task. Use this code block, but make sure you understand it and you’ve read the docs of the functions LIF.simulate_LIF_neuron(), input_factory.get_step_current() and plot_tools.plot_voltage_and_current_traces().
End of explanation
import numpy as np
import brian2 as b2
from neurodynex.leaky_integrate_and_fire import LIF
from neurodynex.tools import input_factory

# f-I curve: sweep the step-current amplitude from 0nA to 100nA, simulate for
# 500ms at each amplitude and record the firing frequency in Hz.
ABSOLUTE_REFRACTORY_PERIOD = 3.0 * b2.ms
SIMULATION_TIME = 500 * b2.ms

amplitudes = np.arange(0., 100.1, 2.5)  # current amplitudes in nA
firing_rates = []
for i_amp in amplitudes:
    step_current = input_factory.get_step_current(
        t_start=0, t_end=500, unit_time=b2.ms, amplitude=i_amp * b2.namp)
    # the 3ms refractory period is passed via abs_refractory_period
    # (see the LIF.simulate_LIF_neuron docs referenced in the exercise text)
    (state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(
        input_current=step_current, simulation_time=SIMULATION_TIME,
        abs_refractory_period=ABSOLUTE_REFRACTORY_PERIOD)
    # firing frequency in Hz = nr of spikes / 0.5 s
    firing_rates.append(spike_monitor.count[0] / 0.5)
Explanation: 2.2. Exercise: f-I Curve
For a constant input current I, a LIF neuron fires regularly with firing frequency f. If the current is too small (I < I_min) f is 0Hz; for larger I the rate increases. A neuron's firing-rate versus input-amplitude relationship is visualized in an "f-I curve".
2.2.1. Question: f-I Curve and refractoryness
We now study the f-I curve for a neuron with a refractory period of 3ms (see LIF.simulate_LIF_neuron() to learn how to set a refractory period).
Sketch the f-I curve you expect to see
What is the maximum rate at which this neuron can fire?
Inject currents of different amplitudes (from 0nA to 100nA) into a LIF neuron. For each current, run the simulation for 500ms and determine the firing frequency in Hz. Then plot the f-I curve. Pay attention to the low input current.
End of explanation
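# Plot the f-I curve from the sweep above (assumes the amplitudes and
# firing_rates lists filled in the previous code block).
import matplotlib.pyplot as plt
plt.plot(amplitudes, firing_rates, marker='o')
plt.xlabel("step current amplitude [nA]")
plt.ylabel("firing rate [Hz]")
plt.title("f-I curve of a LIF neuron with a 3 ms refractory period")
plt.show()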
# get a random parameter. provide a random seed to have a reproducible experiment
random_parameters = LIF.get_random_param_set(random_seed=432)
# define your test current
test_current = input_factory.get_step_current(
t_start=1, t_end=10, unit_time=b2.ms, amplitude= 2.5 * b2.namp)
# probe the neuron. pass the test current AND the random params to the function
state_monitor, spike_monitor = LIF.simulate_random_neuron(test_current, random_parameters)
# plot
plot_tools.plot_voltage_and_current_traces(state_monitor, test_current, title="experiment")
# print the parameters to the console and compare with your estimates
# LIF.print_obfuscated_parameters(random_parameters)
Explanation: 2.3. Exercise: “Experimentally” estimate the parameters of a LIF neuron
A LIF neuron is determined by the following parameters: Resting potential, Reset voltage, Firing threshold, Membrane resistance, Membrane time-scale, Absolute refractory period. By injecting a known test current into a LIF neuron (with unknown parameters), you can determine the neuron properties from the voltage response.
2.3.1. Question: “Read” the LIF parameters out of the vm plot
Get a random parameter set
Create an input current of your choice.
Simulate the LIF neuron using the random parameters and your test-current. Note that the simulation runs for a fixed duration of 50ms.
Plot the membrane voltage and estimate the parameters. You do not have to write code to analyse the voltage data in the StateMonitor. Simply estimate the values from the plot. For the Membrane resistance and the Membrane time-scale you might have to change your current.
compare your estimates with the true values.
Again, you do not have to write much code. Use the helper functions:
End of explanation
# note the higher resolution when discretizing the sine wave: we specify unit_time=0.1 * b2.ms
sinusoidal_current = input_factory.get_sinusoidal_current(200, 1000, unit_time=0.1 * b2.ms,
amplitude= 2.5 * b2.namp, frequency=250*b2.Hz,
direct_current=0. * b2.namp)
# run the LIF model. By setting the firing threshold to to a high value, we make sure to stay in the linear (non spiking) regime.
(state_monitor, spike_monitor) = LIF.simulate_LIF_neuron(input_current=sinusoidal_current, simulation_time = 120 * b2.ms, firing_threshold=0*b2.mV)
# plot the membrane voltage
plot_tools.plot_voltage_and_current_traces(state_monitor, sinusoidal_current, title="Sinusoidal input current")
print("nr of spikes: {}".format(spike_monitor.count[0]))
Explanation: 2.4. Exercise: Sinusoidal input current and subthreshold response
In the subthreshold regime (no spike), the LIF neuron is a linear system and the membrane voltage is a filtered version of the input current. In this exercise we study the properties of this linear system when it gets a sinusoidal stimulus.
2.4.1. Question
Create a sinusoidal input current (see example below) and inject it into the LIF neuron. Determine the phase and amplitude of the membrane voltage.
End of explanation |
6,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representation with a Feature Cross
In this exercise, you'll experiment with different ways to represent features.
Learning Objectives
Step1: Call the import statements
The following code imports the necessary code to run the code in the rest of this Colaboratory.
Step2: Load, scale, and shuffle the examples
The following code cell loads the separate .csv files and creates the following two pandas DataFrames
Step3: Represent latitude and longitude as floating-point values
Previous Colabs trained on only a single feature or a single synthetic feature. By contrast, this exercise trains on two features. Furthermore, this Colab introduces feature columns, which provide a sophisticated way to represent features.
You create feature columns as follows
Step7: When used, the layer processes the raw inputs, according to the transformations described by the feature columns, and packs the result into a numeric array. (The model will train on this numeric array.)
Define functions that create and train a model, and a plotting function
The following code defines three functions
Step8: Train the model with floating-point representations
The following code cell calls the functions you just created to train, plot, and evaluate a model.
Step9: Task 1
Step10: Represent latitude and longitude in buckets
The following code cell represents latitude and longitude in buckets (bins). Each bin represents all the neighborhoods within a single degree. For example,
neighborhoods at latitude 35.4 and 35.8 are in the same bucket, but neighborhoods in latitude 35.4 and 36.2 are in different buckets.
The model will learn a separate weight for each bucket. For example, the model will learn one weight for all the neighborhoods in the "35" bin", a different weight for neighborhoods in the "36" bin, and so on. This representation will create approximately 20 buckets
Step11: Train the model with bucket representations
Run the following code cell to train the model with bucket representations rather than floating-point representations
Step12: Task 2
Step13: Task 3
Step14: Represent location as a feature cross
The following code cell represents location as a feature cross. That is, the following code cell first creates buckets and then calls tf.feature_column.crossed_column to cross the buckets.
Step15: Invoke the following code cell to test your solution for Task 3. Please ignore the warning messages.
Step16: Task 4
Step17: Task 5 | Python Code:
%tensorflow_version 2.x
Explanation: Representation with a Feature Cross
In this exercise, you'll experiment with different ways to represent features.
Learning Objectives:
After doing this Colab, you'll know how to:
Use tf.feature_column methods to represent features in different ways.
Represent features as bins.
Cross bins to create a feature cross.
The Dataset
Like several of the previous Colabs, this exercise uses the California Housing Dataset.
Use the right version of TensorFlow
End of explanation
#@title Load the imports
# from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from matplotlib import pyplot as plt
# The following lines adjust the granularity of reporting.
pd.options.display.max_rows = 10
pd.options.display.float_format = "{:.1f}".format
tf.keras.backend.set_floatx('float32')
print("Imported the modules.")
Explanation: Call the import statements
The following code imports the necessary code to run the code in the rest of this Colaboratory.
End of explanation
# Load the dataset
train_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv")
test_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv")
# Scale the labels
scale_factor = 1000.0
# Scale the training set's label.
train_df["median_house_value"] /= scale_factor
# Scale the test set's label
test_df["median_house_value"] /= scale_factor
# Shuffle the examples
train_df = train_df.reindex(np.random.permutation(train_df.index))
Explanation: Load, scale, and shuffle the examples
The following code cell loads the separate .csv files and creates the following two pandas DataFrames:
train_df, which contains the training set
test_df, which contains the test set
The code cell then scales the median_house_value to a more human-friendly range and then shuffles the examples.
End of explanation
# Create an empty list that will eventually hold all feature columns.
feature_columns = []
# Create a numerical feature column to represent latitude.
latitude = tf.feature_column.numeric_column("latitude")
feature_columns.append(latitude)
# Create a numerical feature column to represent longitude.
longitude = tf.feature_column.numeric_column("longitude")
feature_columns.append(longitude)
# Convert the list of feature columns into a layer that will ultimately become
# part of the model. Understanding layers is not important right now.
fp_feature_layer = layers.DenseFeatures(feature_columns)
Explanation: Represent latitude and longitude as floating-point values
Previous Colabs trained on only a single feature or a single synthetic feature. By contrast, this exercise trains on two features. Furthermore, this Colab introduces feature columns, which provide a sophisticated way to represent features.
You create feature columns as follows:
Call a tf.feature_column method to represent a single feature, single feature cross, or single synthetic feature in the desired way. For example, to represent a certain feature as floating-point values, call tf.feature_column.numeric_column. To represent a certain feature as a series of buckets or bins, call tf.feature_column.bucketized_column.
Assemble the created representations into a Python list.
A neighborhood's location is typically the most important feature in determining a house's value. The California Housing dataset provides two features, latitude and longitude that identify each neighborhood's location.
The following code cell calls tf.feature_column.numeric_column twice, first to represent latitude as floating-point value and a second time to represent longitude as floating-point values.
This code cell specifies the features that you'll ultimately train the model on and how each of those features will be represented. The transformations (collected in fp_feature_layer) don't actually get applied until you pass a DataFrame to it, which will happen when we train the model.
End of explanation
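# Quick illustrative check (not part of the original Colab): passing a small
# dict of raw values through the layer shows the numeric array the model will
# actually train on. The keys must match the feature column names above.
demo_batch = {"latitude": np.array([34.2, 37.8]),
              "longitude": np.array([-118.3, -122.4])}
print(fp_feature_layer(demo_batch).numpy())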
#@title Define functions to create and train a model, and a plotting function
def create_model(my_learning_rate, feature_layer):
    """Create and compile a simple linear regression model."""
# Most simple tf.keras models are sequential.
model = tf.keras.models.Sequential()
# Add the layer containing the feature columns to the model.
model.add(feature_layer)
# Add one linear layer to the model to yield a simple linear regressor.
model.add(tf.keras.layers.Dense(units=1, input_shape=(1,)))
# Construct the layers into a model that TensorFlow can execute.
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=my_learning_rate),
loss="mean_squared_error",
metrics=[tf.keras.metrics.RootMeanSquaredError()])
return model
def train_model(model, dataset, epochs, batch_size, label_name):
    """Feed a dataset into the model in order to train it."""
features = {name:np.array(value) for name, value in dataset.items()}
label = np.array(features.pop(label_name))
history = model.fit(x=features, y=label, batch_size=batch_size,
epochs=epochs, shuffle=True)
# The list of epochs is stored separately from the rest of history.
epochs = history.epoch
# Isolate the mean absolute error for each epoch.
hist = pd.DataFrame(history.history)
rmse = hist["root_mean_squared_error"]
return epochs, rmse
def plot_the_loss_curve(epochs, rmse):
    """Plot a curve of loss vs. epoch."""
plt.figure()
plt.xlabel("Epoch")
plt.ylabel("Root Mean Squared Error")
plt.plot(epochs, rmse, label="Loss")
plt.legend()
plt.ylim([rmse.min()*0.94, rmse.max()* 1.05])
plt.show()
print("Defined the create_model, train_model, and plot_the_loss_curve functions.")
Explanation: When used, the layer processes the raw inputs, according to the transformations described by the feature columns, and packs the result into a numeric array. (The model will train on this numeric array.)
Define functions that create and train a model, and a plotting function
The following code defines three functions:
create_model, which tells TensorFlow to build a linear regression model and to use fp_feature_layer as the representation of the model's features.
train_model, which will ultimately train the model from training set examples.
plot_the_loss_curve, which generates a loss curve.
End of explanation
# The following variables are the hyperparameters.
learning_rate = 0.05
epochs = 30
batch_size = 100
label_name = 'median_house_value'
# Create and compile the model's topography.
my_model = create_model(learning_rate, fp_feature_layer)
# Train the model on the training set.
epochs, rmse = train_model(my_model, train_df, epochs, batch_size, label_name)
plot_the_loss_curve(epochs, rmse)
print("\n: Evaluate the new model against the test set:")
test_features = {name:np.array(value) for name, value in test_df.items()}
test_label = np.array(test_features.pop(label_name))
my_model.evaluate(x=test_features, y=test_label, batch_size=batch_size)
Explanation: Train the model with floating-point representations
The following code cell calls the functions you just created to train, plot, and evaluate a model.
End of explanation
#@title Double-click to view an answer to Task 1.
# No. Representing latitude and longitude as
# floating-point values does not have much
# predictive power. For example, neighborhoods at
# latitude 35 are not 36/35 more valuable
# (or 35/36 less valuable) than houses at
# latitude 36.
# Representing `latitude` and `longitude` as
# floating-point values provides almost no
# predictive power. We're only using the raw values
# to establish a baseline for future experiments
# with better representations.
Explanation: Task 1: Why aren't floating-point values a good way to represent latitude and longitude?
Are floating-point values a good way to represent latitude and longitude?
End of explanation
resolution_in_degrees = 1.0
# Create a new empty list that will eventually hold the generated feature column.
feature_columns = []
# Create a bucket feature column for latitude.
latitude_as_a_numeric_column = tf.feature_column.numeric_column("latitude")
latitude_boundaries = list(np.arange(int(min(train_df['latitude'])),
int(max(train_df['latitude'])),
resolution_in_degrees))
latitude = tf.feature_column.bucketized_column(latitude_as_a_numeric_column,
latitude_boundaries)
feature_columns.append(latitude)
# Create a bucket feature column for longitude.
longitude_as_a_numeric_column = tf.feature_column.numeric_column("longitude")
longitude_boundaries = list(np.arange(int(min(train_df['longitude'])),
int(max(train_df['longitude'])),
resolution_in_degrees))
longitude = tf.feature_column.bucketized_column(longitude_as_a_numeric_column,
longitude_boundaries)
feature_columns.append(longitude)
# Convert the list of feature columns into a layer that will ultimately become
# part of the model. Understanding layers is not important right now.
buckets_feature_layer = layers.DenseFeatures(feature_columns)
Explanation: Represent latitude and longitude in buckets
The following code cell represents latitude and longitude in buckets (bins). Each bin represents all the neighborhoods within a single degree. For example,
neighborhoods at latitude 35.4 and 35.8 are in the same bucket, but neighborhoods in latitude 35.4 and 36.2 are in different buckets.
The model will learn a separate weight for each bucket. For example, the model will learn one weight for all the neighborhoods in the "35" bin", a different weight for neighborhoods in the "36" bin, and so on. This representation will create approximately 20 buckets:
10 buckets for latitude.
10 buckets for longitude.
End of explanation
# The following variables are the hyperparameters.
learning_rate = 0.04
epochs = 35
# Build the model, this time passing in the buckets_feature_layer.
my_model = create_model(learning_rate, buckets_feature_layer)
# Train the model on the training set.
epochs, rmse = train_model(my_model, train_df, epochs, batch_size, label_name)
plot_the_loss_curve(epochs, rmse)
print("\n: Evaluate the new model against the test set:")
my_model.evaluate(x=test_features, y=test_label, batch_size=batch_size)
Explanation: Train the model with bucket representations
Run the following code cell to train the model with bucket representations rather than floating-point representations:
End of explanation
#@title Double-click for an answer to Task 2.
# Bucket representation outperformed
# floating-point representations.
# However, you can still do far better.
Explanation: Task 2: Did buckets outperform floating-point representations?
Compare the model's root_mean_squared_error values for the two representations (floating-point vs. buckets). Which model produced lower losses?
End of explanation
#@title Double-click to view an answer to Task 3.
# Representing location as a feature cross should
# produce better results.
# In Task 2, you represented latitude in
# one-dimensional buckets and longitude in
# another series of one-dimensional buckets.
# Real-world locations, however, exist in
# two dimensions. Therefore, you should
# represent location as a two-dimensional feature
# cross. That is, you'll cross the 10 or so latitude
# buckets with the 10 or so longitude buckets to
# create a grid of 100 cells.
# The model will learn separate weights for each
# of the cells.
Explanation: Task 3: What is a better way to represent location?
Buckets are a big improvement over floating-point values. Can you identify an even better way to identify location with latitude and longitude?
End of explanation
resolution_in_degrees = 1.0
# Create a new empty list that will eventually hold the generated feature column.
feature_columns = []
# Create a bucket feature column for latitude.
latitude_as_a_numeric_column = tf.feature_column.numeric_column("latitude")
latitude_boundaries = list(np.arange(int(min(train_df['latitude'])), int(max(train_df['latitude'])), resolution_in_degrees))
latitude = tf.feature_column.bucketized_column(latitude_as_a_numeric_column, latitude_boundaries)
# Create a bucket feature column for longitude.
longitude_as_a_numeric_column = tf.feature_column.numeric_column("longitude")
longitude_boundaries = list(np.arange(int(min(train_df['longitude'])), int(max(train_df['longitude'])), resolution_in_degrees))
longitude = tf.feature_column.bucketized_column(longitude_as_a_numeric_column, longitude_boundaries)
# Create a feature cross of latitude and longitude.
latitude_x_longitude = tf.feature_column.crossed_column([latitude, longitude], hash_bucket_size=100)
crossed_feature = tf.feature_column.indicator_column(latitude_x_longitude)
feature_columns.append(crossed_feature)
# Convert the list of feature columns into a layer that will later be fed into
# the model.
feature_cross_feature_layer = layers.DenseFeatures(feature_columns)
Explanation: Represent location as a feature cross
The following code cell represents location as a feature cross. That is, the following code cell first creates buckets and then calls tf.feature_column.crossed_column to cross the buckets.
End of explanation
# The following variables are the hyperparameters.
learning_rate = 0.04
epochs = 35
# Build the model, this time passing in the feature_cross_feature_layer:
my_model = create_model(learning_rate, feature_cross_feature_layer)
# Train the model on the training set.
epochs, rmse = train_model(my_model, train_df, epochs, batch_size, label_name)
plot_the_loss_curve(epochs, rmse)
print("\n: Evaluate the new model against the test set:")
my_model.evaluate(x=test_features, y=test_label, batch_size=batch_size)
Explanation: Invoke the following code cell to test your solution for Task 3. Please ignore the warning messages.
End of explanation
#@title Double-click for an answer to this question.
# Yes, representing these features as a feature
# cross produced much lower loss values than
# representing these features as buckets
Explanation: Task 4: Did the feature cross outperform buckets?
Compare the model's root_mean_squared_error values for the two representations (buckets vs. feature cross). Which model produced
lower losses?
End of explanation
#@title Double-click for possible answers to Task 5.
#1. A resolution of ~0.4 degree provides the best
# results.
#2. Below ~0.4 degree, loss increases because the
# dataset does not contain enough examples in
# each cell to accurately predict prices for
# those cells.
#3. Postal code would be a far better feature
# than latitude X longitude, assuming that
# the dataset contained sufficient examples
# in each postal code.
Explanation: Task 5: Adjust the resolution of the feature cross
Return to the code cell in the "Represent location as a feature cross" section. Notice that resolution_in_degrees is set to 1.0. Therefore, each cell represents an area of 1.0 degree of latitude by 1.0 degree of longitude, which corresponds to a cell of 110 km by 90 km. This resolution defines a rather large neighborhood.
Experiment with resolution_in_degrees to answer the following questions:
What value of resolution_in_degrees produces the best results (lowest loss value)?
Why does loss increase when the value of resolution_in_degrees drops below a certain value?
Finally, answer the following question:
What feature (that does not exist in the California Housing Dataset) would
be a better proxy for location than latitude X longitude.
End of explanation |
6,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating a training set of typical geological structures
(Based on 4-Create-model)
Step1: Single normal fault
We first start with a very simple 3-layer model of a fault
Step2: Idea
Step3: Implementation in an experiment object
Step4: Create experiment object for training set
Step5: Set values for random variables
Step6: And now | Python Code:
from matplotlib import rc_params
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import numpy as np
%matplotlib inline
rcParams.update({'font.size': 20})
Explanation: Generating a training set of typical geological structures
(Based on 4-Create-model)
End of explanation
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 3,
'layer_names' : ['layer 1', 'layer 2', 'layer 3'],
'layer_thickness' : [1500, 500, 1500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (4000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
import pynoddy.output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title = "",
savefig = False, fig_filename = "normal_fault.eps")
Explanation: Single normal fault
We first start with a very simple 3-layer model of a fault
End of explanation
nout.block[np.where(nout.block == 3)] = 1
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title = "", cmap = 'gray_r',
savefig = False, fig_filename = "normal_fault.eps")
Explanation: Idea: add tilting event?!
Let's make it a bit simpler (for our ML algorithm): we simply set block id's to either 1's or 0's (be careful about this choice... is it a good idea?)
End of explanation
import pynoddy.experiment
reload(pynoddy.experiment)
reload(pynoddy.history)
reload(pynoddy.events)
Explanation: Implementation in an experiment object
End of explanation
ts = pynoddy.experiment.Experiment(history = 'normal_fault.his')
sec = ts.get_section()
sec.block[np.where(sec.block == 3)] = 1
# ts.plot_section(data = sec.block)
sec.plot_section(cmap = 'gray_r')
Explanation: Create experiment object for training set: (idea: a test set should easily be produced with new draws from the simulation!)
End of explanation
ts.events[1].properties
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 500.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 500.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'Z',
'stdev': 1500.0,
'type': 'normal'}]
ts.set_parameter_statistics(param_stats)
ts.random_draw()
ts.plot_section()
ts.random_draw()
sec = ts.get_section()
sec.block[np.where(sec.block == 3)] = 1
# ts.plot_section(data = sec.block)
sec.plot_section(cmap = 'gray_r')
Explanation: Set values for random variables
End of explanation
import copy
ts_rev = copy.deepcopy(ts)
ts_rev.events[2].properties['Slip'] = -1000
ts_rev.events[2].properties['X'] += -1000
# "freeze" to update model base parameters
ts_rev.freeze()
ts_rev.plot_section()
ts_rev.random_draw()
ts_rev.plot_section()
ts_rev.change_cube_size(100)
ts_rev.random_draw()
sec = ts_rev.get_section()
sec.block[np.where(sec.block == 3)] = 1
# ts.plot_section(data = sec.block)
sec.plot_section(cmap = 'gray_r')
sec.block.shape
Explanation: And now: changing to reverse fault
Use same base object, but change slip to negative to invoke reverse fault
End of explanation |
6,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Networks in Keras
Step1: The original GAN!
See this paper for details of the approach we'll try first for our first GAN. We'll see if we can generate hand-drawn numbers based on MNIST, so let's load that dataset first.
We'll be referring to the discriminator as 'D' and the generator as 'G'.
Step2: Train
This is just a helper to plot a bunch of generated images.
Step3: Create some random data for the generator.
Step4: Create a batch of some real and some generated data, with appropriate labels, for the discriminator.
Step5: Train a few epochs, and return the losses for D and G. In each epoch we
Step6: MLP GAN
We'll keep things simple by making D & G plain ole' MLPs.
Step7: The loss plots for most GANs are nearly impossible to interpret - which is one of the things that make them hard to train.
Step8: This is what's known in the literature as "mode collapse".
Step9: OK, so that didn't work. Can we do better?...
DCGAN
There are lots of ideas out there to make GANs train better, since they are notoriously painful to get working. The paper introducing DCGANs is the main basis for our next section. Also see https
Step10: Our generator uses a number of upsampling steps as suggested in the above papers. We use nearest neighbor upsampling rather than fractionally strided convolutions, as discussed in our style transfer notebook.
Step11: The discriminator uses a few downsampling steps through strided convolutions.
Step12: We train D a "little bit" so it can at least tell a real image from random noise.
Step13: Now we can train D & G iteratively.
Step14: Better than our first effort, but still a lot to be desired | Python Code:
%matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
from tqdm import tqdm
Explanation: Generative Adversarial Networks in Keras
End of explanation
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
n = len(X_train)
X_train = X_train.reshape(n, -1).astype(np.float32)
X_test = X_test.reshape(len(X_test), -1).astype(np.float32)
X_train /= 255.; X_test /= 255.
Explanation: The original GAN!
See this paper for details of the approach we'll try first for our first GAN. We'll see if we can generate hand-drawn numbers based on MNIST, so let's load that dataset first.
We'll be referring to the discriminator as 'D' and the generator as 'G'.
End of explanation
def plot_gen(G, n_ex=16):
plot_multi(G.predict(noise(n_ex)).reshape(n_ex, 28,28), cmap='gray')
Explanation: Train
This is just a helper to plot a bunch of generated images.
End of explanation
def noise(bs): return np.random.rand(bs,100)
Explanation: Create some random data for the generator.
End of explanation
def data_D(sz, G):
real_img = X_train[np.random.randint(0,n,size=sz)]
X = np.concatenate((real_img, G.predict(noise(sz))))
return X, [0]*sz + [1]*sz
def make_trainable(net, val):
net.trainable = val
for l in net.layers: l.trainable = val
Explanation: Create a batch of some real and some generated data, with appropriate labels, for the discriminator.
End of explanation
def train(D, G, m, nb_epoch=5000, bs=128):
dl,gl=[],[]
for e in tqdm(range(nb_epoch)):
X,y = data_D(bs//2, G)
dl.append(D.train_on_batch(X,y))
make_trainable(D, False)
gl.append(m.train_on_batch(noise(bs), np.zeros([bs])))
make_trainable(D, True)
return dl,gl
Explanation: Train a few epochs, and return the losses for D and G. In each epoch we:
Train D on one batch from data_D()
Train G to create images that the discriminator predicts as real.
End of explanation
MLP_G = Sequential([
Dense(200, input_shape=(100,), activation='relu'),
Dense(400, activation='relu'),
Dense(784, activation='sigmoid'),
])
MLP_D = Sequential([
Dense(300, input_shape=(784,), activation='relu'),
Dense(300, activation='relu'),
Dense(1, activation='sigmoid'),
])
MLP_D.compile(Adam(1e-4), "binary_crossentropy")
MLP_m = Sequential([MLP_G,MLP_D])
MLP_m.compile(Adam(1e-4), "binary_crossentropy")
dl,gl = train(MLP_D, MLP_G, MLP_m, 8000)
Explanation: MLP GAN
We'll keep things simple by making D & G plain ole' MLPs.
End of explanation
plt.plot(dl[100:])
plt.plot(gl[100:])
Explanation: The loss plots for most GANs are nearly impossible to interpret - which is one of the things that make them hard to train.
End of explanation
plot_gen(MLP_G)
Explanation: This is what's known in the literature as "mode collapse".
End of explanation
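# A rough, illustrative way to see the collapse numerically (not from the
# original notebook): if G maps very different noise vectors to nearly
# identical images, the mean pairwise distance between samples is close to 0.
samples = MLP_G.predict(noise(64))
dists = np.sqrt(((samples[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1))
print("mean pairwise distance between generated images:", dists.mean())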
X_train = X_train.reshape(n, 28, 28, 1)
X_test = X_test.reshape(len(X_test), 28, 28, 1)
Explanation: OK, so that didn't work. Can we do better?...
DCGAN
There are lots of ideas out there to make GANs train better, since they are notoriously painful to get working. The paper introducing DCGANs is the main basis for our next section. Also see https://github.com/soumith/ganhacks for many tips!
Because we're using a CNN from now on, we'll reshape our digits into proper images.
End of explanation
CNN_G = Sequential([
Dense(512*7*7, input_dim=100, activation=LeakyReLU()),
BatchNormalization(mode=2),
Reshape((7, 7, 512)),
UpSampling2D(),
Convolution2D(64, 3, 3, border_mode='same', activation=LeakyReLU()),
BatchNormalization(mode=2),
UpSampling2D(),
Convolution2D(32, 3, 3, border_mode='same', activation=LeakyReLU()),
BatchNormalization(mode=2),
Convolution2D(1, 1, 1, border_mode='same', activation='sigmoid')
])
Explanation: Our generator uses a number of upsampling steps as suggested in the above papers. We use nearest neighbor upsampling rather than fractionally strided convolutions, as discussed in our style transfer notebook.
End of explanation
CNN_D = Sequential([
Convolution2D(256, 5, 5, subsample=(2,2), border_mode='same',
input_shape=(28, 28, 1), activation=LeakyReLU()),
Convolution2D(512, 5, 5, subsample=(2,2), border_mode='same', activation=LeakyReLU()),
Flatten(),
Dense(256, activation=LeakyReLU()),
Dense(1, activation = 'sigmoid')
])
CNN_D.compile(Adam(1e-3), "binary_crossentropy")
Explanation: The discriminator uses a few downsampling steps through strided convolutions.
End of explanation
sz = n//200
x1 = np.concatenate([np.random.permutation(X_train)[:sz], CNN_G.predict(noise(sz))])
CNN_D.fit(x1, [0]*sz + [1]*sz, batch_size=128, nb_epoch=1, verbose=2)
CNN_m = Sequential([CNN_G, CNN_D])
CNN_m.compile(Adam(1e-4), "binary_crossentropy")
K.set_value(CNN_D.optimizer.lr, 1e-3)
K.set_value(CNN_m.optimizer.lr, 1e-3)
Explanation: We train D a "little bit" so it can at least tell a real image from random noise.
End of explanation
dl,gl = train(CNN_D, CNN_G, CNN_m, 2500)
plt.plot(dl[10:])
plt.plot(gl[10:])
Explanation: Now we can train D & G iteratively.
End of explanation
plot_gen(CNN_G)
Explanation: Better than our first effort, but it still leaves a lot to be desired...
End of explanation |
6,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Estimating the optimal number of topics ($k$)
Non-negative matrix factorization approximates $A$, the document-term matrix, in the following way
Step2: Weighted Jaccard average stability
The figure below shows this metric for a number of topics varying between 10 and 50 (higher is better).
Step3: Symmetric Kullback-Leibler divergence
The figure below shows this metric for a number of topics varying between 10 and 50 (lower is better).
Step4: Guided by the two metrics described previously, we manually evaluate the quality of the topics identified with $k$ varying between 15 and 20. Eventually, we judge that the best results are achieved with NMF for $k=15$.
Step5: Results
Description of the discovered topics
The table below lists the most relevant words for each of the 15 topics discovered from the articles with NMF. They reveal that the people who form the EGC society are interested in a wide variety of both theoretical and applied issues. For instance, topics 11 and 12 are related to theoretical issues
Step6: In the following, we leverage the discovered topics to highlight interesting particularities about the EGC society. To be able to analyze the topics, supplemented with information about the related papers, we partition the papers into 15 non-overlapping clusters, i.e. a cluster per topic. Each article $i \in [0;n-1]$ is assigned to the cluster $j$ that corresponds to the topic with the highest weight $w_{ij}$
Step7: Shifting attention, evolving interests
Here we focus on topics 12 (social network analysis and mining) and 3 (association rule mining). The following figures describe these topics in terms of their respective top 10 words and top 3 documents.
Step8: Topic #12
Top 10 words
Step9: Top 3 articles
Step10: Topic #3
Top 10 words
Step11: Top 3 articles
Step12: Evolution of the frequencies of topics 3 and 12
The figure below shows the frequency of topics 12 (social network analysis and mining) and 3 (association rule mining) per year, from 2004 until 2015. The frequency of a topic for a given year is defined as the proportion of articles, among those published this year, that belong to the corresponding cluster. This figure reveals two opposite trends | Python Code:
from tom_lib.structure.corpus import Corpus
from tom_lib.visualization.visualization import Visualization
corpus = Corpus(source_file_path='input/egc_lemmatized.csv',
language='french',
vectorization='tfidf',
max_relative_frequency=0.8,
min_absolute_frequency=4)
print('corpus size:', corpus.size)
print('vocabulary size:', len(corpus.vocabulary))
Explanation: Tutorial: topic modeling to analyze the EGC conference
EGC is a French-speaking conference on knowledge discovery in databases (KDD). In this notebook we show how to use TOM for inferring latent topics that pervade the corpus of articles published at EGC between 2004 and 2015 using non-negative matrix factorization. Based on the discovered topics we use TOM to shed light on interesting facts on the topical structure of the EGC society.
Loading and vectorizing the corpus
We prune words which absolute frequency in the corpus is less than 4, as well as words which relative frequency is higher than 80%, with the aim to only keep the most significant ones. Eventually, we build the vector space representation of these articles with $tf \cdot idf$ weighting. It is a $n \times m$ matrix denoted by $A$, where each line represents an article, with $n = 817$ (i.e. the number of articles) and $m = 1738$ (i.e. the number of words).
End of explanation
from tom_lib.nlp.topic_model import NonNegativeMatrixFactorization
topic_model = NonNegativeMatrixFactorization(corpus)
Explanation: Estimating the optimal number of topics ($k$)
Non-negative matrix factorization approximates $A$, the document-term matrix, in the following way:
$$
A \approx HW
$$
where $H$ is a $n \times k$ matrix that describes the documents in terms of topics, and $W$ is a $k \times m$ matrix that describes topics in terms of words. More precisely, the coefficient $h_{i,j}$ defines the importance of topic $j$ in article $i$, and the coefficient $w_{i,j}$ defines the importance of word $j$ in topic $i$.
Determining an appropriate value of $k$ is critical to ensure a pertinent analysis of the EGC anthology. If $k$ is too small, then the discovered topics will be too vague; if $k$ is too large, then the discovered topics will be too narrow and may be redundant. To help us with this task, we compute two metrics implemented in TOM : the stability metric proposed by Greene et al. (2014) and the spectral metric proposed by Arun et al. (2010).
End of explanation
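# Illustrative aside (not part of the TOM pipeline): the same kind of
# factorization with scikit-learn on a tiny random matrix, just to make the
# shapes of H (documents x topics) and W (topics x words) concrete.
import numpy as np
from sklearn.decomposition import NMF
A_demo = np.random.rand(6, 12)           # toy 6-document, 12-word matrix
nmf_demo = NMF(n_components=3, random_state=0)
H_demo = nmf_demo.fit_transform(A_demo)  # 6 x 3: documents described by topics
W_demo = nmf_demo.components_            # 3 x 12: topics described by words
print(H_demo.shape, W_demo.shape)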
from bokeh.io import show, output_notebook
from bokeh.plotting import figure
output_notebook()
p = figure(plot_height=250)
p.line(range(10, 51), topic_model.greene_metric(min_num_topics=10, step=1, max_num_topics=50, top_n_words=10, tao=10), line_width=2)
show(p)
Explanation: Weighted Jaccard average stability
The figure below shows this metric for a number of topics varying between 10 and 50 (higher is better).
End of explanation
p = figure(plot_height=250)
p.line(range(10, 51), topic_model.arun_metric(min_num_topics=10, max_num_topics=50, iterations=10), line_width=2)
show(p)
Explanation: Symmetric Kullback-Leibler divergence
The figure below shows this metric for a number of topics varying between 10 and 50 (lower is better).
End of explanation
k = 15
topic_model.infer_topics(num_topics=k)
Explanation: Guided by the two metrics described previously, we manually evaluate the quality of the topics identified with $k$ varying between 15 and 20. Eventually, we judge that the best results are achieved with NMF for $k=15$.
End of explanation
import pandas as pd
pd.set_option('display.max_colwidth', 500)
d = {'Most relevant words': [', '.join([word for word, weight in topic_model.top_words(i, 10)]) for i in range(k)]}
df = pd.DataFrame(data=d)
df.head(k)
Explanation: Results
Description of the discovered topics
The table below lists the most relevant words for each of the 15 topics discovered from the articles with NMF. They reveal that the people who form the EGC society are interested in a wide variety of both theoretical and applied issues. For instance, topics 11 and 12 are related to theoretical issues: topic 11 covers papers about model and variable selection, and topic 12 covers papers that propose new or improved learning algorithms. On the other hand, topics 0 and 6 are related to applied issues: topic 13 covers papers about social network analysis, and topic 6 covers papers about Web usage mining.
End of explanation
p = figure(x_range=[str(_) for _ in range(k)], plot_height=350, x_axis_label='topic', y_axis_label='proportion')
p.vbar(x=[str(_) for _ in range(k)], top=topic_model.topics_frequency(), width=0.7)
show(p)
Explanation: In the following, we leverage the discovered topics to highlight interesting particularities about the EGC society. To be able to analyze the topics, supplemented with information about the related papers, we partition the papers into 15 non-overlapping clusters, i.e. a cluster per topic. Each article $i \in [0; n-1]$ is assigned to the cluster $j$ that corresponds to the topic with the highest weight $w_{ij}$:
\begin{equation}
\text{cluster}_i = \underset{j}{\mathrm{argmax}}(w_{i,j})
\label{eq:cluster}
\end{equation}
Global topic proportions
End of explanation
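As an aside (not in the original notebook), the cluster assignment defined above is a single argmax once the document-topic weights are available as an array; the name doc_topic_weights is an assumption made here, not a tom_lib attribute:
import numpy as np

doc_topic_weights = np.random.rand(817, 15)        # placeholder for the n x k weights w_{i,j}
clusters = np.argmax(doc_topic_weights, axis=1)    # cluster_i = argmax_j w_{i,j}
print(clusters[:10])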
def plot_top_words(topic_id):
words = [word for word, weight in topic_model.top_words(topic_id, 10)]
weights = [weight for word, weight in topic_model.top_words(topic_id, 10)]
p = figure(x_range=words, plot_height=300, plot_width=800, x_axis_label='word', y_axis_label='weight')
p.vbar(x=words, top=weights, width=0.7)
show(p)
def top_documents_df(topic_id):
top_docs = topic_model.top_documents(topic_id, 3)
d = {'Article title': [corpus.title(doc_id) for doc_id, weight in top_docs], 'Year': [int(corpus.date(doc_id)) for doc_id, weight in top_docs]}
df = pd.DataFrame(data=d)
return df
Explanation: Shifting attention, evolving interests
Here we focus on topics 12 (social network analysis and mining) and 3 (association rule mining). The following figures describe these topics in terms of their respective top 10 words and top 3 documents.
End of explanation
plot_top_words(12)
Explanation: Topic #12
Top 10 words
End of explanation
top_documents_df(12).head()
Explanation: Top 3 articles
End of explanation
plot_top_words(3)
Explanation: Topic #3
Top 10 words
End of explanation
top_documents_df(3).head()
Explanation: Top 3 articles
End of explanation
p = figure(plot_height=250, x_axis_label='year', y_axis_label='topic frequency')
p.line(range(2004, 2015), [topic_model.topic_frequency(3, date=i) for i in range(2004, 2015)], line_width=2, line_color='blue', legend='topic #3')
p.line(range(2004, 2015), [topic_model.topic_frequency(12, date=i) for i in range(2004, 2015)], line_width=2, line_color='red', legend='topic #12')
show(p)
Explanation: Evolution of the frequencies of topics 3 and 12
The figure below shows the frequency of topics 12 (social network analysis and mining) and 3 (association rule mining) per year, from 2004 until 2015. The frequency of a topic for a given year is defined as the proportion of articles, among those published this year, that belong to the corresponding cluster. This figure reveals two opposite trends: topic 12 is emerging and topic 3 is fading over time. While there was apparently no article about social network analysis in 2004, in 2013, 12% of the articles presented at the conference were related to this topic. In contrast, papers related to association rule mining were the most frequent in 2006 (12%), but their frequency dropped down to as low as 0.2% in 2014. This illustrates how the attention of the members of the EGC society is shifting between topics through time. This goes on to show that the EGC society is evolving and is enlarging its scope to incorporate works about novel issues.
End of explanation |
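For illustration only (not part of the original notebook), the per-year topic frequency described above is a normalized count of cluster labels within each publication year; the years and clusters arrays below are placeholder inputs:
import numpy as np
import pandas as pd

years = np.random.choice(range(2004, 2016), size=817)      # publication year of each article
clusters = np.random.randint(0, 15, size=817)               # cluster label of each article
df = pd.DataFrame({'year': years, 'cluster': clusters})
freq = df.groupby('year')['cluster'].value_counts(normalize=True).unstack(fill_value=0)
print(freq[[3, 12]])                                        # topics 3 and 12 over time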
6,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: https
Step2: $$I_{xivia} \approx (twitter + instagram + \Delta facebook ) \bmod 2$$
$$I_{xivia} \approx (note + mathtodon) \bmod 2$$ | Python Code:
# Reading plan snippet
from datetime import date
import math
def reading_plan(title, total_number_of_pages, period):
    # Ask for the current page, then report how many pages per day are needed
    # to finish the book by the (year, month, day) deadline given in `period`.
    current_page = int(input("Current page?: "))
    deadline = (date(*period) - date.today()).days
    remaining_pages = total_number_of_pages - current_page
    print(title, period, "by:", math.ceil(remaining_pages / deadline), "p/day, remaining",
          remaining_pages, "p /", deadline, "days")
    print(date.today(), ": read up to p.", current_page + math.ceil(remaining_pages / deadline))
Explanation: <a href="https://colab.research.google.com/github/hidenori-t/snippet/blob/master/reading_plan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
reading_plan("『哲学は何を問うてきたか』", 38, (2020,6,15))
reading_plan("『死す哲』", 261, (2020,6,15))
Explanation: https://wwp.shizuoka.ac.jp/philosophy/%E5%93%B2%E5%AD%A6%E5%AF%BE%E8%A9%B1%E5%A1%BE/
End of explanation
reading_plan("『メノン』", 263, (2020,10,17))
date(2020, 6, 20) - date.today()
reading_plan("『饗宴 解説』", 383, (2020,6,20))
Explanation: $$I_{xivia} \approx (twitter + instagram + \Delta facebook ) \bmod 2$$
$$I_{xivia} \approx (note + mathtodon) \bmod 2$$
End of explanation |
6,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to Visualizing Astronomical Images
Version 0.1
This session has focused on image processing, with particular attention paid to how we make standard measurements (such as flux or position) in data taken with a wide-field survey, such as the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory.
Of course, as every good astro-data scientist knows, you should
Step1: If you haven't already, please download the images that will be necessary for this exercise. Download the entire data directory and keep it in the same folder as this notebook.
Some quick background
Step2: The first thing that jumps out of this image is all the white pixels. They are $\equiv$ nan because they have been flagged in the ZTF image processing pipeline as unreliable/no good.
Note - from an aesthetic standpoint, you can make these disappear by inputting the following
Step3: We can see a nearly logarithmic distribution, with nearly all the pixels near the "noise floor" of 175 or so, and a large tail to larger values.
Based on this histogram, we will clip the range we show to create better contrast in the visualization.
Problem 1c
Replot the i-band image while limiting the range that is displayed to extend from $m$ the minimum pixel value to $M$ the maximum pixel value. While it is possible to do this directly in imshow(), use numpy to limit the range of what is plotted (the utility of this will become clear later).
Step4: Between these two examples we have seen the challenge of linear scales
Step5: Once again the brightest pixels dominate the range that is plotted, however, fainter structures are far more clear than with the linear stretch.
It is also worth noting that the eye's response to intensity is roughly logarithmic*. Therefore, plotting the data with a logarithmic stretch "mimics" what we actually see when looking up at the night sky.
* This is the origin of the "magnitude" system, which is one of the worst units conventions in all of science.
As before, we can do better by limiting the range that is displayed after the conversion to a log stretch.
Problem 1e
Re-plot the i-band data with a log stretch and a cap on the minimum and maximum values to better highlight the "shadows" or faint structure within the data.
Step6: With the log scaling, we finally see the beautiful barred spiral galaxy to the northwest of the ring nebula. However, as was the case with the linear scaling, in order to see the spiral arms in the galaxy all the structure in the ring nebula gets blown out.
(There are also issues with log scaling when making false color images - we will come back to this.)
It is now generally accepted that the best scaling for CCD images is the inverse hyperbolic sine. This non-linear transformation $$\operatorname{arsinh} x = \ln (x + \sqrt{x^2 + 1})$$ has the really nice property that for $|x| \ll 1$ $\operatorname{arsinh} x \approx x$ and for $|x| \gg 1$ $\operatorname{arsinh} x \approx \ln (2x)$. Therefore, with a single transformation we can get the best properties of each of the stretches considered above.
Problem 1f
Plot the i-band data with an inverse hyperbolic sine scaling.
Step7: Off the shelf that looks pretty good. However we are missing some of the fainter structures. We can "re-center" the data so that the low-level shadows are closer to the $\sim$linear portion of the $\operatorname{arsinh}$ response curve.
Problem 1g
Subtract the median from the data and re-plot using an inverse hyperbolic sine scaling.
Hint – do not forget about the nan values in the array.
Step8: And just like that we have the best looking image (by far!) with a relatively minimal amount of effort. In this representation the ring nebula is a little saturated/washed out. We can correct that with a softening parameter (more below).
From Problem 1 it is clear
Step9: To display an RGB color image with imshow() we need to create an MxNx3 array, where MxN is the size of the image, and the last axis corresponds to the red, green, and blue channels respectively.
Building this 3D array is simple using numpy.dstack().
Problem 2b
Build an RGB array of the ZTF images and display the data using imshow().
Step10: That looks terrible!
As the warning states, RGB data needs to be scaled from 0 to 1 (or 0 to 255 for integer inputs) in order to properly display the relative intensities in each channel.
Problem 2c
Build a helper function scaled_intensity() that takes as input 3 MxN arrays, and returns an MxNx3 array in which the input arrays have all been rescaled between 0 and 1.
Use your function to display a false color image of the ring nebula.
Hint –– here (and below) it may be helpful to write a solution in multiple cells.
Step11: As before, there are few pixels bright enough over the full scale of the image to see any real structure in the data (though you can make out hints of the ring nebula in this otherwise awful image).
There are two ways to address this and we will do both
Step12: Now the ring nebula stands out (AND the color is telling us something: the strong green color reflects strong emission from $\mathrm{H}\alpha$ in the outer layers of the nebula, while the blue center is the result of [O III] emission); however, there is very little information being transmitted by the rest of the image. We can improve this by adding limits to the plotting range.
Problem 2e
Modify scaled_log_intensity() to "clip" each array at a maximum and minimum value. Replot your RGB array after clipping.
Hint – in principle this means six new variables should be added to the function (r_min, r_max, g_min, g_max, b_min, b_max). For now, use the same lower and upper bound for all 3 filters for simplicity. np.percentile() is a decent way to do this uniformly across all 3 filters.
Step13: We have a false color image! The problem, as highlighted in Lupton et al. 2004, is that the "standard" log scaling of each channel leads to the cores of each of the individual stars appearing as a whitish color. While we have a false color image, we have actually lost all the color information.
Problem 3) Lupton et al. (2004)
As the Sloan Digital Sky Survey began producing multi-filter wide-field images at an "industrial" scale, it was quickly realized that superior solutions were needed for visualizing the 5 filter images (in particular, the use of "log" scaling dramatically reduced the information content).
Lupton et al. proposed a solution that has now been widely adopted. Half of the solution is to use $\operatorname{arsinh}$, and the other half is to be clever about defining the relative color in each pixel (perhaps the most useful insight from Lupton et al.).
We start by defining the intensity in each pixel $I \equiv (r + g + b)/3$, where the $r$, $g$, and $b$ values are the value in each pixel of the R(ed), G(reen), and B(lue) images, respectively. Then the output false color image can be defined by
Step14: Problem 3b
Create a new function tmp_lupton_2 that does everything in tmp_lupton_1 but also multiplies the individual arrays by $f(I)$ as defined above. There will be 3 new parameters in the model now
Step15: Problem 3c
Use tmp_lupton_2 to rescale the ZTF data and plot the results.
Hint – if the results appear a little underwhelming, do not forget to subtract the median value from the input arrays.
Step16: You should already see a huge improvement relative to the $\log$ scaled false color image that was produced in Problem 2. Further improvements can still be made, however. For instance, the background is a little "muddy" in the sense that everything looks a little brown instead of black.
Problem 3d
Create a new function lupton that does everything in tmp_lupton_2 but also clips and normalizes data with $I \equiv 0$ or $\mathrm{max_{RGB}} > 1$.
Step17: That looks worse than what we had before!
However - we have not attempted to tune the visualization parameters at all (and the default values in the solutions, namely $Q = 1$ and $\alpha = 1$, are atrocious).
Problem 3e
Re-plot the ZTF data after adjusting the tuning parameters $m, Q, \mathrm{and} \alpha$. Expect some trial and error.
Hint –– a footnote in (Lupton et al. 2004) suggests setting $Q \rightarrow 0$ to tune $\alpha$ in such a way to highlight faint structures, then adjusting $Q$ to highlight the bright regions in the image.
Hint 2 –– it is very likely that your initial "solution" will appear overly "green." This is because the color-balance is off. One secret about creating such images is that you are allowed to play with the relative intensities of the R, G, and B channels. This is a False color image, after all. If you want something that appears less "green" try multiplying the B or R channel (or both) by some constant $> 1$ to get a color balance you prefer. The optimal solution maximizes the salience of your final image.
Step18: Wow!
This image of the Ring Nebula and the nearby spiral is really nice. The background is black. The stars are not saturated (in the sense that it is extremely easy to tell the difference between the red stars and blue stars). Structure in the ring nebula can be seen (though the outermost halo is missing, see e.g., 3c). And the nearby barred-spiral looks especially good. The nucleus is somewhat orange, suggesting an older stellar population, while the arms are clearly blue (and therefore forming stars at a higher rate than the nucleus).
This is a very very nice depiction of the ring nebula.
Problem 4) Astropy
As is often the case, the good people at astropy have already created a helper function that essentially does everything that we just developed in this notebook. In this case, with the astropy.visualization library there is a function called make_lupton_rgb that takes as input 3 MxN arrays, and returns a scaled MxNx3 rgb array for display purposes.
Problem 4a
Load make_lupton_rgb and make a plot of the results. How does it compare to your own solution?
Hint –– the make_lupton_rgb function uses a parameter called stretch instead of $\alpha$. As you tune the output note that $\mathrm{stretch} \approx 1/\alpha$.
Step19: This is an extremely similar solution to what we derived.
The stars all more or less look the same, though the outermost halo of the ring nebula is now more clear (at the cost of saturating the inner $\mathrm{H}\alpha$ ring).
Problem 4b
This is optional. In the cell below, images from SDSS are loaded. Use make_lupton_rgb to create a false color RGB image. The star formation rates in the three main galaxies in the image are actually somewhat different; see if you can highlight that. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from astropy.io import fits
%matplotlib notebook
Explanation: An Introduction to Visualizing Astronomical Images
Version 0.1
This session has focused on image processing, with particular attention paid to how we make standard measurements (such as flux or position) in data taken with a wide-field survey, such as the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory.
Of course, as every good astro-data scientist knows, you should:
<div align="center">
<br>
<font size="+7"> WORRY ABOUT THE DATA </font>
</div>
For CCDs, that means looking at the actual images.
This lecture is about the visualization of astronomical images. There are many different aspects that can (should?) be considered within this context. We will focus on a primary principle of visualization (recall previous sessions):
Salience - the most important aspects of the figure should stand out
As is often the case when it comes to image processing, Robert Lupton (Lupton et al. 2004) has written some definitive text on building false color images (we will return to this later).
By AA Miller (CIERA/Northwestern & Adler)
2020 July 13
End of explanation
i_data = fits.getdata('./data/ztf_i_band.fits.gz')
# complete
Explanation: If you haven't already, please download the images that will be necessary for this exercise. Download the entire data directory and keep it in the same folder as this notebook.
Some quick background: these images are taken from the Zwicky Transient Facility.* ZTF has a 47 $\mathrm{deg}^2$ camera, with $\sim$1 arcsec pixels and $\sim$2 arcsec seeing. It has 3 filters (g, r, i) and has mapped most of the Northern hemisphere. Today we are working with ZTF stacks; roughly 16 individual images have gone into each of these stacks.
* Yes, I am biased as a member of ZTF. However, I chose these images because they aren't all that nice. SDSS, DES, HSC, PS1, all would have been suitable replacements, typically with better seeing and as a result nicer results.
Problem 1) Luminance
We begin by examining how we want to represent the intensity of the light as detected by the ZTF CCD.
Problem 1a
Load the data from data\ztf_i_band.fits.gz and display a 2D "heatmap" using imshow().
Note - CCDs are linear detectors, so use a linear scaling (the default for imshow()) of the data in order to "see" what the detector sees.
End of explanation
fig, ax = plt.subplots( # complete
# complete
Explanation: The first thing that jumps out of this image is all the white pixels. They are $\equiv$ nan because they have been flagged in the ZTF image processing pipeline as unreliable/no good.
Note - from an aesthetic standpoint, you can make these disappear by inputting the following:
np.where(np.isnan(i_data), np.nanmedian(i_data), i_data)
to imshow(), which replaces the nan values with the median value from the image. While this is aesthetically more pleasing, it is also misleading. Most of the masked pixels are saturated.
We can quickly see the problem with a linear scaling. The dynamic range is too large to reveal any significant structure in the data. We can see the brightest stars, but that is it.
We can adjust the bounds on the plotted distribution to better highlight the salient features in the data.
Problem 1b
Make a histogram of the counts in the image. If necessary, adjust the default settings to get a better sense of the distribution.
Hint - be sure to input a 1D array, i_data is 2D.
End of explanation
m =
M =
fig, ax = plt.subplots(# complete
# complete
Explanation: We can see a nearly logarithmic distribution, with nearly all the pixels near the "noise floor" of 175 or so, and a large tail to larger values.
Based on this histogram, we will clip the range we show to create better contrast in the visualization.
Problem 1c
Replot the i-band image while limiting the range that is displayed to extend from $m$ the minimum pixel value to $M$ the maximum pixel value. While it is possible to do this directly in imshow(), use numpy to limit the range of what is plotted (the utility of this will become clear later).
End of explanation
fig, ax = plt.subplots(# complete
# complete
Explanation: Between these two examples we have seen the challenge of linear scales:
(i) when featuring the full dynamic range they only feature "highlights" associated with the brightest pixels,
(ii) when the plotting range is limited then information about the brightest stars and galaxy cores is lost.
Problem 1d
Plot the i-band data using a log stretch. How does this compare to the linear stretch?
Hint – think about the salient features of the image.
End of explanation
fig, ax = plt.subplots(# complete
# complete
Explanation: Once again the brightest pixels dominate the range that is plotted, however, fainter structures are far more clear than with the linear stretch.
It is also worth noting that the eye's response to intensity is roughly logarithmic*. Therefore, plotting the data with a logarithmic stretch "mimics" what we actually see when looking up at the night sky.
* This is the origin of the "magnitude" system, which is one of the worst units conventions in all of science.
As before, we can do better by limiting the range that is displayed after the conversion to a log stretch.
Problem 1e
Re-plot the i-band data with a log stretch and a cap on the minimum and maximum values to better highlight the "shadows" or faint structure within the data.
End of explanation
fig, ax = plt.subplots(# complete
# complete
Explanation: With the log scaling, we finally see the beautiful barred spiral galaxy to the northwest of the ring nebula. However, as was the case with the linear scaling, in order to see the spiral arms in the galaxy all the structure in the ring nebula gets blown out.
(There are also issues with log scaling when making false color images - we will come back to this.)
It is now generally accepted that the best scaling for CCD images is the inverse hyperbolic sine. This non-linear transformation $$\operatorname{arsinh} x = \ln (x + \sqrt{x^2 + 1})$$ has the really nice property that for $|x| \ll 1$ $\operatorname{arsinh} x \approx x$ and for $|x| \gg 1$ $\operatorname{arsinh} x \approx \ln (2x)$. Therefore, with a single transformation we can get the best properties of each of the stretches considered above.
Problem 1f
Plot the i-band data with an inverse hyperbolic sine scaling.
End of explanation
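For illustration only (this block is not part of the original notebook): with NumPy the stretch is a single call to np.arcsinh applied element-wise. Here i_data is the i-band array loaded earlier, while the figure size, origin and colormap are assumptions made for this sketch:
import numpy as np
import matplotlib.pyplot as plt

stretched = np.arcsinh(i_data)                 # element-wise inverse hyperbolic sine
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(stretched, origin='lower', cmap='Greys_r')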
fig, ax = plt.subplots(# complete
# complete
Explanation: Off the shelf that looks pretty good. However we are missing some of the fainter structures. We can "re-center" the data so that the low-level shadows are closer to the $\sim$linear portion of the $\operatorname{arsinh}$ response curve.
Problem 1g
Subtract the median from the data and re-plot using an inverse hyperbolic sine scaling.
Hint – do not forget about the nan values in the array.
End of explanation
g_data = fits.getdata('./data/ztf_g_band.fits.gz')
r_data = fits.getdata('./data/ztf_r_band.fits.gz')
Explanation: And just like that we have the best looking image (by far!) with a relatively minimal amount of effort. In this representation the ring nebula is a little saturated/washed out. We can correct that with a softening parameter (more below).
From Problem 1 it is clear: the inverse hyperbolic sine is the best transform to use when displaying astronomical images.
Problem 2) Color
As we have covered previously and extensively, color is not a particularly useful tool for conveying information when visualizing data.
And yet...
When it comes to 2D CCD images, simply put, color wins.*
<br>
* With the important and glaring exception that the standard RGB format is not color-blind friendly, and therefore color images are not always inclusive.
For example, this image with intensity information shows some stars and galaxies (and not much more).
<img style="display: block; margin-left: auto; margin-right: auto" src="images/false_color_intensity.png" align="middle">
<div align="right"> <font size="-3">(credit: Zolt Levay/STScI) </font></div>
Whereas the RGB version immediately reveals a high-$z$ quasar. (So this isn't just about "pretty pictures"; we are using color to better communicate what we are looking at.)
<img style="display: block; margin-left: auto; margin-right: auto" src="images/false_color_RGB.png" align="middle">
<div align="right"> <font size="-3">(credit: Zolt Levay/STScI) </font></div>
Suppose you have data in 3 filters (e.g., g, r, i) and you wish to make a false color image. DSFP "best-practices" would suggest optimizing the luminance display for each filter and then showing them side by side (this is the colorblind-friendly solution):
<img style="display: block; margin-left: auto; margin-right: auto" src="images/false_color_1.png" align="middle">
<div align="right"> <font size="-3">(credit: Zolt Levay/STScI) </font></div>
You could even play with the color scheme slightly to be more subjective (while remaining colorblind-friendly):
<img style="display: block; margin-left: auto; margin-right: auto" src="images/false_color_2.png" align="middle">
<div align="right"> <font size="-3">(credit: Zolt Levay/STScI) </font></div>
But in combination it becomes possible to truly pick out structure (such as H$\alpha$ emission, traced by red in this instance):
<img style="display: block; margin-left: auto; margin-right: auto" src="images/false_color_3.png" width="550" align="middle">
<div align="right"> <font size="-3">(credit: Zolt Levay/STScI) </font></div>
Following the suggestions laid out in Lupton et al. (2004) we will now make a false color visualization of the ring nebula using the ZTF g,r,i reference images.
Problem 2a
Load the data from the g- and r-band images, and store it in variables named g_data and r_data respectively.
End of explanation
rgb = # complete
fig, ax = plt.subplots( # complete
# complete
Explanation: To display an RGB color image with imshow() we need to create an MxNx3 array, where MxN is the size of the image, and the last axis corresponds to the red, green, and blue channels respectively.
Building this 3D array is simple using numpy.dstack().
Problem 2b
Build an RGB array of the ZTF images and display the data using imshow().
End of explanation
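As a quick aside (not part of the original notebook), numpy.dstack stacks 2D arrays along a third axis, which is exactly the MxNx3 layout that imshow() expects; the three tiny arrays below are placeholders:
import numpy as np

r = np.ones((2, 2))
g = np.zeros((2, 2))
b = np.zeros((2, 2))
rgb = np.dstack((r, g, b))
print(rgb.shape)                               # (2, 2, 3): every pixel is pure red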
def scaled_intensity( # complete
'''
Linearly scale input RGB arrays from 0 ---> 1
Parameters
----------
R_array : array-like of shape (M columns, N rows)
R channel data array
G_array : array-like of shape (M columns, N rows)
G channel data array
B_array : array-like of shape (M columns, N rows)
B channel data array
Returns
-------
rgb_scaled : array-like of shape (M columns, N rows, 3)
RGB array for display with matplotlib
'''
r_scaled = # complete
g_scaled = # complete
b_scaled = # complete
rgb_scaled = # complete
return # complete
fig, ax = plt.subplots( # complete
# complete
Explanation: That looks terrible!
As the warning states, RGB data needs to be scaled from 0 to 1 (or 0 to 255 for integer inputs) in order to properly display the relative intensities in each channel.
Problem 2c
Build a helper function scaled_intensity() that takes as input 3 MxN arrays, and returns an MxNx3 array in which the input arrays have all been rescaled between 0 and 1.
Use your function to display a false color image of the ring nebula.
Hint –– here (and below) it may be helpful to write a solution in multiple cells.
End of explanation
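Purely as an illustration of the 0-to-1 rescaling the warning is asking for (not the notebook's solution), a per-channel min-max normalization could look like this; np.nanmin and np.nanmax are used because of the flagged nan pixels:
import numpy as np

def minmax01(arr):
    # linearly scale an array so its finite values run from 0 to 1
    lo, hi = np.nanmin(arr), np.nanmax(arr)
    return (arr - lo) / (hi - lo)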
def scaled_log_intensity( # complete
'''
Log scale input RGB arrays from 0 ---> 1
Parameters
----------
R_array : array-like of shape (M columns, N rows)
R channel data array
G_array : array-like of shape (M columns, N rows)
G channel data array
B_array : array-like of shape (M columns, N rows)
B channel data array
Returns
-------
rgb_scaled : array-like of shape (M columns, N rows, 3)
RGB array for display with matplotlib
'''
# complete
# complete
# complete
fig, ax = plt.subplots( # complete
# complete
# complete
Explanation: As before, there are few pixels bright enough over the full scale of the image to see any real structure in the data (though you can make out hints of the ring nebula in this otherwise awful image).
There are two ways to address this and we will do both: (i) use a different scaling, such as $\log$, and (ii) limit the full plotting range.
Problem 2d
Write a new (but very similar to 2c) function called scaled_log_intensity() that returns the log of the flux (to better highlight faint features), scaled from 0 to 1 (to allow output as an RGB image). Plot the newly scaled rgb array.
End of explanation
def scaled_log_intensity( # complete
'''
Log scale input RGB arrays from 0 ---> 1
Parameters
----------
R_array : array-like of shape (M columns, N rows)
R channel data array
G_array : array-like of shape (M columns, N rows)
G channel data array
B_array : array-like of shape (M columns, N rows)
B channel data array
# complete : # complete
# complete
# complete : # complete
# complete
Returns
-------
rgb_scaled : array-like of shape (M columns, N rows, 3)
RGB array for display with matplotlib
'''
# complete
# complete
# complete
fig, ax = plt.subplots(# complete
# complete
# complete
Explanation: Now the ring nebula stands out (AND the color is telling us something: the strong green color reflects strong emission from $\mathrm{H}\alpha$ in the outer layers of the nebula, while the blue center is the result of [O III] emission); however, there is very little information being transmitted by the rest of the image. We can improve this by adding limits to the plotting range.
Problem 2e
Modify scaled_log_intensity() to "clip" each array at a maximum and minimum value. Replot your RGB array after clipping.
Hint – in principle this means six new variables should be added to the function (r_min, r_max, g_min, g_max, b_min, b_max). For now, use the same lower and upper bound for all 3 filters for simplicity. np.percentile() is a decent way to do this uniformly across all 3 filters.
End of explanation
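As a small illustration of the clipping suggested in the hint (not part of the original notebook), np.nanpercentile provides one lower and one upper bound to apply to all three filters; the percentile values below are placeholders:
import numpy as np

lo, hi = np.nanpercentile(i_data, [5, 99.5])   # placeholder percentiles
i_clipped = np.clip(i_data, lo, hi)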
def tmp_lupton_1( # complete
'''
normalize intensity and scale input RGB arrays from 0 ---> 1
Parameters
----------
R_array : array-like of shape (M columns, N rows)
R channel data array
G_array : array-like of shape (M columns, N rows)
G channel data array
B_array : array-like of shape (M columns, N rows)
B channel data array
Returns
-------
rgb_scaled : array-like of shape (M columns, N rows, 3)
intensity scaled RGB array for display with matplotlib
'''
# complete
# complete
# complete
Explanation: We have a false color image! The problem, as highlighted in Lupton et al. 2004, is that the "standard" log scaling of each channel leads to the cores of each of the individual stars appearing as a whitish color. While we have a false color image, we have actually lost all the color information.
Problem 3) Lupton et al. (2004)
As the Sloan Digital Sky Survey began producing multi-filter wide-field images at an "industrial" scale, it was quickly realized that superior solutions were needed for visualizing the 5 filter images (in particular, the use of "log" scaling dramatically reduced the information content).
Lupton et al. proposed a solution that has now been widely adopted. Half of the solution is to use $\operatorname{arsinh}$, and the other half is to be clever about defining the relative color in each pixel (perhaps the most useful insight from Lupton et al.).
We start by defining the intensity in each pixel $I \equiv (r + g + b)/3$, where the $r$, $g$, and $b$ values are the value in each pixel of the R(ed), G(reen), and B(lue) images, respectively. Then the output false color image can be defined by:
$$R = r f(I)/I, \ G = g f(I)/I, \ B = b f(I)/I,$$
where $f(I)$ is some scaling function, such as $\log$ or $\operatorname{arsinh}$.
As a final step the intensity in each filter is also clipped (which allows the preservation of color information even in pixels that have an intensity = 1; this is a hugely important insight). If
$$\mathrm{max_{RGB}} \equiv \mathrm{max(R,G,B)} > 1$$
then we need to set $R \equiv R/\mathrm{max_{RGB}}, G \equiv G/\mathrm{max_{RGB}},$ and $B \equiv B/\mathrm{max_{RGB}}$, similarly if $\mathrm{min_{RGB}} \leq 0$ then set $R \equiv G \equiv B \equiv 0$.
Finally, Lupton et al. point out that a useful parameterization of $f(I)$ is: $$f(I) = \operatorname{arsinh}(\alpha Q [I-m])/Q,$$ where $Q$ and $\alpha$ are both "softening" parameters that enable the user to highlight specific regions within the image with a linear stretch (and $m$ is the minimum for the plotting range as described earlier).
In summary, in order to obtain the "optimal" RGB scaling as described in Lupton et al. we must
(i) create a total intensity map for each pixel $I$,
(ii) apply an $\operatorname{arsinh}$ transform to all the data, and
(iii) rescale the intensity to be clipped at unity (but in such a way that preserves color information in the clipped pixels) and zero.
(In true DSFP fashion,) We will build this functionality one step at a time, but ultimately we want a single function that can take 3 input arrays and output a nice RGB array for plotting.
Problem 3a
Create a function tmp_lupton_1 that takes as input 3 MxN arrays, computes the intensity array $I$, and returns an MxNx3 array in which the input arrays have been divided by I AND rescaled between 0 and 1. (we will add $\operatorname{arsinh}$ in a later step)
Hint – you can plot the results, but they won't be very compelling.
End of explanation
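To make the recipe above concrete, here is a minimal sketch of the scaling it describes. This is an illustration under stated assumptions, not the notebook's solution: the function name, the default Q and alpha values, and the channel-wise clipping of negative pixels are all choices made here for brevity:
import numpy as np

def lupton_sketch(r, g, b, m=0.0, Q=8.0, alpha=0.02):
    # hypothetical helper: intensity I, arsinh softening f(I), then clipping
    I = (r + g + b) / 3.0
    f_I = np.arcsinh(alpha * Q * (I - m)) / Q                  # f(I) = arsinh(alpha Q [I - m]) / Q
    with np.errstate(divide='ignore', invalid='ignore'):
        rgb = np.dstack((r * f_I / I, g * f_I / I, b * f_I / I))
    rgb[~np.isfinite(rgb)] = 0.0                               # pixels with I == 0 (or nan) become black
    max_rgb = rgb.max(axis=-1, keepdims=True)
    rgb = rgb / np.maximum(max_rgb, 1.0)                       # rescale only where max(R, G, B) > 1
    return np.clip(rgb, 0.0, 1.0)                              # clip any remaining negative values to 0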
def tmp_lupton_2( # complete
'''
arsinh intensities scaled to RGB arrays from 0 ---> 1
Parameters
----------
R_array : array-like of shape (M columns, N rows)
R channel data array
G_array : array-like of shape (M columns, N rows)
G channel data array
B_array : array-like of shape (M columns, N rows)
B channel data array
# complete : # complete
# complete
# complete : # complete
# complete
# complete : # complete
# complete
Returns
-------
rgb_scaled : array-like of shape (M columns, N rows, 3)
intensity scaled RGB array for display with matplotlib
'''
# complete
# complete
# complete
Explanation: Problem 3b
Create a new function tmp_lupton_2 that does everything in tmp_lupton_1 but also multiplies the individual arrays by $f(I)$ as defined above. There will be 3 new parameters in the model now: $m$, $Q$, and $\alpha$.
End of explanation
fig, ax = plt.subplots( # complete
# complete
Explanation: Problem 3c
Use tmp_lupton_2 to rescale the ZTF data and plot the results.
Hint – if the results appear a little underwhelming, do not forget to subtract the median value from the input arrays.
End of explanation
def lupton( # complete
'''
arsinh intensities scaled to RGB arrays from 0 ---> 1
Parameters
----------
R_array : array-like of shape (M columns, N rows)
R channel data array
G_array : array-like of shape (M columns, N rows)
G channel data array
B_array : array-like of shape (M columns, N rows)
B channel data array
# complete : # complete
# complete
# complete : # complete
# complete
# complete : # complete
# complete
Returns
-------
rgb_scaled : array-like of shape (M columns, N rows, 3)
intensity scaled RGB array for display with matplotlib
'''
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
fig, ax = plt.subplots( # complete
# complete
Explanation: You should already see a huge improvement relative to the $\log$ scaled false color image that was produced in Problem 2. Further improvements can still be made, however. For instance, the background is a little "muddy" in the sense that everything looks a little brown instead of black.
Problem 3d
Create a new function lupton that does everything in tmp_lupton_2 but also clips and normalizes data with $I \equiv 0$ or $\mathrm{max_{RGB}} > 1$.
End of explanation
fig, ax = plt.subplots( # complete
# complete
Explanation: That looks worse than what we had before!
However - we have not attempted to tune the visualization parameters at all (and the default values in the solutions, namely $Q = 1$ and $\alpha = 1$, are atrocious).
Problem 3e
Re-plot the ZTF data after adjusting the tuning parameters $m, Q, \mathrm{and} \alpha$. Expect some trial and error.
Hint –– a footnote in (Lupton et al. 2004) suggests setting $Q \rightarrow 0$ to tune $\alpha$ in such a way to highlight faint structures, then adjusting $Q$ to highlight the bright regions in the image.
Hint 2 –– it is very likely that your initial "solution" will appear overly "green." This is because the color-balance is off. One secret about creating such images is that you are allowed to play with the relative intensities of the R, G, and B channels. This is a False color image, after all. If you want something that appears less "green" try multiplying the B or R channel (or both) by some constant $> 1$ to get a color balance you prefer. The optimal solution maximizes the salience of your final image.
End of explanation
from astropy.visualization import make_lupton_rgb
rgb = make_lupton_rgb( # complete
fig, ax = plt.subplots( # complete
Explanation: Wow!
This image of the Ring Nebula and the nearby spiral is really nice. The background is black. The stars are not saturated (in the sense that it is extremely easy to tell the difference between the red stars and blue stars). Structure in the ring nebula can be seen (though the outermost halo is missing, see e.g., 3c). And the nearby barred-spiral looks especially good. The nucleus is somewhat orange, suggesting an older stellar population, while the arms are clearly blue (and therefore forming stars at a higher rate than the nucleus).
This is a very very nice depiction of the ring nebula.
Problem 4) Astropy
As is often the case, the good people at astropy have already created a helper function that essentially does everything that we just developed in this notebook. In this case, with the astropy.visualization library there is a function called make_lupton_rgb that takes as input 3 MxN arrays, and returns a scaled MxNx3 rgb array for display purposes.
Problem 4a
Load make_lupton_rgb and make a plot of the results. How does it compare to your own solution?
Hint –– the make_lupton_rgb function uses a parameter called stretch instead of $\alpha$. As you tune the output note that $\mathrm{stretch} \approx 1/\alpha$.
End of explanation
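For reference (not a cell from the original notebook), a typical call is sketched below; the gri-to-RGB channel assignment, the median subtraction and the stretch value are assumptions to be tuned:
import numpy as np
import matplotlib.pyplot as plt
from astropy.visualization import make_lupton_rgb

r_chan = np.nan_to_num(i_data - np.nanmedian(i_data))   # i band -> red (assumed mapping)
g_chan = np.nan_to_num(r_data - np.nanmedian(r_data))   # r band -> green
b_chan = np.nan_to_num(g_data - np.nanmedian(g_data))   # g band -> blue
rgb = make_lupton_rgb(r_chan, g_chan, b_chan, minimum=0, stretch=50, Q=8)
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(rgb, origin='lower')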
from astropy.utils.data import get_pkg_data_filename
# Read in the three images downloaded from here:
g_name = get_pkg_data_filename('visualization/reprojected_sdss_g.fits.bz2')
r_name = get_pkg_data_filename('visualization/reprojected_sdss_r.fits.bz2')
i_name = get_pkg_data_filename('visualization/reprojected_sdss_i.fits.bz2')
g_dat = fits.open(g_name)[0].data
r_dat = fits.open(r_name)[0].data
i_dat = fits.open(i_name)[0].data
# complete
# complete
# complete
Explanation: This is an extremely similar solution to what we derived.
The stars all more or less look the same, though the outermost halo of the ring nebula is now more clear (at the cost of saturating the inner $\mathrm{H}\alpha$ ring).
Problem 4b
This is optional. In the cell below, images from SDSS are loaded. Use make_lupton_rgb to create a false color RGB image. The star formation rates in the three main galaxies in the image are actually somewhat different; see if you can highlight that.
End of explanation |
6,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi, modified by D. Koehn © 2019
Step1: Short Jupyter and Python tutorial
This is the second lesson of our course in "Engineering Computations." In the first lesson, Interacting with Python, we used IPython, the interactive Python shell. It is really great to type single-line Python expressions and get the outputs, interactively. Yet, believe it or not, there are greater things!
In this lesson, you will continue playing with data using Python, but you will do so in a Jupyter notebook. This very lesson is written in a Jupyter notebook. Ready? You will love it.
What is Jupyter?
Jupyter is a set of open-source tools for interactive and exploratory computing. You work right on your browser, which becomes the user interface through which Jupyter gives you a file explorer (the dashboard) and a document format
Step2: You can also call external programs like Matlab or compiled Fortran and C code. Essentially, you could easily write your whole Master thesis in a Jupyter notebook and ensure that your work is documented and also reproducible.
Interactive computing in the notebook
Look at the icons on the menu of Jupyter (see the screenshots above). The first icon on the left (an old floppy disk) is for saving your notebook. You can add a new cell with the big + button. Then you have the cut, copy, and paste buttons. The arrows are to move your current cell up or down. Then you have a button to "run" a code cell (execute the code), the square icon means "stop" and the swirly arrow is to "restart" your notebook's kernel (if the computation is stuck, for example). Next to that, you have the cell-type selector
Step3: Edit mode and Command mode
Once you click on a notebook cell to select it, you may interact with it in two ways, which are called modes. Later on, when you are reviewing this material again, read more about this in Reference 1.
Edit mode
Step4: Easy peasy!! You just wrote your first program and you learned how to use the print() function. Yes, print() is a function
Step5: Let's see an interesting case
Step6: What happened? Isn't $9^{1/2} = 3$? (Raising to the power $1/2$ is the same as taking the square root.) Did Python get this wrong?
Compare with this
Step7: Yes! The order of operations matters!
If you don't remember what we are talking about, review the Arithmetics/Order of operations. A frequent situation that exposes this is the following
Step8: Let's do some arithmetic operations with our new variables
Step9: String variables
In addition to name and value, Python variables have a type
Step10: What if you try to "add" two strings?
Step11: The operation above is called concatenation
Step12: Error! Why? Let's inspect what Python has to say and explore what is happening.
Python is a dynamic language, which means that you don't need to specify a type to invoke an existing object. The humorous nickname for this is "duck typing"
Step13: More assignments
What if you want to assign to a new variable the result of an operation that involves other variables? Well, you totally can!
Step14: Notice what we did above
Step15: Some more advanced computations
In order to compute more advanced functions, we have to import the NumPy library
Step16: So, let's try some functions, like sine and cosine
Step17: Notice, that the underlying NumPy functions use angles in radians instead of degrees as input parameters. However, if you prefer degrees, NumPy can also solve this problem
Step18: What about complex numbers? With the imaginary number
$j^2 = -1$ you can define complex numbers
Step19: We can also add and multiply complex numbers
Step20: Real and imaginary parts can be extracted by the NumPy functions
Step21: Vectors
We can define vectors in many different ways, e.g. as a NumPy array
Step22: Matlab users should be cautioned, because the index of the first vector element is not 1 but 0, so you access the first element by
Step23: The last element is accessible by the index -1
Step24: What element is accessed by the index -2
As in Matlab you can also access different ranges of vector elements
Step25: Compared to Matlab, the last element is not inclusive.
Computation with vectors is quite easy
Step26: Due to the equal length of vectors a and b, we can simply add them
Step27: or apply elementwise multiplication
Step28: You can transpose the vector
Step29: and calculate the scalar product of two vectors by vector-vector multiplication @
Step30: Vectors with equidistant elements can be created by
Step31: Note that the last element is not inclusive! Another significant difference compared to Matlab, so keep this in mind, when creating vectors
Step32: You can replace individual elements in a vector or delete them. Do not forget that the first vector element has index 0 and the last element is not inclusive
Step33: You can concatenate vectors to new vectors
Step34: or new matrices
Step35: Matrices
Similar to vectors we can access matrix elements. Again, I emphasize that the first index is 0 and the last element is not inclusive
Step36: Similar to Matlab you can access matrix rows via
Step37: Special matrix operations are for example the diagonal
Step38: or the inverse
Step39: For-loops and if-statements
Two important aspects of flow control in a Python code are For-loops and If-statements. Again, I emphasize that the last element in the loop is not inclusive
Step40: Before discussing If-statements, we introduce the logical operators <=, ==, >=, or, and
Step41: Now, we can control the flow inside the FOR-loop with if-statements
Step42: Functions
It is good coding practice to avoid repeating ourselves
Step43: Time to write our own Python function. Do you know the date of Easter Sunday this year? That's quite important, because on the following Monday there will be no TEW2 lecture. ;-)
Let's write a Python function to calculate the date using the
Easter algorithm by Carl Friedrich Gauss
Step44: Time to Plot
You will love the Python library Matplotlib! You'll learn here about its module pyplot, which makes line plots.
We need some data to plot. Let's define a NumPy array, compute derived data using its square, cube and square root (element-wise), and plot these values with the original array in the x-axis.
Step45: To plot the resulting arrays as a function of the original one (xarray) in the x-axis, we need to import the module pyplot from Matplotlib.
Step46: We'll use the pyplot.plot() function, specifying the line color ('k' for black) and line style ('-', '--' and '
Step47: To illustrate other features, we will plot the same data, but varying the colors instead of the line style. We'll also use LaTeX syntax to write formulas in the labels. If you want to know more about LaTeX syntax, there is a quick guide to LaTeX available online.
Adding a semicolon (';') to the last line in the plotting code block prevents that ugly output, like <matplotlib.legend.Legend at 0x7f8c83cc7898>. Try it. | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi, modified by D. Koehn © 2019
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('1vHWlRio8n0')
Explanation: Short Jupyter and Python tutorial
This is the second lesson of our course in "Engineering Computations." In the first lesson, Interacting with Python, we used IPython, the interactive Python shell. It is really great to type single-line Python expressions and get the outputs, interactively. Yet, believe it or not, there are greater things!
In this lesson, you will continue playing with data using Python, but you will do so in a Jupyter notebook. This very lesson is written in a Jupyter notebook. Ready? You will love it.
What is Jupyter?
Jupyter is a set of open-source tools for interactive and exploratory computing. You work right on your browser, which becomes the user interface through which Jupyter gives you a file explorer (the dashboard) and a document format: the notebook.
A Jupyter notebook can contain: input and output of code, formatted text, images, videos, pretty math equations, and much more. The computer code is executable, which means that you can run the bits of code, right in the document, and get the output of that code displayed for you. This interactive way of computing, mixed with the multi-media narrative, allows you to tell a story (even to yourself) with extra powers!
Working in Jupyter
Several things will seem counter-intuitive to you at first. For example, most people are used to launching apps in their computers by clicking some icon: this is the first thing to "unlearn." Jupyter is launched from the command line (like when you launched IPython). Next, we have two types of content—code and markdown—that handle a bit differently. The fact that your browser is an interface to a compute engine (called "kernel") leads to some extra housekeeping (like shutting down the kernel). But you'll get used to it pretty quick!
Start Jupyter
The standard way to start Jupyter is to type the following in the command-line interface:
jupyter notebook
Hit enter and tadah!!
After a little set up time, your default browser will open with the Jupyter app. It should look like in the screenshot below, but you may see a list of files and folders, depending on the location of your computer where you launched it.
Note:
Don't close the terminal window where you launched Jupyter (while you're still working on Jupyter). If you need to do other tasks on the command line, open a new terminal window.
<img src="images/jupyter-main.png" style="width: 800px;"/>
Screenshot of the Jupyter dashboard, open in the browser.
To start a new Jupyter notebook, click on the top-right, where it says New, and select Python 3. Check out the screenshot below.
<img src="images/create_notebook.png" style="width: 800px;"/>
Screenshot showing how to create a new notebook.
A new tab will appear in your browser and you will see an empty notebook, with a single input line, waiting for you to enter some code. See the next screenshot.
<img src="images/new_notebook.png" style="width: 800px;"/>
Screenshot showing an empty new notebook.
The notebook opens by default with a single empty code cell. Try to write some Python code there and execute it by hitting [shift] + [enter].
Notebook cells
The Jupyter notebook uses cells: blocks that divide chunks of text and code. Any text content is entered in a Markdown cell: it contains text that you can format using simple markers to get headings, bold, italic, bullet points, hyperlinks, and more.
Markdown is easy to learn, check out the syntax in the "Daring Fireball" webpage (by John Gruber). A few tips:
to create a title, use a hash to start the line: # Title
to create the next heading, use two hashes (and so on): ## Heading
to italicize a word or phrase, enclose it in asterisks (or underdashes): *italic* or _italic_
to make it bold, enclose it with two asterisks: **bolded**
to make a hyperlink, use square and round brackets: [hyperlinked text](url)
Computable content is entered in code cells. We will be using the IPython kernel ("kernel" is the name used for the computing engine), but you should know that Jupyter can be used with many different computing languages. It's amazing.
A code cell will show you an input mark, like this:
In [ ]:
Once you add some code and execute it, Jupyter will add a number ID to the input cell, and produce an output marked like this:
Out [1]:
A bit of history:
Markdown was co-created by the legendary but tragic Aaron Swartz. The biographical documentary about him is called "The Internet's Own Boy," and you can view it in YouTube or Netflix. Recommended!
Other stuff you can incorporate into your notebook
Beside plain text, hyperlinks and code it is also possible to incorporate LaTeX code, images and YouTube movies in notebook cells.
To type equations, you can also use LaTeX, for example the acoustic wave equation is
$\frac{1}{v_p^2}\frac{\partial^2 P}{\partial t^2} = \nabla^2 P$
with the pressure $P$, P-wave velocity $v_p$, time $t$ and the Laplace operator $\nabla^2$
Images can be incorporated in a notebook cell by
<img src="images/image_name" style="width: 800px;"/>
Let's take a look into the future of the TEW2 lecture:
<img src="images/TEW2_overview.jpg" style="width: 800px;"/>
You can also embed YouTube videos in your Jupyter notebook, like this time-lapse movie of a starry night:
End of explanation
print("Hello World!")
1 + 1
Explanation: You can also call external programs like Matlab or compiled Fortran and C code. Essentially, you could easily write your whole Master thesis in a Jupyter notebook and ensure that your work is documented and also reproducible.
Interactive computing in the notebook
Look at the icons on the menu of Jupyter (see the screenshots above). The first icon on the left (an old floppy disk) is for saving your notebook. You can add a new cell with the big + button. Then you have the cut, copy, and paste buttons. The arrows are to move your current cell up or down. Then you have a button to "run" a code cell (execute the code), the square icon means "stop" and the swirly arrow is to "restart" your notebook's kernel (if the computation is stuck, for example). Next to that, you have the cell-type selector: Code or Markdown (or others that you can ignore for now).
You can test-drive a code cell by writing some arithmetic operations, for example the Python operators which are:
+ - * / ** % //
There's addition, subtraction, multiplication and division. The last three operators are exponent (raise to the power of), modulo (divide and return remainder) and floor division.
Typing [shift] + [enter] will execute the cell and give you the output in a new line, labeled Out[1] (the numbering increases each time you execute a cell).
Try it!
Add a cell with the plus button, enter some operations, and [shift] + [enter] to execute.
Try out some of the things, like "Hello World!" or basic calculations:
End of explanation
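Purely as an extra illustration (not a cell from the original lesson), the three less familiar operators mentioned above behave like this:
print(2**3)     # exponent: 2 raised to the power 3 -> 8
print(7 % 2)    # modulo: remainder of 7 divided by 2 -> 1
print(7 // 2)   # floor division: 7 divided by 2, rounded down -> 3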
print("Hello world!!")
Explanation: Edit mode and Command mode
Once you click on a notebook cell to select it, you may interact with it in two ways, which are called modes. Later on, when you are reviewing this material again, read more about this in Reference 1.
Edit mode:
We enter edit mode by pressing Enter or double-clicking on the cell.
We know we are in this mode when we see a green cell border and a prompt in the cell area.
When we are in edit mode, we can type into the cell, like a normal text editor.
Command mode:
We enter in command mode by pressing Esc or clicking outside the cell area.
We know we are in this mode when we see a grey cell border with a left blue margin.
In this mode, certain keys are mapped to shortcuts to help with common actions.
You can find a list of the shortcuts by selecting Help->Keyboard Shortcuts from the notebook menu bar. You may want to leave this for later, and come back to it, but it becomes more helpful the more you use Jupyter.
How to shut down the kernel and exit
Closing the browser tab where you've been working on a notebook does not immediately "shut down" the compute kernel. So you sometimes need to do a little housekeeping.
Once you close a notebook, you will see in the main Jupyter app that your notebook file has a green book symbol next to it. You should click in the box at the left of that symbol, and then click where it says Shutdown. You don't need to do this all the time, but if you have a lot of notebooks running, they will use resources in your machine.
Similarly, Jupyter is still running even after you close the tab that has the Jupyter dashboard open. To exit the Jupyter app, you should go to the terminal that you used to open Jupyter, and type [Ctrl] + [c] to exit.
Nbviewer
Nbviewer is a free web service that allows you to share static versions of hosted notebook files, as if they were a web page. If a notebook file is publicly available on the web, you can view it by entering its URL in the nbviewer web page, and hitting the Go! button. The notebook will be rendered as a static page: visitors can read everything, but they cannot interact with the code.
What is Python?
Python is now 26 years old. Its creator, Guido van Rossum, named it after the British comedy "Monty Python's Flying Circus." His goals for the language were that it be "an easy and intuitive language just as powerful as major competitors," producing computer code "that is as understandable as plain English."
It is a general-purpose language, which means that you can use it for anything: organizing data, scraping the web, creating websites, analyzing sounds, creating games, and of course engineering computations.
Python is an interpreted language. This means that you can write Python commands and the computer can execute those instructions directly. Other programming languages—like C, C++ and Fortran—require a previous compilation step: translating the commands into machine language.
A neat ability of Python is to be used interactively. Fernando Perez famously created IPython as a side-project during his PhD.
Why Python?
Because it's fun! With Python, the more you learn, the more you want to learn.
You can find lots of resources online and, since Python is an open-source project, you'll also find a friendly community of people sharing their knowledge.
Python is known as a high-productivity language. As a programmer, you'll need less time to develop a solution with Python than with most languages.
This is important to always bring up whenever someone complains that "Python is slow."
Your time is more valuable than a machine's!
(See the Recommended Readings section at the end.)
And if we really need to speed up our program, we can re-write the slow parts in a compiled language afterwards.
Because Python plays well with other languages :–)
The top technology companies use Python: Google, Facebook, Dropbox, Wikipedia, Yahoo!, YouTube… And this year, Python took the No. 1 spot in the interactive list of The 2017 Top Programming Languages, by IEEE Spectrum (IEEE is the world's largest technical professional society).
Python is a versatile language, you can analyze data, build websites (e.g., Instagram, Mozilla, Pinterest), make art or music, etc. Because it is a versatile language, employers love Python: if you know Python they will want to hire you. —Jessica McKellar, ex Director of the Python Software Foundation, in a 2014 tutorial.
Your first program
In every programming class ever, your first program consists of printing a "Hello" message. In Python, you use the print() function, with your message inside quotation marks.
End of explanation
2 + 2
1.25 + 3.65
5 - 3
2 * 4
7 / 2
2**3
Explanation: Easy peasy!! You just wrote your first program and you learned how to use the print() function. Yes, print() is a function: we pass the argument we want the function to act on, inside the parentheses. In the case above, we passed a string, which is a series of characters between quotation marks. Don't worry, we will come back to what strings are later on in this lesson.
Key concept: function
A function is a compact collection of code that executes some action on its arguments. Every Python function has a name, used to call it, and takes its arguments inside round brackets. Some arguments may be optional (which means they have a default value defined inside the function), others are required. For example, the print() function has one required argument: the string of characters it should print out for you.
Python comes with many built-in functions, but you can also build your own. Chunking blocks of code into functions is one of the best strategies to deal with complex programs. It makes you more efficient, because you can reuse the code that you wrote into a function. Modularity and reuse are every programmer's friend.
Python as a calculator
Try any arithmetic operation in IPython. The symbols are what you would expect, except for the "raise-to-the-power-of" operator, which you obtain with two asterisks: **. Try all of these:
python
+ - * / ** % //
The % symbol is the modulo operator (divide and return remainder), and the double-slash is floor division.
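For instance (a quick check you can run in a cell; not part of the original lesson):
7 % 2   # modulo: the remainder of 7 divided by 2, i.e. 1
7 // 2  # floor division: 3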
End of explanation
9**1/2
Explanation: Let's see an interesting case:
End of explanation
9**(1/2)
Explanation: What happened? Isn't $9^{1/2} = 3$? (Raising to the power $1/2$ is the same as taking the square root.) Did Python get this wrong?
Compare with this:
End of explanation
x = 3
y = 4.5
Explanation: Yes! The order of operations matters!
If you don't remember what we are talking about, review the Arithmetics/Order of operations. A frequent situation that exposes this is the following:
Variables and their type
Variables consist of two parts: a name and a value. When we want to give a variable its name and value, we use the equal sign: name = value. This is called an assignment. The name of the variable goes on the left and the value on the right.
The first thing to get used to is that the equal sign in an assignment has a different meaning than it has in Algebra! Think of it as an arrow pointing from name to value.
<img src="images/variables.png" style="width: 400px;"/>
We have many possibilities for variable names: they can be made up of upper and lowercase letters, underscores and digits… although digits cannot go on the front of the name. For example, valid variable names are:
python
x
x1
X_2
name_3
NameLastname
Keep in mind, there are reserved words that you can't use; they are the special Python keywords.
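If you are ever unsure whether a name is one of those reserved keywords, the standard-library keyword module can tell you (a small aside, not from the original lesson):
import keyword
keyword.iskeyword('lambda')   # True: 'lambda' is reserved and cannot be used as a variable name
keyword.iskeyword('name_3')   # False: 'name_3' is a valid name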
OK. Let's assign some values to variables and do some operations with them:
End of explanation
x + y
2**x
y - 3
Explanation: Let's do some arithmetic operations with our new variables:
End of explanation
z = 'this is a string'
w = '1'
Explanation: String variables
In addition to name and value, Python variables have a type: the type of the value it refers to. For example, an integer value has type int, and a real number has type float. A string is a variable consisting of a sequence of characters marked by two quotes, and it has type str.
End of explanation
z + w
Explanation: What if you try to "add" two strings?
End of explanation
x + w
Explanation: The operation above is called concatenation: chaining two strings together into one. Interesting, eh? But look at this:
End of explanation
type(x)
type(w)
type(y)
Explanation: Error! Why? Let's inspect what Python has to say and explore what is happening.
Python is a dynamic language, which means that you don't need to specify a type to invoke an existing object. The humorous nickname for this is "duck typing":
"If it looks like a duck, and quacks like a duck, then it's probably a duck."
In other words, a variable has a type, but we don't need to specify it. It will just behave like it's supposed to when we operate with it (it'll quack and walk like nature intended it to).
But sometimes you need to make sure you know the type of a variable. Thankfully, Python offers a function to find out the type of a variable: type().
End of explanation
sum_xy = x + y
diff_xy = x - y
print('The sum of x and y is:', sum_xy)
print('The difference between x and y is:', diff_xy)
Explanation: More assignments
What if you want to assign to a new variable the result of an operation that involves other variables? Well, you totally can!
End of explanation
type(sum_xy)
type(diff_xy)
Explanation: Notice what we did above: we used the print() function with a string message, followed by a variable, and Python printed a useful combination of the message and the variable value. This is a pro tip! You want to print for humans. Let's now check the type of the new variables we just created above:
End of explanation
import numpy
Explanation: Some more advanced computations
In order to compute more advanced functions, we have to import the NumPy library
End of explanation
numpy.cos(0.0)
numpy.sin(0.0)
Explanation: So, let's try some functions, like sine and cosine
End of explanation
numpy.sin(numpy.deg2rad(90))
Explanation: Notice that the underlying NumPy functions use angles in radians instead of degrees as input parameters. However, if you prefer degrees, NumPy can also solve this problem
End of explanation
z = 1 + 2j
type(z)
Explanation: What about complex numbers? With the imaginary number
$j^2 = -1$ you can define complex numbers
End of explanation
y = 2 + 1j
y + z
y * z
Explanation: We can also add and multiply complex numbers
End of explanation
numpy.real(y)
numpy.imag(y)
Explanation: Real and imaginary parts can be extracted by the NumPy functions
End of explanation
a = numpy.array([1, 2, 3, 4, 5, 6, 7])
print(a)
Explanation: Vectors
We can define vectors in many different ways, e.g. as a NumPy array:
End of explanation
a[0]
Explanation: Matlab users should be cautioned, because the index of the first vector element is not 1 but 0, so you access the first element by
End of explanation
a[-1]
Explanation: The last element is accessible by the index -1
End of explanation
a[0:3]
Explanation: What element is accessed by the index -2?
As in Matlab you can also access different ranges of vector elements
End of explanation
b = a + 5
print(b)
Explanation: Compared to Matlab, the last element is not inclusive.
Computation with vectors is quite easy
End of explanation
c = a + b
print(c)
Explanation: Due to the equal length of vectors a and b, we can simply add them
End of explanation
d = a * b
print(d)
Explanation: or apply elementwise multiplication
End of explanation
c = c.T
Explanation: You can transpose the vector
End of explanation
a @ c
Explanation: and calculate the scalar product of two vectors by vector-vector multiplication @
End of explanation
f = numpy.arange(0,10)
print(f)
Explanation: Vectors with equidistant elements can be created by:
End of explanation
f = numpy.arange(0,11)
print(f)
g = numpy.arange(10,21,2)
print(g)
Explanation: Note that the last element is not inclusive! Another significant difference compared to Matlab, so keep this in mind, when creating vectors:
End of explanation
# set f[4] = 100
f[4] = 100
print(f)
# set f[0], f[1], f[2] = 1
f[0:3] = 1
print(f)
# delete elements 3 - 9
f = numpy.delete(f,numpy.arange(3,10))
print(f)
Explanation: You can replace individual elements in a vector or delete them. Do not forget that the first vector element has index 0 and the last element is not inclusive
End of explanation
k = numpy.arange(1,4)
print(k)
l = numpy.hstack((k, k, k))
print(l)
Explanation: You can concatenate vectors to new vectors:
End of explanation
M = numpy.vstack((k, k, k))
print(M)
Explanation: or new matrices
End of explanation
M[1,1]
M[1:3,1:3]
Explanation: Matrices
Similar to vectors we can access matrix elements. Again, I emphasize that the first index is 0 and the last element is not inclusive
End of explanation
M[1,:]
Explanation: Similar to Matlab you can access matrix rows via :, the second row of matrix M
End of explanation
numpy.diag(M)
Explanation: Special matrix operations are for example the diagonal
End of explanation
A = numpy.matrix('1 2 3; 5 7 6; 1 4 6')
print(A)
A.I
A @ A.I
Explanation: or the inverse
End of explanation
for i in range(1,4):
print("i = ", i)
h=0
for i in range(1,4):
h = h + i
print(h)
Explanation: For-loops and if-statements
Two important aspects of flow control in a Python code are For-loops and If-statements. Again, I emphasize that the last element in the loop is not inclusive:
End of explanation
a = 1
b = 2
c = 3
a <= 2
a >= 2
b == 2
a <= 2 and b==2
a <=2 or a>=2
Explanation: Before discussing If-statements, we introduce the logical operators <=, ==, >=, or, and
End of explanation
for i in range(1,4):
if(i>=2 and i<3):
print("i = ", i)
Explanation: Now, we can control the flow inside the FOR-loop with if-statements
End of explanation
?numpy.sin
Explanation: Functions
It is good coding practice to avoid repeating ourselves: we want to write code that is reusable, not only because it leads to less typing but also because it reduces errors. If you find yourself doing the same calculation multiple times, it's better to encapsulate it into a function.
A function is a compact collection of code that executes some action on its arguments.
Once defined, you can call a function as many times as you want. When we call a function, we execute all the code inside the function. The result of the execution depends on the definition of the function and on the values that are passed into it as arguments. Functions might or might not return values in their last operation.
The syntax for defining custom Python functions is:
python
def function_name(arg_1, arg_2, ...):
'''
docstring: description of the function
'''
<body of the function>
The docstring of a function is a message from the programmer documenting what he or she built. Docstrings should be descriptive and concise. They are important because they explain (or remind) the intended use of the function to the users. You can later access the docstring of a function using the function help() and passing the name of the function. If you are in a notebook, you can also prepend a question mark '?' before the name of the function and run the cell to display the information of a function.
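For instance, help(numpy.sin) prints that documentation right in the notebook (an extra illustration; the cell shown here uses the ? shortcut instead):
help(numpy.sin)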
Let's try it!
End of explanation
def gauss_easter(year):
'''
Computation of easter date using the Computus by Carl Friedrich Gauss
input parameter: year
'''
a = year % 19
b = year % 4
c = year % 7
k = year//100
p = (13 + 8 * k) // 25
q = k // 4
M = (15 - p + k - q) % 30
N = (4 + k - q) % 7
d = (19 * a + M) % 30
e = (2 * b + 4 * c + 6 * d + N) % 7
if(22 + d + e <= 31):
print("Gregorian Easter is", 22 + d + e, "of March")
if(d + e - 9 > 0.0):
if(d==29 and e==6):
print("Gregorian Easter is", 19, "of April")
elif(d==28 and e ==6 and ((11 * M + 11) % 30) < 19):
print("Gregorian Easter is", 18, "of April")
else:
print("Gregorian Easter is", d + e - 9, "of April")
# calculate easter for year 1777
gauss_easter(1777)
# calculate easter for year 2019
gauss_easter(2019)
# calculate easter for year 2049
gauss_easter(2049)
# calculate easter for year 1981
gauss_easter(1981)
Explanation: Time to write our own Python function. Do you know the date of Easter Sunday this year? That's quite important, because on the following Monday there will be no TEW2 lecture. ;-)
Let's write a Python function to calculate the date using the
Easter algorithm by Carl Friedrich Gauss:
a = year mod 19
b = year mod 4
c = year mod 7
k = floor(year/100)
p = floor((13 + 8k)/25)
q = floor(k/4)
M = (15 - p + k - q) mod 30
N = (4 + k - q) mod 7
d = (19a + M) mod 30
e = (2b + 4c + 6d + N) mod 7
Gregorian Easter is 22 + d + e March or d + e − 9 April
if d = 29 and e = 6, replace 26 April with 19 April
if d = 28, e = 6, and (11M + 11) mod 30 < 19, replace 25 April with 18 April
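For example, for year = 2019 (a worked check added here for illustration): a = 5, b = 3, c = 3, k = 20, p = 6, q = 5, M = 24, N = 5, d = 29 and e = 1, so 22 + d + e = 52 > 31 and Easter falls on d + e - 9 = 21 April, matching the output of gauss_easter(2019) above.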
With a modified version you can also calculate the Jewish Passover date
End of explanation
xarray = numpy.linspace(0, 2, 41)
print(xarray)
pow2 = xarray**2
pow3 = xarray**3
pow_half = numpy.sqrt(xarray)
Explanation: Time to Plot
You will love the Python library Matplotlib! You'll learn here about its module pyplot, which makes line plots.
We need some data to plot. Let's define a NumPy array, compute derived data using its square, cube and square root (element-wise), and plot these values with the original array in the x-axis.
End of explanation
from matplotlib import pyplot
Explanation: To plot the resulting arrays as a function of the original one (xarray) in the x-axis, we need to import the module pyplot from Matplotlib.
End of explanation
#Plot x^2
pyplot.plot(xarray, pow2, color='k', linestyle='-', label='square')
#Plot x^3
pyplot.plot(xarray, pow3, color='k', linestyle='--', label='cube')
#Plot sqrt(x)
pyplot.plot(xarray, pow_half, color='k', linestyle=':', label='square root')
#Plot the legends in the best location
pyplot.legend(loc='best')
Explanation: We'll use the pyplot.plot() function, specifying the line color ('k' for black) and line style ('-', '--' and ':' for continuous, dashed and dotted line), and giving each line a label. Note that the values for color, linestyle and label are given in quotes.
End of explanation
#Plot x^2
pyplot.plot(xarray, pow2, color='red', linestyle='-', label='$x^2$')
#Plot x^3
pyplot.plot(xarray, pow3, color='green', linestyle='-', label='$x^3$')
#Plot sqrt(x)
pyplot.plot(xarray, pow_half, color='blue', linestyle='-', label='$\sqrt{x}$')
#Plot the legends in the best location
pyplot.legend(loc='best');
Explanation: To illustrate other features, we will plot the same data, but varying the colors instead of the line style. We'll also use LaTeX syntax to write formulas in the labels. If you want to know more about LaTeX syntax, there is a quick guide to LaTeX available online.
Adding a semicolon (';') to the last line in the plotting code block prevents that ugly output, like <matplotlib.legend.Legend at 0x7f8c83cc7898>. Try it.
End of explanation |
6,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1 | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: Project 1: Quick Theory Validation
End of explanation |
6,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google
Step1: Quantum Chess REST Client
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: The server for the Quantum Chess Rest API endpoints should provide you with an ngrok url when you run it. Paste the url provided by your server in the form below. If your server is running, the following code should produce the message
Step3: You should be able to see the server output indicating a connection was made.
Initialization
Make a simple request to initialize a board with the starting occupancy state of all pieces. Using the bitboard format, the initial positions of pieces are given by the hex 0xFFFF00000000FFFF. This initializes all squares in ranks 1, 2, 7, and 8 to be occupied.
Step4: Superposition
With the board initialized, you can execute a few moves to see what happens. You can create superposition by executing a split move from b1 to a3 and c3. Watch the server output to see the execution of this move.
Step5: Entanglement
You can see, in the probabilities returned, a roughly 50/50 split for two of the squares. A pawn two-step move, from c2 to c4, will entangle the pawn on c2 with the piece in superposition on a3 and c3.
Step6: Measurement
The probability distribution returned doesn't show the entanglement, but it still exists in the underlying state. You can see this by doing a move that forces a measurement. An excluded move from d1 to c2 will force a measurement of the c2 square. In the server output you should see the collapse of the state, with c2, c3, c4, and a3 taking definite 0 or 100% probabilities.
Step7: You can see the entanglement correlation by running the following cell a few times. There should be two different outcomes: in the first, both c2 and c3 are 100%; in the second, c4 and a3 are both 100%. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google
End of explanation
try:
import recirq
except ImportError:
!pip install git+https://github.com/quantumlib/ReCirq -q
try:
import requests
except ImportError:
!pip install requests -q
Explanation: Quantum Chess REST Client
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/quantum_chess/quantum_chess_client"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/quantum_chess/quantum_chess_client.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/quantum_chess/quantum_chess_client.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/quantum_chess/quantum_chess_client.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This is a basic client meant to test the server implemented at the end of the Quantum Chess REST API documentation. You must run that previous Colab for this one to work.
Setup
End of explanation
url = "http://bd626d83c9ec.ngrok.io/" # @param {type:"string"}
!curl -s $url
Explanation: The server for the Quantum Chess Rest API endpoints should provide you with an ngrok url when you run it. Paste the url provided by your server in the form below. If your server is running, the following code should produce the message: "Running Flask on Google Colab!"
End of explanation
import requests
init_board_json = {"init_basis_state": 0xFFFF00000000FFFF}
response = requests.post(url + "/quantumboard/init", json=init_board_json)
print(response.content)
Explanation: You should be able to see the server output indicating a connection was made.
Initialization
Make a simple request to initialize a board with the starting occupancy state of all pieces. Using the bitboard format, the initial positions of pieces are given by the hex 0xFFFF00000000FFFF. This initializes all squares in ranks 1, 2, 7, and 8 to be occupied.
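As a quick aside (not part of the original notebook), you can verify the bitboard arithmetic yourself: with the usual convention that bit 0 is square a1 and bit 63 is h8, ranks 1 and 2 occupy the low 16 bits and ranks 7 and 8 the high 16 bits:
initial_occupancy = 0xFFFF | (0xFFFF << 48)
hex(initial_occupancy)  # '0xffff00000000ffff'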
End of explanation
from recirq.quantum_chess.enums import MoveType, MoveVariant
from recirq.quantum_chess.bit_utils import square_to_bit
split_b1_a3_c3 = {
"square1": square_to_bit("b1"),
"square2": square_to_bit("a3"),
"square3": square_to_bit("c3"),
"type": int(MoveType.SPLIT_JUMP.value),
"variant": int(MoveVariant.BASIC.value),
}
response = requests.post(url + "/quantumboard/do_move", json=split_b1_a3_c3)
print(response.content)
Explanation: Superposition
With the board initialized, you can execute a few moves to see what happens. You can create superposition by executing a split move from b1 to a3 and c3. Watch the server output to see the execution of this move.
End of explanation
move_c2_c4 = {
"square1": square_to_bit("c2"),
"square2": square_to_bit("c4"),
"square3": 0,
"type": int(MoveType.PAWN_TWO_STEP.value),
"variant": int(MoveVariant.BASIC.value),
}
response = requests.post(url + "/quantumboard/do_move", json=move_c2_c4)
print(response.content)
Explanation: Entanglement
You can see, in the probabilities returned, a roughly 50/50 split for two of the squares. A pawn two-step move, from c2 to c4, will entangle the pawn on c2 with the piece in superposition on a3 and c3.
End of explanation
move_d1_c2 = {
"square1": square_to_bit("d1"),
"square2": square_to_bit("c2"),
"square3": 0,
"type": int(MoveType.JUMP.value),
"variant": int(MoveVariant.EXCLUDED.value),
}
response = requests.post(url + "/quantumboard/do_move", json=move_d1_c2)
print(response.content)
Explanation: Measurement
The probability distribution returned doesn't show the entanglement, but it still exists in the underlying state. You can see this by doing a move that forces a measurement. An excluded move from d1 to c2 will force a measurement of the c2 square. In the server output you should see the collapse of the state, with c2, c3, c4, and a3 taking definite 0 or 100% probabilities.
End of explanation
response = requests.post(url + "/quantumboard/undo_last_move")
print(response.content)
response = requests.post(url + "/quantumboard/do_move", json=move_d1_c2)
print(response.content)
Explanation: You can see the entanglement correlation by running the following cell a few times. There should be two different outcomes: in the first, both c2 and c3 are 100%; in the second, c4 and a3 are both 100%.
End of explanation |
6,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
Step5: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape
Step7: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
Step8: Finally we will pick the interaction effect by passing 'A
Step9: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions
Step10: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
courses | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id)
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(50, npad='auto')
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep
Explanation: Transform to source space
End of explanation
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
End of explanation
# Read the source space we are morphing to (just left hemisphere)
src = mne.read_source_spaces(src_fname)
fsave_vertices = [src[0]['vertno'], []]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir, smooth=20).morph_mat
morph_mat = morph_mat[:, :n_vertices_sample] # just left hemi from src
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately, but here since all estimates are on
'sample' we can use one morph matrix for all the heavy lifting.
End of explanation
X = np.transpose(X, [2, 1, 0, 3])  # now subjects x time x space x conditions
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
Explanation: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape: samples
(subjects) x time x space.
First we permute dimensions, then split the array into a list of conditions
and discard the empty dimension resulting from the split using numpy squeeze.
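As a quick sanity check (an aside, not in the original example), after the transpose and split X should be a list of four arrays, each shaped samples x time x space:
print(len(X), X[0].shape)  # expect 4 and (7, n_times, 10242) given the simulated sizes above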
End of explanation
factor_levels = [2, 2]
Explanation: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
End of explanation
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
Explanation: Finally we will pick the interaction effect by passing 'A:B'.
(this notation is borrowed from the R formula language). Without this also
the main effects will be returned.
End of explanation
def stat_fun(*args):
# get f-values only.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and the
second dimension, and finally calls ANOVA.
<div class="alert alert-info"><h4>Note</h4><p>For further details on this ANOVA function consider the
corresponding
`time-frequency tutorial <tut-timefreq-twoway-anova>`.</p></div>
End of explanation
# as we only have one hemisphere we need only need half the connectivity
print('Computing connectivity.')
connectivity = mne.spatial_src_connectivity(src[:1])
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.0005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 128 # ... run fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, views='lat',
time_label='Duration significant (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
brain.save_image('cluster-lh.png')
brain.show_view('medial')
Explanation: Visualize the clusters
End of explanation
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
Explanation: Finally, let's investigate interaction effect by reconstructing the time
courses:
End of explanation |
6,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: WithTimestamps
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
Assigns timestamps to all the elements of a collection.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
Step2: Examples
In the following examples, we create a pipeline with a PCollection and attach a timestamp value to each of its elements.
When windowing and late data play an important role in streaming pipelines, timestamps are especially useful.
Example 1
Step3: <table align="left" style="margin-right
Step4: To convert from a
datetime.datetime
to unix_time you can convert it to a time.struct_time first with
datetime.timetuple.
Step5: Example 2
Step6: <table align="left" style="margin-right | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/withtimestamps-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/withtimestamps"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
Explanation: WithTimestamps
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
Assigns timestamps to all the elements of a collection.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
class GetTimestamp(beam.DoFn):
def process(self, plant, timestamp=beam.DoFn.TimestampParam):
yield '{} - {}'.format(timestamp.to_utc_datetime(), plant['name'])
with beam.Pipeline() as pipeline:
plant_timestamps = (
pipeline
| 'Garden plants' >> beam.Create([
{'name': 'Strawberry', 'season': 1585699200}, # April, 2020
{'name': 'Carrot', 'season': 1590969600}, # June, 2020
{'name': 'Artichoke', 'season': 1583020800}, # March, 2020
{'name': 'Tomato', 'season': 1588291200}, # May, 2020
{'name': 'Potato', 'season': 1598918400}, # September, 2020
])
| 'With timestamps' >> beam.Map(
lambda plant: beam.window.TimestampedValue(plant, plant['season']))
| 'Get timestamp' >> beam.ParDo(GetTimestamp())
| beam.Map(print)
)
Explanation: Examples
In the following examples, we create a pipeline with a PCollection and attach a timestamp value to each of its elements.
When windowing and late data play an important role in streaming pipelines, timestamps are especially useful.
Example 1: Timestamp by event time
The elements themselves often already contain a timestamp field.
beam.window.TimestampedValue takes a value and a
Unix timestamp
in the form of seconds.
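For instance (an aside added here, not from the original page), the 'season' values used above are plain Unix seconds; one way to produce the April 2020 value in UTC is:
import calendar
import time
calendar.timegm(time.strptime('2020-04-01', '%Y-%m-%d'))  # 1585699200, i.e. 2020-04-01 00:00:00 UTC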
End of explanation
import time
time_tuple = time.strptime('2020-03-19 20:50:00', '%Y-%m-%d %H:%M:%S')
unix_time = time.mktime(time_tuple)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/withtimestamps.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
To convert from a
time.struct_time
to unix_time you can use
time.mktime.
For more information on time formatting options, see
time.strftime.
End of explanation
import time
import datetime
now = datetime.datetime.now()
time_tuple = now.timetuple()
unix_time = time.mktime(time_tuple)
Explanation: To convert from a
datetime.datetime
to unix_time you can convert it to a time.struct_time first with
datetime.timetuple.
End of explanation
import apache_beam as beam
class GetTimestamp(beam.DoFn):
def process(self, plant, timestamp=beam.DoFn.TimestampParam):
event_id = int(timestamp.micros / 1e6) # equivalent to seconds
yield '{} - {}'.format(event_id, plant['name'])
with beam.Pipeline() as pipeline:
plant_events = (
pipeline
| 'Garden plants' >> beam.Create([
{'name': 'Strawberry', 'event_id': 1},
{'name': 'Carrot', 'event_id': 4},
{'name': 'Artichoke', 'event_id': 2},
{'name': 'Tomato', 'event_id': 3},
{'name': 'Potato', 'event_id': 5},
])
| 'With timestamps' >> beam.Map(lambda plant: \
beam.window.TimestampedValue(plant, plant['event_id']))
| 'Get timestamp' >> beam.ParDo(GetTimestamp())
| beam.Map(print)
)
Explanation: Example 2: Timestamp by logical clock
If each element has a chronological number, these numbers can be used as a
logical clock.
These numbers have to be converted to a "seconds" equivalent, which can be especially important depending on your windowing and late data rules.
End of explanation
import apache_beam as beam
import time
class GetTimestamp(beam.DoFn):
def process(self, plant, timestamp=beam.DoFn.TimestampParam):
yield '{} - {}'.format(timestamp.to_utc_datetime(), plant['name'])
with beam.Pipeline() as pipeline:
plant_processing_times = (
pipeline
| 'Garden plants' >> beam.Create([
{'name': 'Strawberry'},
{'name': 'Carrot'},
{'name': 'Artichoke'},
{'name': 'Tomato'},
{'name': 'Potato'},
])
| 'With timestamps' >> beam.Map(lambda plant: \
beam.window.TimestampedValue(plant, time.time()))
| 'Get timestamp' >> beam.ParDo(GetTimestamp())
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/withtimestamps.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: Timestamp by processing time
If the elements do not have any time data available, you can also use the current processing time for each element.
Note that this grabs the local time of the worker that is processing each element.
Workers might have time deltas, so using this method is not a reliable way to do precise ordering.
By using processing time, there is no way of knowing if data is arriving late because the timestamp is attached when the element enters into the pipeline.
End of explanation |
6,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading the necessary libraries
Step1: Loading the dataset 0750-0805
Description of the dataset is at
Step2: What is the number of different vehicles for the 15 min
How many timestamps? Are the timestamps of the vehicles matched?
To transform the distances, velocities and accelerations to meters and m/s.
To compute the distances all to all.
Compute the time cycles.
Step3: 15 min = 900 s = 9,000 time steps (the data are sampled every 0.1 s)
9,529 time steps = 952.9 s = 15 min 52.9 s
The actual temporal length of this dataset is 15 min 52.9 s. It looks like the timestamps of the vehicles are matched, which makes sense given the way the data is obtained: there is no GPS on the vehicles; the trajectories come from synchronized cameras located on different buildings.
For every time stamp, check how many vehicles are accelerating when the one behind is also or not...
Step4: def calculateDistance(x1,y1,x2,y2)
Step5: if i+1 > len(df)-1
Step6: This code works!!
DO NOT TOUCH
Step7: Computing the GRAPH
IT WORKS DO NOT TOUCH!!
Step8: Using from_pandas_dataframe | Python Code:
%matplotlib inline
from pandas import Series, DataFrame
import pandas as pd
from itertools import *
import itertools
import numpy as np
import csv
import math
import matplotlib.pyplot as plt
from matplotlib import pylab
from scipy.signal import hilbert, chirp
import scipy
import networkx as nx
Explanation: Loading the necessary libraries
End of explanation
c_dataset = ['vID','fID', 'tF', 'Time', 'lX', 'lY', 'gX', 'gY', 'vLen', 'vWid', 'vType','vVel', 'vAcc', 'vLane', 'vPrec', 'vFoll', 'spac','headway' ]
dataset = pd.read_table('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\dataset_meters_sample.txt', sep=r"\s+",
header=None, names=c_dataset)
dataset
Explanation: Loading the dataset 0750-0805
Description of the dataset is at:
D:/zzzLola/PhD/DataSet/US101/US101_time_series/US-101-Main-Data/vehicle-trajectory-data/trajectory-data-dictionary.htm
End of explanation
numV = dataset['vID'].unique()
len(numV)
numTS = dataset['Time'].unique()
len(numTS)
Explanation: What is the number of different vehicles for the 15 min
How many timestamps? Are the timestamps of the vehicles matched?
To transform the distances, velocities and accelerations to meters and m/s.
To compute the distances all to all.
Compute the time cycles.
End of explanation
dataset['tF'].describe()
des_all = dataset.describe()
des_all
#des_all.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\description_allDataset.csv', sep='\t', encoding='utf-8')
#dataset.to_csv('D:\\zzzLola\\PhD\\DataSet\\US101\\coding\\dataset_meters.txt', sep='\t', encoding='utf-8')
#table.groupby('YEARMONTH').CLIENTCODE.nunique()
v_num_lanes = dataset.groupby('vID').vLane.nunique()
v_num_lanes[v_num_lanes > 1].count()
v_num_lanes[v_num_lanes == 1].count()
dataset[:10]
Explanation: 15 min = 900 s = 9,000 time steps (the data are sampled every 0.1 s)
9,529 time steps = 952.9 s = 15 min 52.9 s
The actual temporal length of this dataset is 15 min 52.9 s. It looks like the timestamps of the vehicles are matched, which makes sense given the way the data is obtained: there is no GPS on the vehicles; the trajectories come from synchronized cameras located on different buildings.
For every time stamp, check how many vehicles are accelerating when the one behind is also accelerating or not (see the sketch below):
- vehicle_acceleration vs preceding_vehicle_acceleration
- vehicle_acceleration vs follower_vehicle_acceleration
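A minimal sketch of that comparison (added here, not part of the original notebook; it assumes vPrec holds the id of the preceding vehicle, as in the column list above):
lead = dataset[['Time', 'vID', 'vAcc']].rename(columns={'vID': 'vPrec', 'vAcc': 'vAccPrec'})
paired = pd.merge(dataset, lead, on=['Time', 'vPrec'])  # rows with vPrec == 0 (no leader) simply find no match
both_acc = (paired['vAcc'] > 0) & (paired['vAccPrec'] > 0)
both_acc.groupby(paired['Time']).sum()  # per-timestamp count of pairs in which both vehicles accelerate
The follower comparison is analogous, merging on vFoll instead.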
When is a vehicle changing lanes?
End of explanation
#len(dataTime)
Explanation: def calculateDistance(x1,y1,x2,y2):
dist = math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
return dist
result = df1.append(df2)
count = 0
dist = 0
create an empty dataframe
index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D')
columns_dist = ['vIDa','Timea', 'gXa', 'gYa', 'vTypea','vVela', 'vAcca', 'vLanea', 'vPreca', 'vFolla',
'vIDb','Timeb', 'gXb', 'gYb', 'vTypeb','vVelb', 'vAccb', 'vLaneb', 'vPrecb', 'vFollb']
df_ = pd.DataFrame(index=index, columns=columns)
df_dist = pd.DataFrame(columns=columns_dist)
df_dist = df_dist.fillna(0) # with 0s rather than NaNs
Fill the dataframe
df = df.append(data)
times = dataset['Time'].unique()
for time in times:
print 'Time %i ' %time
dataTime = dataset.loc[dataset['Time'] == time]
row_iterator = dataTime.iterrows()
for index, row in row_iterator:
if index+1 > len(dataTime)-1:
print 'The index is %i ' %index
print row['vID']
print dataTime.iloc[index+1]['vID']
#while row.notnull == True:
# last = row_iterator.next()
# print last
#if ((index+1)):
# j=index+1
# print 'The index+1 is: %i' %j
# for j, row in dataTime.iterrows():
# #dist = calculateDistance(dataTime[index,'gX'],dataTime[index,'gY'],dataTime[j,'gX'],dataTime[j,'gY'],)
# #i_data = array_data.tolist
# #dist_med = (array_data[i, 3], array_data[i, 0], array_data[j,0], dist, array_data[i, 10], array_data[i, 11],
# #array_data[i, 13],array_data[i, 14], array_data[i, 15])
# #dist_list.append(dist_med)
# count = len(dataTime)
#print ('The count is: %i' %count)
#count = 0
#dist = calculateDistance()
End of explanation
data = dataset.set_index("vID")
data[:13]
#Must be before, I guess.
dataset = dataset.drop(['fID','tF','lX','lY','vLen','vWid','spac','headway'], axis=1)
dataset
Explanation: if i+1 > len(df)-1:
pass
elif (df.loc[i+1,'a_d'] == df.loc [i,'a_d']):
pass
elif (df.loc [i+2,'station'] == df.loc [i,'station'] and (df.loc [i+2,'direction'] == df.loc [i,'direction'])):
pass
else:
df.loc[i,'value_id'] = value_id
import pandas as pd
from itertools import izip
df = pd.DataFrame(['AA', 'BB', 'CC'], columns = ['value'])
for id1, id2 in izip(df.iterrows(),df.ix[1:].iterrows()):
print id1[1]['value']
print id2[1]['value']
https://docs.python.org/3.1/library/itertools.html
http://stackoverflow.com/questions/25715627/itertools-selecting-in-pandas-based-on-previous-three-rows-or-previous-element
https://pymotw.com/2/itertools/
Calculation of DISTANCES
End of explanation
times = dataset['Time'].unique()
data = pd.DataFrame()
data = data.fillna(0) # with 0s rather than NaNs
dTime = pd.DataFrame()
for time in times:
print 'Time %i ' %time
dataTime0 = dataset.loc[dataset['Time'] == time]
list_vIDs = dataTime0.vID.tolist()
#print list_vIDs
dataTime = dataTime0.set_index("vID")
#index_dataTime = dataTime.index.values
#print dataTime
perm = list(permutations(list_vIDs,2))
#print perm
dist = pd.DataFrame([((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) +
(((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5
for p in perm] , index=perm, columns = {'dist'})
#dist['time'] = time ##Matrix with dist and time
#merge dataTime with distances
dist['FromTo'] = dist.index
dist['vID'] = dist.FromTo.str[0]
dist['To'] = dist.FromTo.str[1]
dataTimeDist = pd.merge(dataTime0,dist, on = 'vID')
dataTimeDist = dataTimeDist.drop(['gX','gY'], axis=1)
print dataTimeDist
data = data.append(dataTimeDist)
data
Explanation: This code works!!
DO NOT TOUCH
End of explanation
def save_graph(graph,file_name):
#initialize Figure
plt.figure(num=None, figsize=(20, 20), dpi=80)
plt.axis('off')
fig = plt.figure(1)
pos = nx.spring_layout(graph)
nx.draw_networkx_nodes(graph,pos)
nx.draw_networkx_edges(graph,pos)
nx.draw_networkx_labels(graph,pos)
#cut = 1.00
#xmax = cut * max(xx for xx, yy in pos.values())
#ymax = cut * max(yy for xx, yy in pos.values())
#plt.xlim(0, xmax)
#plt.ylim(0, ymax)
plt.savefig(file_name,bbox_inches="tight")
pylab.close()
del fig
times = dataset['Time'].unique()
data = pd.DataFrame()
data = data.fillna(0) # with 0s rather than NaNs
data_graph = pd.DataFrame()
data_graph = data.fillna(0)
dTime = pd.DataFrame()
for time in times:
#print 'Time %i ' %time
dataTime0 = dataset.loc[dataset['Time'] == time]
list_vIDs = dataTime0.vID.tolist()
#print list_vIDs
dataTime = dataTime0.set_index("vID")
#index_dataTime = dataTime.index.values
#print dataTime
perm = list(permutations(list_vIDs,2))
#print perm
dist = [((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) +
(((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5 for p in perm]
dataDist = pd.DataFrame(dist , index=perm, columns = {'dist'})
#Convert the matrix into a square matrix
#Create the fields vID and To
dataDist['FromTo'] = dataDist.index
dataDist['vID'] = dataDist.FromTo.str[0]
dataDist['To'] = dataDist.FromTo.str[1]
#I multiply the inverse distance by 100 in order to scale the number
dataDist['inv_dist'] = (1/dataDist.dist)*100
#Delete the intermediate FromTo field
dataDist = dataDist.drop('FromTo', 1)
#With pivot and the 3 columns I can generate the square matrix
#Here is where I should have the condition of the max distance: THRESHOLD
dataGraph = dataDist.pivot(index='vID', columns='To', values = 'inv_dist').fillna(0)
print dataDist
#graph = nx.from_numpy_matrix(dataGraph.values)
#graph = nx.relabel_nodes(graph, dict(enumerate(dataGraph.columns)))
#save_graph(graph,'my_graph+%i.png' %time)
#print dataDist
#data = data.append(dist)
Explanation: Computing the GRAPH
IT WORKS DO NOT TOUCH!!
End of explanation
def save_graph(graph,my_weight,file_name):
#initialze Figure
plt.figure(num=None, figsize=(20, 20), dpi=80)
plt.axis('off')
fig = plt.figure(1)
pos = nx.spring_layout(graph,weight='my_weight') #spring_layout(graph)
nx.draw_networkx_nodes(graph,pos)
nx.draw_networkx_edges(graph,pos)
nx.draw_networkx_labels(graph,pos)
#cut = 1.00
#xmax = cut * max(xx for xx, yy in pos.values())
#ymax = cut * max(yy for xx, yy in pos.values())
#plt.xlim(0, xmax)
#plt.ylim(0, ymax)
plt.savefig(file_name,bbox_inches="tight")
pylab.close()
del fig
times = dataset['Time'].unique()
data = pd.DataFrame()
data = data.fillna(0) # with 0s rather than NaNs
dTime = pd.DataFrame()
for time in times:
#print 'Time %i ' %time
dataTime0 = dataset.loc[dataset['Time'] == time]
list_vIDs = dataTime0.vID.tolist()
#print list_vIDs
dataTime = dataTime0.set_index("vID")
#index_dataTime = dataTime.index.values
#print dataTime
perm = list(permutations(list_vIDs,2))
#print perm
dist = [((((dataTime.loc[p[0],'gX'] - dataTime.loc[p[1],'gX']))**2) +
(((dataTime.loc[p[0],'gY'] - dataTime.loc[p[1],'gY']))**2))**0.5 for p in perm]
dataDist = pd.DataFrame(dist , index=perm, columns = {'dist'})
#Create the fields vID and To
dataDist['FromTo'] = dataDist.index
dataDist['From'] = dataDist.FromTo.str[0]
dataDist['To'] = dataDist.FromTo.str[1]
#I multiply by 100 in order to scale the number
dataDist['weight'] = (1/dataDist.dist)*100
#Delete the intermediate FromTo field
dataDist = dataDist.drop('FromTo', 1)
graph = nx.from_pandas_dataframe(dataDist, 'From','To',['weight'])
save_graph(graph,'weight','000_my_graph+%i.png' %time)
dataDist
graph[1917][1919]['weight']
Explanation: Using from_pandas_dataframe
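Note: nx.from_pandas_dataframe was removed in NetworkX 2.0 and renamed. On a current NetworkX the equivalent of the call above would be the following sketch (assuming NetworkX >= 2.0):
graph = nx.from_pandas_edgelist(dataDist, source='From', target='To', edge_attr=['weight'])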
End of explanation |
6,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 7
Step1: The variable y1 in the preceding example stores the computed value for $y_1$. We can continue to iterate on Equation (4) to compute $y_2$, $y_3$, and so on. For example
Step2: We can do this as many times as necessary to reach the desired value of $t$. Note that iteration is necesary. Even though $y_t$ is apparently a function of $t$, we could not, for example, compute $y_{20}$ directly. Rather we'd have to compute $y_1, y_2, y_3, \ldots, y_{19}$ first. The linear first-order difference equation is an example of a recursive model and iteration is necessary for computing recursive models in general.
Of course, there is a better way. Let's define a function called diff1_example()that takes as arguments $\rho$, an array of values for $w$, and $y_0$.
Step3: Exercise
Step4: Exercise | Python Code:
# Initialize parameter values
y0 = 0
rho = 0.5
w1 = 1
# Compute the period 1 value of y
y1 = rho*y0 + w1
# Print the result
print('y1 =',y1)
Explanation: Class 7: Deterministic Time Series Models
Time series models are at the foundation of dynamic macroeconomic theory. A time series model is an equation or system of equations that describes how the variables in the model change with time. Here, we examine some theory about deterministic, i.e., non-random, time series models and we explore methods for simulating them. Later, we'll examine the properties of stochastic time series models by introducing random variables to the discrete time models covered below.
Discrete Versus Continuous Time
To begin, suppose that we are interested in a variable $y$ that takes on the value $y_t$ at date $t$. The date index $t$ is a real number. We'll say that $y_t$ is a discrete time variable if $t$ takes on values from a countable sequence; e.g. $t = 1, 2, 3 \ldots$ and so on. Otherwise, if $t$ takes on values from an uncountable sequence; e.g. $t\in[0,\infty)$, then we'll say that $y_t$ is a continuous time variable. Discrete and continuous time models both have important places in macroeconomic theory, but we're going to focus on understanding discrete time models.
First-Order Difference Equations
Now, suppose that the variable $y_t$ is determined by a linear function of $y_{t-1}$ and some other exogenously given variable $w_t$
\begin{align}
y_{t} & = (1- \rho) \mu + \rho y_{t-1} + w_t, \tag{1}\
\end{align}
where $\rho$ and $\mu$ are constants. Equation (1) is an example of a linear first-order difference equation. As a difference equation, it specifies how $y_t$ is related to past values of $y$. The equation is a first-order difference equation because it specifies that $y_t$ depends only on $y_{t-1}$ and not $y_{t-2}$ or $y_{t-3}$.
Example: Compounding Interest
Suppose that you have an initial balance of $b_0$ dollars in a savings account that pays an interest rate $i$ per compounding period. Then, after the first compounding, your account will have $b_1 = (1+i)b_0$ dollars in it. Assuming that you never withdraw funds from the account, then your account balance in any subsequent period $t$ is given by the following difference equation:
\begin{align}
b_{t} & = \left(1+i\right) b_{t-1}. \tag{2}
\end{align}
Equation (2) is a linear first-order difference equation in the same form as Equation (1). You can see this by setting $y_t = b_t$, $\rho=1+i$, $\mu=0$, and $w_t=0$ in Equation (1).
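As a quick illustration, Equation (2) can be iterated directly in Python; the balance and interest rate below are made-up numbers used only for this sketch:
b = 100.0   # illustrative initial balance b_0
i = 0.05    # illustrative interest rate per compounding period
for t in range(1, 4):
    b = (1 + i)*b                      # Equation (2): b_t = (1 + i)*b_{t-1}
    print('b' + str(t), '=', round(b, 2))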
Example: Capital Accumulation
Let $K_t$ denote the amount of physical capital in a country at date $t$, let $\delta$ denote the rate at which the capital stock depreciates each period, and let $I_t$ denote the country's investment in new capital in date $t$. Then the law of motion for the stock of physical capital is:
\begin{align}
K_{t+1} & = I_t + (1-\delta)K_t. \tag{3}
\end{align}
This standard expression for the law of motion for the capital stock is a linear first-order difference equation. To reconcile Equation (3) with Equation (1), set $y_t = K_{t+1}$, $\rho=1-\delta$, $\mu=0$, and $w_t=I_t$.
Note: There is a potentially confusing way in which we identified the $t+1$-dated variable $K_{t+1}$ with the $t$-dated variable $y_t$ in this example. We can do this because the value of $K_{t+1}$ truly is determined at date $t$ even though the capital isn't used for production until the next period.
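A parallel sketch for Equation (3), again with made-up numbers chosen only for illustration:
K = 50.0                   # illustrative initial capital stock
delta, I = 0.1, 2.0        # illustrative depreciation rate and constant investment
for t in range(3):
    K = I + (1 - delta)*K  # Equation (3)
    print('K' + str(t + 1), '=', round(K, 2))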
Computation
From Equation (1), it's easy to compute the value of $y_t$ as long as you know the values of the constants $\rho$ and $\mu$ and the variables $y_{t-1}$ and $w_t$. To begin, let's suppose that the values of the constants are $\mu=0$, $\rho=0.5$. Then Equation (1) in our example looks like this:
\begin{align}
y_{t} & = 0.5 y_{t-1} + w_t. \tag{4}\
\end{align}
Now, suppose that the initial value of $y$ is $y_0=0$ and that $w$ is equal to 1 in the first period and equal to zero in subsequent periods. That is: $w_1=1$ and $w_2=w_3=\cdots =0$. Now, with what we have, we can compute $y_1$. Here's how:
End of explanation
# Compute the period 2 value of y
w2=0
y2 = rho*y1 + w2
# Print the result
print('y2 =',y2)
Explanation: The variable y1 in the preceding example stores the computed value for $y_1$. We can continue to iterate on Equation (4) to compute $y_2$, $y_3$, and so on. For example:
End of explanation
# The code below uses numpy and matplotlib; import them here in case they
# have not been imported earlier in the notebook
import numpy as np
import matplotlib.pyplot as plt
# Initialize the variables T and w
T = 10
w = np.zeros(T)
w[0]=1
# Define a function that returns an array of y-values given rho, an array of w values, and y0.
def diff1_example(rho,w,y0):
T = len(w)
y = np.zeros(T+1)
y[0] = y0
for t in range(T):
y[t+1]=rho*y[t]+w[t]
return y
fig = plt.figure()
y = diff1_example(0.5,w,0)
plt.plot(y,'-',lw=5,alpha = 0.75)
plt.title('$\\rho=0.5$')
plt.ylabel('y')
plt.xlabel('t')
plt.grid()
Explanation: We can do this as many times as necessary to reach the desired value of $t$. Note that iteration is necessary. Even though $y_t$ is apparently a function of $t$, we could not, for example, compute $y_{20}$ directly. Rather we'd have to compute $y_1, y_2, y_3, \ldots, y_{19}$ first. The linear first-order difference equation is an example of a recursive model and iteration is necessary for computing recursive models in general.
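For reference, repeatedly substituting Equation (1) into itself (with $\mu = 0$) gives the equivalent closed form
\begin{align}
y_{t} & = \rho^t y_0 + \sum_{j=1}^{t} \rho^{t-j} w_j,
\end{align}
which still requires every value $w_1, \ldots, w_t$, so in practice we simply automate the iteration.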
Of course, there is a better way. Let's define a function called diff1_example() that takes as arguments $\rho$, an array of values for $w$, and $y_0$.
End of explanation
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,2,1)
y = diff1_example(0.5,w,0)
ax1.plot(y,'-',lw=5,alpha = 0.75)
ax1.set_title('$\\rho=0.5$')
ax1.set_ylabel('y')
ax1.set_xlabel('t')
ax1.grid()
ax2 = fig.add_subplot(2,2,2)
y = diff1_example(-0.5,w,0)
ax2.plot(y,'-',lw=5,alpha = 0.75)
ax2.set_title('$\\rho=-0.5$')
ax2.set_ylabel('y')
ax2.set_xlabel('t')
ax2.grid()
ax3 = fig.add_subplot(2,2,3)
y = diff1_example(1,w,0)
ax3.plot(y,'-',lw=5,alpha = 0.75)
ax3.set_title('$\\rho=1$')
ax3.set_ylabel('y')
ax3.set_xlabel('t')
ax3.grid()
ax4 = fig.add_subplot(2,2,4)
y = diff1_example(1.25,w,0)
ax4.plot(y,'-',lw=5,alpha = 0.75)
ax4.set_title('$\\rho=1.25$')
ax4.set_ylabel('y')
ax4.set_xlabel('t')
ax4.grid()
plt.tight_layout()
Explanation: Exercise:
Use the function diff1_example() to make a $2\times2$ grid of plots just like the previous exercise but with $\rho = 0.5$, $-0.5$, $1$, and $1.25$. For each, set $T = 10$, $y_0 = 1$, $w_0 = 1$, and $w_1 = w_2 = \cdots = 0$.
End of explanation
# Initialize the variables T and w
T = 25
w = np.zeros(T)
w[0]=1
y1 = diff1_example(0.25,w,0)
y2 = diff1_example(0.5,w,0)
y3 = diff1_example(0.75,w,0)
y4 = diff1_example(0.95,w,0)
fig = plt.figure()
plt.plot(y1,'-',lw=5,alpha = 0.75,label='$\\rho=0.25$')
plt.plot(y2,'-',lw=5,alpha = 0.75,label='$\\rho=0.50$')
plt.plot(y3,'-',lw=5,alpha = 0.75,label='$\\rho=0.75$')
plt.plot(y4,'-',lw=5,alpha = 0.75,label='$\\rho=0.95$')
plt.title('Varying $\\rho$')
plt.ylabel('y')
plt.xlabel('t')
plt.legend(loc='upper right')
plt.grid()
Explanation: Exercise:
Use the function diff1_example() to make a single plot with 4 lines reflecting $\rho = 0.25$, $0.5$, $0.75$, and $0.95$. As before, $y_0 = 1$, $w_0 = 1$, and $w_1 = w_2 = \cdots 0$ for each but this time set $T=20$. Add a legend to clearly identify which line has which $\rho$ value.
End of explanation |
6,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Piscataway Machine Learning Meetup
Introduction to Python 01
Python is a scripting language which is very easy to learn. The syntax is very easy to grasp, which makes it very popular for programming novices. Thanks to the SciPy project, it is also very easy to solve machine learning problems with it!
Data Types
All programming languages classify data into types. Understanding the differences between these data types will make programming MUCH easier.
- Integer
Step1: Variables and assignment
Variables are containers that can hold any values. As the name suggests, the value that they hold can change. When you make the variable hold a value, we say that you assign a value to the variable.
Unlike in many other programming languages, variables are "untyped", meaning that you are allowed to assign different data types to the variable over the course of your program. Below, we assign an integer to myVariable, then assign a string to it.
Step2: Functions
Functions are pieces of code that are "saved" and can be re-run whenever you need to. They help to organize your code so that you don't have to re-write the same thing over and over again. They can also be used for some advanced techniques such as recursion. You can create a function using the def keyword. You may also see the lambda keyword used to create small one-line functions.
When you use a function in your code, we say that you call the function. To call a function, place a pair of parentheses after the function name. Functions can take values as input. Those values are called parameters, or arguments. When you provide a value to the function, we say that you pass a value to the function. In general, functions are only allowed to work with data that are passed in. To pass a parameter into a function, place it in the parentheses. There are rules surrounding what values you can pass and when, but it's easiest to learn these rules by imitating other peoples' code.
Step3: Classes, modules, and packages
In Python, you are allowed to create new data types. You can do this by creating a class. We won't actually create any classes, but just be aware that we will be using classes that other people have written. That means that we will be using more data types than just the integers, strings, booleans, etc. that we covered earlier.
Packages are large collections of code which are designed to add features to a programming language. Packages are made up of smaller components caled modules. We will be using 6 main packages in our exploration of data science | Python Code:
# COMMENTS begin with a pound sign (#) and extend to the end of the line.
# Comments are ignored by the computer. They are used to explain what the code is supposed to do.
# It is best practice to use LOTS of comments.
# That way, when you look back at your code, you can more quickly understand what you meant to do.
# The type function tells us the data type of whatever value is passed to it.
# The print function displays the output of whatever value is passed to it.
# When used together, they will display the data type of whatever value is inside all of those parentheses.
# We'll talk about functions in just a little bit.
# Integer
print(type(0))
# Float
print(type(1.2))
# String
print(type("Hello!"))
# Boolean
print(type(True))
# List
print(type([1,2,3,4]))
# Tuple
print(type((1,"b",False)))
# None
print(type(None))
Explanation: Piscataway Machine Learning Meetup
Introduction to Python 01
Python is a scripting language which is very easy to learn. The syntax is very easy to grasp, which makes it very popular for programming novices. Thanks to the SciPy project, it is also very easy to solve machine learning problems with it!
Data Types
All programming languages classify data into types. Understanding the differences between these data types will make programming MUCH easier.
- Integer: Just like in arithmetic, integers are pretty much any number without a decimal point.
- Float: Short for "floating-point number". Floats are pretty much any number WITH a decimal point.
- String: Short for "a string of characters". Strings are any kind of text.
- Boolean: Named after the mathematician George Boole, Boolean values are either True or False.
- List: This is a list of values. Lists may only contain one type of data (you can have a list of integers, but not a list with a mix of integers and strings).
- Tuple: This is a collection of values. It's like a list, but it can contain different data types, and it cannot be altered once created.
- None: This signifies a missing value. It's equivalent to NULL in databases.
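A few quick examples tying these together (illustrative values only):
prices = [1.5, 2.0, 2.5]        # a list of floats
record = (3, "apples", True)    # a tuple mixing an integer, a string, and a boolean
missing = None                  # a placeholder for a missing value
print(type(prices), type(record), type(missing))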
End of explanation
myVariable = 4
print(myVariable)
# Can you take the code from the previous cell and show what the type of "myVariable" is?
myVariable = 5
print(myVariable)
# Let's check the data type of myVariable again
myVariable = "Hello!"
print(myVariable)
# Please check the data type of myVariable one more time
# To run the code you've just written, press the "Play" button on the toolbar.
# If we were not allowed to assign these values to myVariable, we would have seen an error below.
Explanation: Variables and assignment
Variables are containers that can hold any values. As the name suggests, the value that they hold can change. When you make the variable hold a value, we say that you assign a value to the variable.
Unlike in many other programming languages, variables are "untyped", meaning that you are allowed to assign different data types to the variable over the course of your program. Below, we assign an integer to myVariable, then assign a string to it.
End of explanation
# Earlier in the tutorial, we covered the print function. Print the String "Hello, world!"
# We also covered the type function. Find the type of the String "Hello, world!"
# Daisy-chaining functions in that manner is extremely useful when working with data.
# We can also define our own functions. Here, I've written a simple 3-number adder
def ThreeNumberAdder(n1, n2, n3):
return n1 + n2 + n3
# And now I can test it out:
print(ThreeNumberAdder(1, 2, 3))
# Copy that function below, and see if you can modify it to multiply 3 numbers instead.
# You can test your ThreeNumberMultiplier below here.
Explanation: Functions
Functions are pieces of code that are "saved" and can be re-run whenever you need to. They help to organize your code so that you don't have to re-write the same thing over and over again. They can also be used for some advanced techniques such as recursion. You can create a function using the def keyword. You may also see the lambda keyword used to create small one-line functions.
When you use a function in your code, we say that you call the function. To call a function, place a pair of parentheses after the function name. Functions can take values as input. Those values are called parameters, or arguments. When you provide a value to the function, we say that you pass a value to the function. In general, functions are only allowed to work with data that are passed in. To pass a parameter into a function, place it in the parentheses. There are rules surrounding what values you can pass and when, but it's easiest to learn these rules by imitating other peoples' code.
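For instance (a small illustration, separate from the exercises in this notebook), the same idea can be written with def or with lambda:
def double(x):
    return 2 * x            # a regular function created with def
halve = lambda x: x / 2     # a one-line function created with lambda
print(double(4), halve(4))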
End of explanation
# Usually we put these imports at the top of the file, so that it's easy to tell what packages are required
import numpy as np # This imports the entire package and lets us refer to it as np.
import matplotlib.pyplot as plt # This imports just pyplot from matplotlib and calls it plt.
# Most tutorials will use these default import statements, so it's a good idea to always use these aliases.
# Now try importing the pandas package and calling it pd.
Explanation: Classes, modules, and packages
In Python, you are allowed to create new data types. You can do this by creating a class. We won't actually create any classes, but just be aware that we will be using classes that other people have written. That means that we will be using more data types than just the integers, strings, booleans, etc. that we covered earlier.
Packages are large collections of code which are designed to add features to a programming language. Packages are made up of smaller components called modules. We will be using 6 main packages in our exploration of data science: NumPy, Pandas, SciPy, MatPlotLib, SciKit-Learn, and TensorFlow. These packages are all interrelated, so it can be hard to figure out which ones you need. Don't worry too much about the details right now. As you gain more experience, you will understand when to use what package. To use a package or module, you need to import it.
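As a tiny illustration of using a class that someone else has written (assuming NumPy is installed, as in the import cell above):
import numpy as np          # the NumPy package, aliased as np
arr = np.array([1, 2, 3])   # np.array returns an instance of NumPy's ndarray class
print(type(arr))            # a data type defined by the package, not built into Python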
End of explanation |
6,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:56
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
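For illustration only (the number below is a commonly used placeholder, not a value from any particular model description), the corresponding cell could be completed along the lines of:
DOC.set_value(-1.8)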
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
6,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source localization with single dipole fit
This shows how to fit a dipole using mne-python.
For a comparison of fits between MNE-C and mne-python, see
Step1: Let's localize the N100m (using MEG only)
Step2: Calculate and visualise magnetic field predicted by dipole with maximum GOF
and compare to the measured data, highlighting the ipsilateral (right) source
Step3: Estimate the time course of a single dipole with fixed position and
orientation (the one that maximized GOF) over the entire interval | Python Code:
from os import path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.forward import make_forward_dipole
from mne.evoked import combine_evoked
from mne.simulation import simulate_evoked
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_ave = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_surf_lh = op.join(subjects_dir, 'sample', 'surf', 'lh.white')
Explanation: Source localization with single dipole fit
This shows how to fit a dipole using mne-python.
For a comparison of fits between MNE-C and mne-python, see:
https://gist.github.com/Eric89GXL/ca55f791200fe1dc3dd2
Note that for 3D graphics you may need to choose a specific IPython
backend, such as:
%matplotlib qt or %matplotlib wx
End of explanation
evoked = mne.read_evokeds(fname_ave, condition='Right Auditory',
baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
evoked_full = evoked.copy()
evoked.crop(0.07, 0.08)
# Fit a dipole
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
# Plot the result in 3D brain with the MRI image.
dip.plot_locations(fname_trans, 'sample', subjects_dir, mode='orthoview')
Explanation: Let's localize the N100m (using MEG only)
End of explanation
fwd, stc = make_forward_dipole(dip, fname_bem, evoked.info, fname_trans)
pred_evoked = simulate_evoked(fwd, stc, evoked.info, None, snr=np.inf)
# find the time point with highest GOF to plot
best_idx = np.argmax(dip.gof)
best_time = dip.times[best_idx]
# remember to create a subplot for the colorbar
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=[10., 3.4])
vmin, vmax = -400, 400 # make sure each plot has same colour range
# first plot the topography at the time of the best fitting (single) dipole
plot_params = dict(times=best_time, ch_type='mag', outlines='skirt',
colorbar=False)
evoked.plot_topomap(time_format='Measured field', axes=axes[0], **plot_params)
# compare this to the predicted field
pred_evoked.plot_topomap(time_format='Predicted field', axes=axes[1],
**plot_params)
# Subtract predicted from measured data (apply equal weights)
diff = combine_evoked([evoked, -pred_evoked], weights='equal')
plot_params['colorbar'] = True
diff.plot_topomap(time_format='Difference', axes=axes[2], **plot_params)
plt.suptitle('Comparison of measured and predicted fields '
'at {:.0f} ms'.format(best_time * 1000.), fontsize=16)
Explanation: Calculate and visualise magnetic field predicted by dipole with maximum GOF
and compare to the measured data, highlighting the ipsilateral (right) source
End of explanation
dip_fixed = mne.fit_dipole(evoked_full, fname_cov, fname_bem, fname_trans,
pos=dip.pos[best_idx], ori=dip.ori[best_idx])[0]
dip_fixed.plot()
Explanation: Estimate the time course of a single dipole with fixed position and
orientation (the one that maximized GOF) over the entire interval
End of explanation |
6,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TRAPpy custom events
Detailed information on Trappy can be found at examples/trappy/trappy_example.ipynb.
Step1: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
Step2: Example of custom event definition
Step3: Inspection of the generated TRAPpy FTrace object
Step4: Plotting tracepoint and/or custom events | Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
# Generate plots inline
%matplotlib inline
import copy
import json
import os
import time
import math
import logging
# Support to access the remote target
import devlib
from env import TestEnv
# Support to configure and run RTApp based workloads
from wlgen import RTA
# Support for performance analysis of RTApp workloads
from perf_analysis import PerfAnalysis
# Support for trace events analysis
from trace import Trace
# Support for FTrace events parsing and visualization
import trappy
Explanation: TRAPpy custom events
Detailed information on Trappy can be found at examples/trappy/trappy_example.ipynb.
End of explanation
# Setup a target configuration
my_target_conf = {
# Define the kind of target platform to use for the experiments
"platform" : 'linux', # Linux system, valid other options are:
# android - access via ADB
# linux - access via SSH
# host - direct access
# Preload settings for a specific target
"board" : 'juno', # juno - JUNO board with mainline hwmon
# Define devlib module to load
"modules" : [
'bl', # enable big.LITTLE support
'cpufreq' # enable CPUFreq support
],
# Account to access the remote target
"host" : '192.168.0.1',
"username" : 'root',
"password" : 'juno',
# Comment the following line to force rt-app calibration on your target
"rtapp-calib" : {
'0': 361, '1': 138, '2': 138, '3': 352, '4': 360, '5': 353
}
}
# Setup the required Test Environment supports
my_tests_conf = {
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['trace-cmd'],
# FTrace events buffer configuration
# events listed here MUST be
"ftrace" : {
##############################################################################
# EVENTS SPECIFICATION
##############################################################################
# Here is where we specify the list of events we are interested in:
# Events are of two types:
# 1. FTrace tracepoints that _must_ be supported by the target's kernel in use.
# These events will be enabled at ftrace start time, thus if the kernel does
# not support one of them, starting ftrace will fail.
"events" : [
"sched_switch",
"cpu_frequency",
],
# 2. FTrace events generated via trace_printk, from either kernel or user
# space. These events are different from the previous because they do not
# need to be explicitly enabled at ftrace start time.
# It's up to the user to ensure that the generated events satisfy these
# formatting requirements:
# a) the name must be a unique word in the trace
# b) values must be reported as a sequence of key=value pairs
# For example, a valid custom event string is:
# my_math_event: key1=val1 key2=val2 key3=val3
"custom" : [
"my_math_event",
],
# For each of these events, TRAPpy will generate a Pandas dataframe accessible
# via a TRAPpy::FTrace object, with the same name as the event.
# Thus for example, ftrace.my_math_event will be the object exposing the
# dataframe with all the events matching the "my_math_event" unique word.
##############################################################################
"buffsize" : 10240,
},
}
# Initialize a test environment using:
# - the provided target configuration (my_target_conf)
# - the provided test configuration (my_test_conf)
te = TestEnv(target_conf=my_target_conf, test_conf=my_tests_conf)
target = te.target
logging.info("Target ABI: %s, CPUs: %s",
target.abi,
target.cpuinfo.cpu_names)
Explanation: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
End of explanation
# Define the format string for the custom events we will inject from user-space
my_math_event_fmt = "my_math_event: sin={} cos={}"
# Start FTrace
te.ftrace.start()
# Let's generate some interesting "custom" events from userspace
logging.info('Generating events from user-space (will take ~140[s])...')
for angle in range(360):
v_sin = int(1e6 * math.sin(math.radians(angle)))
v_cos = int(1e6 * math.cos(math.radians(angle)))
my_math_event = my_math_event_fmt.format(v_sin, v_cos)
# custom events can be generated either from userspace, like in this
# example, or also from kernelspace (using a trace_printk call)
target.execute('echo {} > /sys/kernel/debug/tracing/trace_marker'\
.format(my_math_event))
# Stop FTrace
te.ftrace.stop()
# Collect the generate trace
trace_file = '/tmp/trace.dat'
te.ftrace.get_trace(trace_file)
# Parse trace
events_to_parse = my_tests_conf['ftrace']['events'] + my_tests_conf['ftrace']['custom']
trace = Trace(te.platform, '/tmp', events_to_parse)
Explanation: Example of custom event definition
End of explanation
# Get the TRAPpy FTrace object which has been generated from the trace parsing
ftrace = trace.ftrace
# The FTrace object allows to verify which (of the registered) events have been
# identified into the trace
logging.info("List of events identified in the trace:\n%s",
ftrace.class_definitions.keys())
# Each event identified in the trace is appended to a table (i.e. data_frame)
# which has the same name of the event
logging.info("First 10 events of our 'my_math_event' custom event:")
ftrace.my_math_event.data_frame.head(10)
logging.info("First 10 events of our 'cpu_frequency' tracepoint:")
ftrace.cpu_frequency.data_frame.head(10)
Explanation: Inspection of the generated TRAPpy FTrace object
End of explanation
# It is possible to mix in the same plot tracepoints and custom events
# The LinePlot module requires to specify a list of signals to plot.
# Each signal is defined as:
# <event>:<column>
# where:
# <event> is one of the events collected from the trace by the FTrace object
# <column> is one of the column of the previously defined event
my_signals = [
'cpu_frequency:frequency',
'my_math_event:sin',
'my_math_event:cos'
]
# These two parameters are passed to the LinePlot call along with the
# TRAPpy FTrace object
trappy.LinePlot(
ftrace, # FTrace object
signals=my_signals, # Signals to be plotted
drawstyle='steps-post', # Plot style options
marker = '+'
).view()
Explanation: Plotting tracepoint and/or custom events
End of explanation |
6,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Sampling
Copyright 2016 Allen Downey
License
Step1: Part One
Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters
Step2: Here's what that distribution looks like
Step3: make_sample draws a random sample from this distribution. The result is a NumPy array.
Step4: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
Step5: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean
Step6: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
Step7: The next line runs the simulation 1000 times and puts the results in
sample_means
Step8: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
Step9: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
Step10: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
Step11: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results
Step12: The following function takes an array of sample statistics and prints the SE and CI
Step13: And here's what that looks like
Step14: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
Step15: Here's a test run with n=100
Step16: Now we can use interact to run plot_sample_stats with different values of n. Note
Step17: Other sample statistics
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.
Exercise 1
Step24: STOP HERE
We will regroup and discuss before going on.
Part Two
So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
Step25: The following function instantiates a Resampler and runs it.
Step26: Here's a test run with n=100
Step27: Now we can use plot_resampled_stats in an interaction
Step30: Exercise 2
Step31: Test your code using the cell below
Step32: When your StdResampler is working, you should be able to interact with it
Step33: STOP HERE
We will regroup and discuss before going on.
Part Three
We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data)
Step34: And here's the men's distribution
Step35: I'll simulate a sample of 100 men and 100 women
Step36: The difference in means should be about 17 kg, but will vary from one random sample to the next
Step38: Here's the function that computes Cohen's $d$ again
Step39: The difference in weight between men and women is about 1 standard deviation
Step40: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
Step41: Now we can instantiate a CohenResampler and plot the sampling distribution. | Python Code:
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# seed the random number generator so we all get the same results
numpy.random.seed(18)
# some nicer colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
%matplotlib inline
Explanation: Random Sampling
Copyright 2016 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
weight = scipy.stats.lognorm(0.23, 0, 70.8)
weight.mean(), weight.std()
Explanation: Part One
Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters:
End of explanation
xs = numpy.linspace(20, 160, 100)
ys = weight.pdf(xs)
pyplot.plot(xs, ys, linewidth=4, color=COLOR1)
pyplot.xlabel('weight (kg)')
pyplot.ylabel('PDF')
None
Explanation: Here's what that distribution looks like:
End of explanation
def make_sample(n=100):
sample = weight.rvs(n)
return sample
Explanation: make_sample draws a random sample from this distribution. The result is a NumPy array.
End of explanation
sample = make_sample(n=100)
sample.mean(), sample.std()
Explanation: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
End of explanation
def sample_stat(sample):
return sample.mean()
Explanation: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean:
End of explanation
def compute_sample_statistics(n=100, iters=1000):
stats = [sample_stat(make_sample(n)) for i in range(iters)]
return numpy.array(stats)
Explanation: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
End of explanation
sample_means = compute_sample_statistics(n=100, iters=1000)
Explanation: The next line runs the simulation 1000 times and puts the results in
sample_means:
End of explanation
pyplot.hist(sample_means, color=COLOR5)
pyplot.xlabel('sample mean (n=100)')
pyplot.ylabel('count')
None
Explanation: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
End of explanation
sample_means.mean()
Explanation: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
End of explanation
std_err = sample_means.std()
std_err
Explanation: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
End of explanation
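As a quick sanity check (a sketch, not part of the original notebook): for the sample mean, the standard error should be close to the population standard deviation divided by the square root of the sample size.
# analytic approximation of the standard error of the mean: sigma / sqrt(n)
weight.std() / numpy.sqrt(100), std_err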
conf_int = numpy.percentile(sample_means, [5, 95])
conf_int
Explanation: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results:
End of explanation
def summarize_sampling_distribution(sample_stats):
print('SE', sample_stats.std())
print('90% CI', numpy.percentile(sample_stats, [5, 95]))
Explanation: The following function takes an array of sample statistics and prints the SE and CI:
End of explanation
summarize_sampling_distribution(sample_means)
Explanation: And here's what that looks like:
End of explanation
def plot_sample_stats(n, xlim=None):
sample_stats = compute_sample_statistics(n, iters=1000)
summarize_sampling_distribution(sample_stats)
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(xlim)
Explanation: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
End of explanation
plot_sample_stats(100)
Explanation: Here's a test run with n=100:
End of explanation
def sample_stat(sample):
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([55, 95]))
None
Explanation: Now we can use interact to run plot_sample_stats with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.
End of explanation
def sample_stat(sample):
# TODO: replace the following line with another sample statistic
return sample.mean()
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([0, 100]))
None
Explanation: Other sample statistics
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.
Exercise 1: Fill in sample_stat below with any of these statistics:
Standard deviation of the sample.
Coefficient of variation, which is the sample standard deviation divided by the sample mean.
Min or Max
Median (which is the 50th percentile)
10th or 90th percentile.
Interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
NumPy methods you might find useful include std, min, max, and the numpy.percentile function.
Depending on the results, you might want to adjust xlim.
End of explanation
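As one illustration (a sketch of my own, not the notebook's solution), here is a possible statistic function, named sample_stat_iqr here, that computes the interquartile range:
def sample_stat_iqr(sample):
    # interquartile range: difference between the 75th and 25th percentiles
    q75, q25 = numpy.percentile(sample, [75, 25])
    return q75 - q25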
class Resampler(object):
Represents a framework for computing sampling distributions.
def __init__(self, sample, xlim=None):
Stores the actual sample.
self.sample = sample
self.n = len(sample)
self.xlim = xlim
def resample(self):
Generates a new sample by choosing from the original
sample with replacement.
new_sample = numpy.random.choice(self.sample, self.n, replace=True)
return new_sample
def sample_stat(self, sample):
Computes a sample statistic using the original sample or a
simulated sample.
return sample.mean()
def compute_sample_statistics(self, iters=1000):
Simulates many experiments and collects the resulting sample
statistics.
stats = [self.sample_stat(self.resample()) for i in range(iters)]
return numpy.array(stats)
def plot_sample_stats(self):
Runs simulated experiments and summarizes the results.
sample_stats = self.compute_sample_statistics()
summarize_sampling_distribution(sample_stats)
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(self.xlim)
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Two
So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
End of explanation
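To make the resampling idea concrete, here is a tiny illustration (a sketch, reusing the weight distribution from Part One) of drawing one bootstrap sample with replacement:
original = weight.rvs(10)
bootstrap = numpy.random.choice(original, len(original), replace=True)
original.mean(), bootstrap.mean()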
def plot_resampled_stats(n=100):
sample = weight.rvs(n)
resampler = Resampler(sample, xlim=[55, 95])
resampler.plot_sample_stats()
Explanation: The following function instantiates a Resampler and runs it.
End of explanation
plot_resampled_stats(100)
Explanation: Here's a test run with n=100
End of explanation
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_resampled_stats, n=slider, xlim=fixed([1, 15]))
None
Explanation: Now we can use plot_resampled_stats in an interaction:
End of explanation
# Solution goes here
class StdResampler(Resampler):
Computes the sampling distribution of the standard deviation.
def sample_stat(self, sample):
Computes a sample statistic using the original sample or a
simulated sample.
return sample.std()
Explanation: Exercise 2: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.
End of explanation
def plot_resampled_stats(n=100):
sample = weight.rvs(n)
resampler = StdResampler(sample, xlim=[0, 100])
resampler.plot_sample_stats()
plot_resampled_stats()
Explanation: Test your code using the cell below:
End of explanation
slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_resampled_stats, n=slider)
None
Explanation: When your StdResampler is working, you should be able to interact with it:
End of explanation
female_weight = scipy.stats.lognorm(0.23, 0, 70.8)
female_weight.mean(), female_weight.std()
Explanation: STOP HERE
We will regroup and discuss before going on.
Part Three
We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data):
End of explanation
male_weight = scipy.stats.lognorm(0.20, 0, 87.3)
male_weight.mean(), male_weight.std()
Explanation: And here's the men's distribution:
End of explanation
female_sample = female_weight.rvs(100)
male_sample = male_weight.rvs(100)
Explanation: I'll simulate a sample of 100 men and 100 women:
End of explanation
male_sample.mean() - female_sample.mean()
Explanation: The difference in means should be about 17 kg, but will vary from one random sample to the next:
End of explanation
def CohenEffectSize(group1, group2):
Compute Cohen's d.
group1: Series or NumPy array
group2: Series or NumPy array
returns: float
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
Explanation: Here's the function that computes Cohen's $d$ again:
End of explanation
CohenEffectSize(male_sample, female_sample)
Explanation: The difference in weight between men and women is about 1 standard deviation:
End of explanation
class CohenResampler(Resampler):
def __init__(self, group1, group2, xlim=None):
self.group1 = group1
self.group2 = group2
self.xlim = xlim
def resample(self):
group1 = numpy.random.choice(self.group1, len(self.group1), replace=True)
group2 = numpy.random.choice(self.group2, len(self.group2), replace=True)
return group1, group2
def sample_stat(self, groups):
group1, group2 = groups
return CohenEffectSize(group1, group2)
# NOTE: The following functions are the same as the ones in Resampler,
# so I could just inherit them, but I'm including them for readability
def compute_sample_statistics(self, iters=1000):
stats = [self.sample_stat(self.resample()) for i in range(iters)]
return numpy.array(stats)
def plot_sample_stats(self):
sample_stats = self.compute_sample_statistics()
summarize_sampling_distribution(sample_stats)
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(self.xlim)
Explanation: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
End of explanation
resampler = CohenResampler(male_sample, female_sample)
resampler.plot_sample_stats()
Explanation: Now we can instantiate a CohenResampler and plot the sampling distribution.
End of explanation |
6,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Text Classification using Native Tensorflow Pre-processing</h1>
This notebook continues from <a href="text_classification.ipynb">text_classification.ipynb</a> -- in particular, we assume that you have run the "Create Dataset from BigQuery" and "Rerun with Pre-trained embedding" sections of that notebook.
Step1: Pre-requisites
Ensure you have the data files and pre-trained embedding file in your GCS bucket
Step2: If you don't, go back and run the <a href="text_classification.ipynb">text_classification</a> notebook first. Specifically, run the "Create Dataset from BigQuery" and "Rerun with Pre-trained embedding" sections of that notebook.
Native Tensorflow Predictions
Why Native?
Up until now we've been using python functions to do our data pre-processing. This is fine during training, but during serving it adds the limitation that we need a python client in the prediction pipeline.
This limits your serving flexibility. For example, let's say you want to be able to serve this model locally (offline) on a mobile phone. How would you do it? It's non-trivial to execute python code on Android. Plus if your vocabulary tokenization ever changed you'd need to update all the clients or else they would be out of sync with the model.
A better way would be to have all of our serving pre-processing done using native Tensorflow operations. This way we can take advantage of Tensorflow's hardware-agnostic execution engine, and leverage the engineering efforts the Tensorflow team put into making sure our code works whether we're running on a server, mobile, or an embedded device!
TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>
Step3: Deploy trained model
Step4: Sample prediction instances
Here are some actual hacker news headlines gathered from July 2018. These titles were not part of the training or evaluation datasets.
Step5: Get Predictions
Note how we can now feed the titles directly to the model! No need for the client to load a vocabulary tokenization mapping. The text tokenization is done for us inside of the Tensorflow model's serving_input_fn. | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
if 'COLAB_GPU' in os.environ: # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached
from google.colab import auth
auth.authenticate_user()
# download "sidecar files" since on Colab, this notebook will be on Drive
!rm -rf txtclsmodel
!git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst
!mv training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ .
!rm -rf training-data-analyst
# downgrade TensorFlow to the version this notebook has been tested with
!pip install --upgrade tensorflow==$TFVERSION
import tensorflow as tf
print(tf.__version__)
Explanation: <h1> Text Classification using Native Tensorflow Pre-processing</h1>
This notebook continues from <a href="text_classification.ipynb">text_classification.ipynb</a> -- in particular, we assume that you have run the "Create Dataset from BigQuery" and "Rerun with Pre-trained embedding" sections of that notebook.
End of explanation
%%bash
gsutil ls gs://$BUCKET/txtcls/eval.tsv
gsutil ls gs://$BUCKET/txtcls/train.tsv
gsutil ls gs://$BUCKET/txtcls/glove.6B.200d.txt
Explanation: Pre-requisites
Ensure you have the data files and pre-trained embedding file in your GCS bucket:
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/txtcls/trained_finetune_native
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
--job-dir=$OUTDIR \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_data_path=gs://${BUCKET}/txtcls/train.tsv \
--eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \
--embedding_path=gs://${BUCKET}/txtcls/glove.6B.200d.txt \
--native \
--num_epochs=5
Explanation: If you don't, go back and run the <a href="text_classification.ipynb">text_classification</a> notebook first. Specifically, run the "Create Dataset from BigQuery" and "Rerun with Pre-trained embedding" sections of that notebook.
Native Tensorflow Predictions
Why Native?
Up until now we've been using python functions to do our data pre-processing. This is fine during training, but during serving it adds the limitation that we need a python client in the prediction pipeline.
This limits your serving flexibility. For example, let's say you want to be able to serve this model locally (offline) on a mobile phone. How would you do it? It's non-trivial to execute python code on Android. Plus if your vocabulary tokenization ever changed you'd need to update all the clients or else they would be out of sync with the model.
A better way would be to have all of our serving pre-processing done using native Tensorflow operations. This way we can take advantage of Tensorflow's hardware-agnostic execution engine, and leverage the engineering efforts the Tensorflow team put into making sure our code works whether we're running on a server, mobile, or an embedded device!
TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>: model_native.py contains the TensorFlow model and task.py parses command line arguments and launches off the training job.
In particular look for the following:
tf.keras.preprocessing.text.Tokenizer.fit_on_texts() to generate a mapping from our word vocabulary to integers
tf.gfile to write the vocabulary mapping to disk
tf.contrib.lookup.index_table_from_file() to encode our sentences into a tensor of their respective word-integers, based on the vocabulary mapping written to disk in the previous step
tf.pad to pad all sequences to be the same length
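A rough sketch of how those ops can fit together to tokenize raw titles inside the graph (this is not the actual model_native.py; PADWORD, MAX_LEN and vocab.tsv are assumed names used only for illustration):
import tensorflow as tf
PADWORD = 'ZYXW'          # assumed padding token
MAX_LEN = 26              # assumed maximum sequence length
VOCAB_FILE = 'vocab.tsv'  # assumed path of the vocabulary written with tf.gfile at training time
def tokenize_titles(titles):
    # titles: 1-D string tensor, one headline per element
    words = tf.string_split(titles)                                  # ragged words as a SparseTensor
    dense = tf.sparse_tensor_to_dense(words, default_value=PADWORD)  # densify, padding short rows
    table = tf.contrib.lookup.index_table_from_file(
        vocabulary_file=VOCAB_FILE, num_oov_buckets=1)
    numbers = table.lookup(dense)                                    # word -> integer id
    padded = tf.pad(numbers, tf.constant([[0, 0], [0, MAX_LEN]]))    # pad each row on the right
    return tf.slice(padded, [0, 0], [-1, MAX_LEN])                   # crop to a fixed length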
Train on the Cloud
Note the new --native parameter. This tells task.py to call model_native.py instead of model.py
End of explanation
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="v1_finetune_native"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/txtcls/trained_finetune_native/export/exporter/ | tail -1)
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME} --quiet
#gcloud ml-engine models delete ${MODEL_NAME}
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: Deploy trained model
End of explanation
techcrunch=[
'Uber shuts down self-driving trucks unit',
'Grover raises €37M Series A to offer latest tech products as a subscription',
'Tech companies can now bid on the Pentagon’s $10B cloud contract'
]
nytimes=[
'‘Lopping,’ ‘Tips’ and the ‘Z-List’: Bias Lawsuit Explores Harvard’s Admissions',
'A $3B Plan to Turn Hoover Dam into a Giant Battery',
'A MeToo Reckoning in China’s Workplace Amid Wave of Accusations'
]
github=[
'Show HN: Moon – 3kb JavaScript UI compiler',
'Show HN: Hello, a CLI tool for managing social media',
'Firefox Nightly added support for time-travel debugging'
]
Explanation: Sample prediction instances
Here are some actual hacker news headlines gathered from July 2018. These titles were not part of the training or evaluation datasets.
End of explanation
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
# JSON format the requests
requests = (techcrunch+nytimes+github)
requests = [x.lower() for x in requests]
request_data = {'instances': requests}
# Authenticate and call CMLE prediction API
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'txtcls', 'v1_finetune_native')
response = api.projects().predict(body=request_data, name=parent).execute()
# Format and print response
for i in range(len(requests)):
print('\n{}'.format(requests[i]))
print(' github : {}'.format(response['predictions'][i]['dense_1'][0]))
print(' nytimes : {}'.format(response['predictions'][i]['dense_1'][1]))
print(' techcrunch: {}'.format(response['predictions'][i]['dense_1'][2]))
Explanation: Get Predictions
Note how we can now feed the titles directly to the model! No need for the client to load a vocabulary tokenization mapping. The text tokenization is done for us inside of the Tensorflow model's serving_input_fn.
End of explanation |
6,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
%run ../linked_list/linked_list.py
%load ../linked_list/linked_list.py
class MyLinkedList(LinkedList):
def find_loop_start(self):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Find the start of a linked list loop.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Is this a singly linked list?
Yes
Can we assume we are always passed a circular linked list?
No
Can we assume we already have a linked list class that can be used for this problem?
Yes
Test Cases
Empty list -> None
Not a circular linked list -> None
One element
Two or more elements
Circular linked list general case
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
# %load test_find_loop_start.py
from nose.tools import assert_equal
class TestFindLoopStart(object):
def test_find_loop_start(self):
print('Test: Empty list')
linked_list = MyLinkedList()
assert_equal(linked_list.find_loop_start(), None)
print('Test: Not a circular linked list: One element')
head = Node(1)
linked_list = MyLinkedList(head)
assert_equal(linked_list.find_loop_start(), None)
print('Test: Not a circular linked list: Two elements')
linked_list.append(2)
assert_equal(linked_list.find_loop_start(), None)
print('Test: Not a circular linked list: Three or more elements')
linked_list.append(3)
assert_equal(linked_list.find_loop_start(), None)
print('Test: General case: Circular linked list')
node10 = Node(10)
node9 = Node(9, node10)
node8 = Node(8, node9)
node7 = Node(7, node8)
node6 = Node(6, node7)
node5 = Node(5, node6)
node4 = Node(4, node5)
node3 = Node(3, node4)
node2 = Node(2, node3)
node1 = Node(1, node2)
node0 = Node(0, node1)
node10.next = node3
linked_list = MyLinkedList(node0)
assert_equal(linked_list.find_loop_start(), 3)
print('Success: test_find_loop_start')
def main():
test = TestFindLoopStart()
test.test_find_loop_start()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
6,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Numpy
<img src="images/numpylogo.svg" alt="matplotlib" style="width: 400px;"/>
Step1: A numpy array
A numpy array is a grid of values, all of the same type. The number of dimensions gives the rank of the array.
To initialize a 1D array, we will do
Step2: To call or change an element in the array, we can apply similar operations as with a list
Step3: Universal functions (ufunc)
A universal function (or ufunc for short) is a function that operates on numpy ndarrays in an element-by-element fashion that has been written in compiled C code. That is, a ufunc is a "vectorized" wrapper in high performance code.
Step4: Initialize arrays
Step5: Pseudo-random number generators
Step6: Array indexing | Python Code:
import numpy as np
Explanation: Python Numpy
<img src="images/numpylogo.svg" alt="matplotlib" style="width: 400px;"/>
Numpy is a numerical package used extensively in python coding. You can install the numpy package with
pip install numpy
When you import a module, you can choose to bind an alias to the package. In python communities, we usually import the numpy module like this:
<a href="https://colab.research.google.com/github/ryan-leung/PHYS4650_Python_Tutorial/blob/master/notebooks/03-Python-Numpy-Array.ipynb"><img align="right" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory">
</a>
End of explanation
# 1D array
a = np.array([2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97])
print(a.shape) # Return the ``shape`` of the array
print(a)
# 2D array
b = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(b.shape) # Return the ``shape`` of the array
print(b)
Explanation: A numpy array
A numpy array is a grid of values, all of the same type. The number of dimensions gives the rank of the array.
To initialize a 1D array, we will do:
End of explanation
print(a[0], a[3], a[5])
print(b[0,0],b[1,1],b[2,2])
Explanation: To call or change an element in the array, we can apply similar operations as with a list
End of explanation
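Elements can also be modified in place with the same indexing syntax, for example:
a[0] = 101     # overwrite the first element of the 1D array
b[1, 1] = 0    # overwrite the middle element of the 2D array
print(a[:3])
print(b)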
x = range(100000)
sum(x)
y = np.array(x)
np.sum(y)
%timeit sum(x)
%timeit np.sum(y)
Explanation: Universal functions (ufunc)
A universal function (or ufunc for short) is a function that operates on numpy ndarrays in an element-by-element fashion that has been written in compiled C code. That is, a ufunc is a "vectorized" wrapper in high performance code.
End of explanation
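For example, mathematical ufuncs such as np.sqrt and np.add operate on every element at once:
# element-wise square root and addition on the first five entries of y
np.sqrt(y[:5]), np.add(y[:5], 100)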
# Create an array of all zeros
np.zeros((5, 5))
# Create an array of all ones
np.ones((5,5))
# Create a constant array
np.ones((5,5)) * 7
np.full((5,5), 7)
# Create a 3x3 identity matrix
np.eye(3)
Explanation: Initialize arrays
End of explanation
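Two other commonly used constructors (not shown above) are np.arange and np.linspace:
# evenly spaced values, by step size and by number of points
np.arange(0, 10, 2)
np.linspace(0, 1, 5)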
# Create an array filled with random values from 0 to 1
np.random.random((3,2))
# Create a 1D array of 10 random integers from 1 (inclusive) to 1000 (exclusive)
np.random.randint(1, 1000, 10)
# Seeding the random number generator will always give you the same "random" numbers on the next run
# We put the answer to the Ultimate Question of Life, the Universe, and Everything as the seed integer
np.random.seed(42)
# Create a 1D array of 10 random integers from 1 (inclusive) to 1000 (exclusive)
np.random.randint(1, 1000, 10)
Explanation: Pseudo-random number generators
End of explanation
### Slicing
e = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
e
# Each dimension is sliced like a list; here it means: 1st dimension, select all;
# 2nd dimension, select from index one to the end.
e[:,1:]
# Here both the 1st and 2nd dimensions select from the start up to (but not including) index 2,
# i.e. the upper-left part of the array.
e[:2,:2]
# Comparing a numpy array to a value returns a boolean array
e > 5
# Indexing the array with that boolean array selects all elements that satisfy the condition
e[e > 5]
Explanation: Array indexing
End of explanation |
6,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Functions Test
For this test, you should use the built-in functions to be able to write the requested functions in one line.
Problem 1
Use map to create a function which finds the length of each word in the phrase
(broken by spaces) and return the values in a list.
The function will have an input of a string, and output a list of integers.
Step1: Problem 2
Use reduce to take a list of digits and return the number that they
correspond to. Do not convert the integers to strings!
Step2: Problem 3
Use filter to return the words from a list of words which start with a target letter.
Step3: Problem 4
Use zip and list comprehension to return a list of the same length where each value is the two strings from
L1 and L2 concatenated together with connector between them. Look at the example output below
Step4: Problem 5
Use enumerate and other skills to return a dictionary which has the values of the list as keys and the index as the value. You may assume that a value will only appear once in the given list.
Step5: Problem 6
Use enumerate and other skills from above to return the count of the number of items in the list whose value equals its index. | Python Code:
def word_lengths(phrase):
# return map(lambda word: len(word), [word for word in phrase.split()])
return map(lambda word: len(word), phrase.split())
word_lengths('How long are the words in this phrase')
Explanation: Advanced Functions Test
For this test, you should use the built-in functions to be able to write the requested functions in one line.
Problem 1
Use map to create a function which finds the length of each word in the phrase
(broken by spaces) and return the values in a list.
The function will have an input of a string, and output a list of integers.
End of explanation
def digits_to_num(digits):
return reduce(lambda x,y: 10*x+y, digits)
digits_to_num([3,4,3,2,1])
Explanation: Problem 2
Use reduce to take a list of digits and return the number that they
correspond to. Do not convert the integers to strings!
End of explanation
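To see why the lambda works, here is how the fold unrolls for [3,4,3,2,1] (reduce is assumed to be the Python 2 builtin used above; in Python 3 it lives in functools):
# ((((3*10+4)*10+3)*10+2)*10+1) == 34321
reduce(lambda x, y: 10*x + y, [3, 4, 3, 2, 1])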
def filter_words(word_list, letter):
# first attempt
#return filter(lambda x: x == letter, [word[0] for word in word_list])
return filter(lambda x: x[0] == letter, [word for word in word_list])
# note: this is redundant; the following would do just as well
# return filter(lambda word: word[0]==letter,word_list)
l = ['hello','are','cat','dog','ham','hi','go','to','heart']
filter_words(l,'h')
Explanation: Problem 3
Use filter to return the words from a list of words which start with a target letter.
End of explanation
def concatenate(L1, L2, connector):
return [x+connector+y for x,y in zip(L1,L2)]
concatenate(['A','B'],['a','b'],'-')
Explanation: Problem 4
Use zip and list comprehension to return a list of the same length where each value is the two strings from
L1 and L2 concatenated together with connector between them. Look at the example output below:
End of explanation
def d_list(L):
dout = {}
for item,val in enumerate(L):
dout[val] = item
return dout
# first attempt return list
#return [{val:item} for item,val in enumerate(L)]
# or simply
# return {key:value for value,key in enumerate(L)}
# use of dictionary comprehension is surprising
d_list(['a','b','c'])
Explanation: Problem 5
Use enumerate and other skills to return a dictionary which has the values of the list as keys and the index as the value. You may assume that a value will only appear once in the given list.
End of explanation
def count_match_index(L):
return len(filter(lambda x: x[0]==x[1], [pair for pair in enumerate(L)]))
# attempts
# return [pair for pair in enumerate(L)]
#return filter(lambda[pair for pair in enumerate(L)])
# portilla's answer
# return len([num for count,num in enumerate(L) if num == count])
count_match_index([0,2,2,1,5,5,6,10])
Explanation: Problem 6
Use enumerate and other skills from above to return the count of the number of items in the list whose value equals its index.
End of explanation |
6,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM5-0
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
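As an illustration only, the author and contributor calls described in the two cells above might be filled in as follows; the names and e-mail addresses are placeholders, not real document metadata.
# Illustrative placeholders only -- replace with the real document metadata
DOC.set_author("Jane Doe", "jane.doe@example.org")
DOC.set_contributor("John Smith", "john.smith@example.org")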
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
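For free-text (STRING) properties such as this one, the value is entered as a single string. The text below is a hypothetical placeholder, not a description of INM-CM5-0.
# Hypothetical placeholder text -- replace with the actual model overview
DOC.set_value("Land surface scheme with multi-layer soil hydrology and a single-layer snow model.")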
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
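For multi-valued ENUM properties (Cardinality 0.N or 1.N), one option is to call DOC.set_value once per selected choice, as sketched below. The selections are illustrative only, and the repeated-call pattern is an assumption about the pyesdoc notebook API rather than something stated in this notebook.
# Illustrative selections only -- assumes repeated set_value calls accumulate
# values for a multi-valued (0.N) ENUM property
DOC.set_value("water")
DOC.set_value("energy")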
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
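BOOLEAN properties take an unquoted True or False; the choice below is illustrative only.
# Illustrative value only -- replace with the correct setting for this model
DOC.set_value(True)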
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
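INTEGER properties take a bare number (no quotes); the value below is a hypothetical placeholder, not the actual INM-CM5-0 time step.
# Hypothetical placeholder value in seconds -- replace with the real time step
DOC.set_value(3600)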
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify the functions that the snow albedo depends on*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
6,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualizations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science
Step1: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network
Step6: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Step7: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
Step8: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this
Step9: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
Step10: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. When running on a GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.
Step11: Lower layers produce features of lower complexity.
Step12: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
Step13: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
Step14: Let's load some image and populate it with DogSlugs (in case you've missed them).
Step15: Note that results can differ from Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works | Python Code:
# boilerplate code
import os
from cStringIO import StringIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
Explanation: DeepDreaming with TensorFlow
Loading and displaying the model graph
Naive feature visualization
Multiscale image generation
Laplacian Pyramid Gradient Normalization
Playing with feature visualizations
DeepDream
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see GoogLeNet and VGG16 galleries)
embed TensorBoard graph visualizations into Jupyter notebooks
produce high-resolution images with tiled computation (example)
use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the GoogLeNet architecture, trained to classify images into one of 1000 categories of the ImageNet dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow making these visualizations both efficient to generate and even beautiful. Impatient readers can start by exploring the full galleries of images generated by the method described here for GoogLeNet and VGG16 architectures.
End of explanation
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
graph_def = tf.GraphDef.FromString(open(model_fn).read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
Explanation: <a id='loading'></a>
Loading and displaying the model graph
The pretrained network can be downloaded here. Unpack the tensorflow_inception_graph.pb file from the archive and set its path to model_fn variable. Alternatively you can uncomment and run the following cell to download the network:
End of explanation
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print 'Number of layers', len(layers)
print 'Total number of feature channels:', sum(feature_nums)
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
Strip large constant values from graph_def.
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
Visualize TensorFlow graph.
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code =
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe =
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
.format(code.replace('"', '&quot;'))
display(HTML(iframe))
# Visualizing the network graph. Be sure expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
Explanation: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
End of explanation
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in xrange(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print score,
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
Explanation: <a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
End of explanation
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = map(tf.placeholder, argtypes)
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in xrange(0, max(h-sz//2, sz),sz):
for x in xrange(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in xrange(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in xrange(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print '.',
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
Explanation: <a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
End of explanation
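To see why the random shifts used in calc_grad_tiled do not distort the result, here is a small standalone numpy check (not part of the original notebook): rolling the array, applying a per-pixel operation, and rolling back leaves every value at its original position.
import numpy as np
a = np.arange(16.0).reshape(4, 4)
sx, sy = 1, 2                                   # an arbitrary shift, as in calc_grad_tiled
shifted = np.roll(np.roll(a, sx, 1), sy, 0)     # shift the "image"
g = shifted * 2.0                               # stand-in for a per-pixel gradient computation
g_back = np.roll(np.roll(g, -sx, 1), -sy, 0)    # undo the shift
print(np.allclose(g_back, a * 2.0))             # True: positions are preserved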
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in xrange(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = map(normalize_std, tlevels)
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in xrange(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in xrange(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print '.',
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
Explanation: <a id="laplacian"></a>
Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the Laplacian pyramid decomposition. We call the resulting technique Laplacian Pyramid Gradient Normalization.
End of explanation
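The low/high frequency split behind the pyramid is easy to check in isolation: a blur extracts the low frequencies, the residual holds the high frequencies, and the two bands sum back to the original image. A standalone sketch (using scipy's Gaussian blur as a stand-in for the 5x5 kernel above):
import numpy as np
from scipy.ndimage import gaussian_filter
img = np.random.rand(64, 64).astype(np.float32)
lo = gaussian_filter(img, sigma=2.0)   # low-frequency component
hi = img - lo                          # high-frequency residual
print(np.allclose(lo + hi, img))       # True: the split is lossless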
render_lapnorm(T(layer)[:,:,:,65])
Explanation: <a id="playing"></a>
Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. When running on a GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.
End of explanation
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
Explanation: Lower layers produce features of lower complexity.
End of explanation
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
Explanation: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
End of explanation
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in xrange(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in xrange(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in xrange(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print '.',
clear_output()
showarray(img/255.0)
Explanation: <a id="deepdream"></a>
DeepDream
Now let's reproduce the DeepDream algorithm with TensorFlow.
End of explanation
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
Explanation: Let's load some image and populate it with DogSlugs (in case you've missed them).
End of explanation
render_deepdream(T(layer)[:,:,:,139], img0)
Explanation: Note that results can differ from Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
End of explanation |
6,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
When writing production-standard code, your program must be tested at many different levels.
For now, let us just talk about the lowest level of tests, called unit tests. Being the lowest level doesn't mean unit tests are insignificant; it's quite the opposite. These tests make sure that your code is tested at the atomic level. Only once all unit tests pass can you move on to the other types of tests in the pyramid.
General guidelines for unit tests
Step2: You can go to the unittest docs if you're interested in knowing more about this module. We won't be focusing on it because knowing PyTest is important.
Doctest
This module searches for pieces of text that resemble interactive Python sessions in docstrings, and then executes those lines of code to make sure they work. Mostly doctests are simple examples to give an idea of what the function is supposed to do.
The main use of doctests is to improve the documentation of the module by showing some main use cases of the module and its components | Python Code:
import unittest
def cube(x):
return x ** 3
def square(x):
return x**2
def add(x, y):
return x + y
class CalcTest(unittest.TestCase):
def test_square(self):
self.assertTrue(square(3) == 9)
self.assertFalse(square(1) == 2)
with self.assertRaises(TypeError):
cube("Lite")
def test_add(self):
self.assertTrue(add(3, 4) == 9)
def test_cube(self):
self.assertEqual(cube(3), 27)
unittest.main(argv=['first-arg-is-ignored'], exit=False, verbosity=2)
Explanation: Introduction
When writing production-standard code, your program must be tested at many different levels.
For now, let us just talk about the lowest level of tests, called unit tests. Being the lowest level doesn't mean unit tests are insignificant; it's quite the opposite. These tests make sure that your code is tested at the atomic level. Only once all unit tests pass can you move on to the other types of tests in the pyramid.
General guidelines for unit tests:
Each test should focus on an atomic piece of functionality.
A unit test should only test, and never change, the data that it is testing.
Unit tests should always be independent: within a test file, the order in which the tests run should not matter.
Use descriptive names for tester functions. This is because other people will need to look over the tests you wrote, modify them, or add more unit tests.
Various types of unittests in Python
Unittest
unittest was the most frequently used unit testing module at one time. You define your own classes which subclass the unittest.TestCase superclass.
End of explanation
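Outside a notebook, the same test class would normally live in its own file and be run with the built-in test runner. A sketch, where test_calc.py is a hypothetical file containing CalcTest:
# From a terminal:
#   python -m unittest -v test_calc
# Or let unittest discover every test_*.py file in the project:
#   python -m unittest discover -v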
import doctest
def concat(*words, sep=" "):
Return a sentence from input words
separated by a separator(default being space)
>>> concat('a','b','c', 'd')
'a b c d'
>>> concat('a','1')
'b 1'
return sep.join(words)
doctest.testmod()
Explanation: You can go to the unittest docs if you're interested in knowing more about this module. We won't be focusing on it because knowing PyTest is important.
Doctest
This module searches for pieces of text that resemble interactive Python sessions in docstrings, and then executes those lines of code to make sure they work. Mostly doctests are simple examples to give an idea of what the function is supposed to do.
The main use of doctests is to improve the documentation of the module by showing some main use cases of the module and its components
End of explanation |
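Since PyTest is called out as the module worth knowing, here is a minimal sketch (not from the original notebook, and assuming pytest is installed) of the same kind of checks in pytest style: tests are plain functions using bare assert, and running the pytest command collects them automatically.
import pytest

def square(x):
    return x ** 2

def test_square():
    assert square(3) == 9

def test_square_rejects_strings():
    with pytest.raises(TypeError):
        square("Lite")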
6,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LSTM Time Series Example
This tutorial is based on Time Series Forecasting with the Long Short-Term Memory Network in Python by Jason Brownlee.
Part 1 - Data Prep
Before we get into the example, let's look at some visitor data from Yellowstone National Park.
Step1: The park's recreational visits are highly seasonal, with the peak season in July. The park tracks monthly averages from the last four years on its website. A simple approach to predicting the next year's visitors is to use these averages.
Step2: ## Monthly Average Accuracy
Before this example uses Keras to predict visitors, we'll measure the monthly average method's root mean squared error. While the monthly averages aren't completely accurate, this method is very simple and explainable. | Python Code:
# load and plot dataset
from pandas import read_csv
from pandas import datetime
from matplotlib import pyplot
# load dataset
def parser(x):
return datetime.strptime(x, '%Y-%m-%d')
series = read_csv('../data/yellowstone-visitors.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
# summarize first few rows
print(series.head())
# line plot
series.plot()
pyplot.show()
Explanation: LSTM Time Series Example
This tutorial is based on Time Series Forecasting with the Long Short-Term Memory Network in Python by Jason Brownlee.
Part 1 - Data Prep
Before we get into the example, let's look at some visitor data from Yellowstone National Park.
End of explanation
prev_4_years = series[-60:-12]
last_year = series[-12:]  # the most recent 12 months
pred = prev_4_years.groupby(by=prev_4_years.index.month).mean()
pred.plot()
act = last_year.groupby(by=last_year.index.month).mean()
act.plot()
pyplot.show()
Explanation: The park's recreational visits are highly seasonal, with the peak season in July. The park tracks monthly averages from the last four years on its website. A simple approach to predicting the next year's visitors is to use these averages.
End of explanation
from math import sqrt
from sklearn.metrics import mean_squared_error
rmse = sqrt(mean_squared_error(act, pred))
print('Test RMSE: %.3f' % rmse)
Explanation: ## Monthly Average Accuracy
Before this example uses Keras to predict visitors, we'll measure the monthly average method's root mean squared error. While the monthly averages aren't completely accurate, this method is very simple and explainable.
End of explanation |
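For intuition, the RMSE reported above is just the square root of the mean squared difference between predicted and actual values; a quick hand-rolled check with made-up numbers (not park data):
import numpy as np
pred_demo = np.array([100.0, 200.0, 300.0])
act_demo = np.array([110.0, 190.0, 290.0])
print(np.sqrt(np.mean((pred_demo - act_demo) ** 2)))  # 10.0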
6,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Posting files to the web server
Using the requests library
Step1: The success/error messages from the server are stored in the response
Step2: To send multiple files to the web service, just add them to the list all under the name data_file[]
(this is the name of the field, not to be confused with the filename.) | Python Code:
import requests
# define urls for token generation and file upload
upload_token_url = 'http://ciwsdbs.uwrl.usu.edu/auth'
upload_url = 'http://ciwsdbs.uwrl.usu.edu/data-api'
client_passcode = 'XhTVtPjQWyw64awm7td+3ygiIpLDkE3uBaHSc7Yz/AA='
# store file and filename for server request
data_file = open('series_data.csv', 'rb')
files = [('data_file[]', data_file), ]
filenames = ['series_data.csv', ]
# make requests
upload_token = requests.post(upload_token_url, data={'token': client_passcode, 'filenames': filenames})
upload_response = requests.post(upload_url, headers={'Authorization': f'Bearer {upload_token.text}'}, files=files)
data_file.close()
Explanation: Posting files to the web server
Using the requests library:
End of explanation
print(upload_response.text)
Explanation: The success/error messages from the server are stored in the response:
End of explanation
files = [
('data_file[]', open('series_data1.csv', 'rb')),
('data_file[]', open('series_data2.csv', 'rb')),
('data_file[]', open('series_data3.csv', 'rb')), ]
filenames = ['series_data1.csv', 'series_data2.csv', 'series_data3.csv', ]
Explanation: To send multiple files to the web service, just add them to the list all under the name data_file[]
(this is the name of the field, not to be confused with the filename.)
End of explanation |
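The request itself looks the same as in the single-file case; a sketch that simply reuses the token and upload endpoints defined above, then closes each file handle:
upload_token = requests.post(upload_token_url, data={'token': client_passcode, 'filenames': filenames})
upload_response = requests.post(upload_url, headers={'Authorization': f'Bearer {upload_token.text}'}, files=files)
print(upload_response.text)
# close the file handles once the upload is done
for _, handle in files:
    handle.close()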
6,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Build, train and evaluate models with TensorFlow Decision Forests
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Wurlitzer is needed to display the detailed training logs in Colabs (when using verbose=2 in the model constructor).
Step3: Importing libraries
Step4: The hidden code cell limits the output height in colab.
Step5: Training a Random Forest model
In this section, we train, evaluate, analyse and export a binary classification Random Forest trained on the Palmer's Penguins dataset.
<center>
<img src="https
Step6: The dataset contains a mix of numerical (e.g. bill_depth_mm), categorical
(e.g. island) and missing features. TF-DF supports all these feature types natively (unlike NN-based models), so there is no need for preprocessing in the form of one-hot encoding, normalization or an extra is_present feature.
Labels are a bit different
Step8: Next split the dataset into training and testing
Step9: And finally, convert the pandas dataframe (pd.Dataframe) into tensorflow datasets (tf.data.Dataset)
Step10: Notes
Step11: Remarks
No input features are specified. Therefore, all the columns will be used as
input features except for the label. The features used by the model are shown
in the training logs and in the model.summary().
DFs consume natively numerical, categorical, categorical-set features and
missing-values. Numerical features do not need to be normalized. Categorical
string values do not need to be encoded in a dictionary.
No training hyper-parameters are specified. Therefore the default
hyper-parameters will be used. Default hyper-parameters provide
reasonable results in most situations.
Calling compile on the model before the fit is optional. Compile can be
used to provide extra evaluation metrics.
Training algorithms do not need validation datasets. If a validation dataset
is provided, it will only be used to show metrics.
Add a verbose argument to RandomForestModel to control the amount of
displayed training logs. Set verbose=0 to hide most of the logs. Set
verbose=2 to show all the logs.
Note
Step12: Remark
Step13: Plot the model
Plotting a decision tree and following the first branches helps in learning about decision forests. In some cases, plotting a model can even be used for debugging.
Because of the difference in the way they are trained, some models are more interesting to plot than others. Because of the noise injected during training and the depth of the trees, plotting a Random Forest is less informative than plotting a CART or the first tree of a Gradient Boosted Tree.
Nevertheless, let's plot the first tree of our Random Forest model
Step14: The root node on the left contains the first condition (bill_depth_mm >= 16.55), number of examples (240) and label distribution (the red-blue-green bar).
Examples that evaluate to true for bill_depth_mm >= 16.55 are branched to the green path. The others are branched to the red path.
The deeper the node, the purer it becomes, i.e. the label distribution is biased toward a subset of classes.
Note
Step15: The information in the summary is all available programmatically using the model inspector
Step16: The content of the summary and the inspector depends on the learning algorithm (tfdf.keras.RandomForestModel in this case) and its hyper-parameters (e.g. compute_oob_variable_importances=True will trigger the computation of Out-of-bag variable importances for the Random Forest learner).
Model Self Evaluation
During training TFDF models can self evaluate even if no validation dataset is provided to the fit() method. The exact logic depends on the model. For example, Random Forest will use Out-of-bag evaluation while Gradient Boosted Trees will use internal train-validation.
Note
Step17: Plotting the training logs
The training logs show the quality of the model (e.g. accuracy evaluated on the out-of-bag or validation dataset) according to the number of trees in the model. These logs are helpful to study the balance between model size and model quality.
The logs are available in multiple ways
Step18: Let's plot it
Step19: This dataset is small. You can see the model converging almost immediately.
Let's use TensorBoard
Step20: <!-- <img class="tfo-display-only-on-site" src="images/beginner_tensorboard.png"/> -->
Re-train the model with a different learning algorithm
The learning algorithm is defined by the model class. For
example, tfdf.keras.RandomForestModel() trains a Random Forest, while
tfdf.keras.GradientBoostedTreesModel() trains a Gradient Boosted Decision
Trees.
The learning algorithms are listed by calling tfdf.keras.get_all_models() or in the
learner list.
Step21: The description of the learning algorithms and their hyper-parameters are also available in the API reference and builtin help
Step22: Using a subset of features
The previous example did not specify the features, so all the columns were used
as input features (except for the label). The following example shows how to
specify input features.
Step23: Note
Step24: Note that year is in the list of CATEGORICAL features (unlike the first run).
Hyper-parameters
Hyper-parameters are parameters of the training algorithm that impact
the quality of the final model. They are specified in the model class
constructor. The list of hyper-parameters is visible with the question mark colab command (e.g. ?tfdf.keras.GradientBoostedTreesModel).
Alternatively, you can find them on the TensorFlow Decision Forest Github or the Yggdrasil Decision Forest documentation.
The default hyper-parameters of each algorithm approximately match those of the initial publication paper. To ensure consistency, new features and their matching hyper-parameters are always disabled by default. That's why it is a good idea to tune your hyper-parameters.
Step25: As new training methods are published and implemented, combinations of hyper-parameters can emerge as good or almost always better than the default parameters. To avoid changing the default hyper-parameter values, these good combinations are indexed and made available as hyper-parameter templates.
For example, the benchmark_rank1 template is the best combination on our internal benchmarks. Those templates are versioned to allow training configuration stability, e.g. benchmark_rank1@v1.
Step26: The available templates are listed by predefined_hyperparameters. Note that different learning algorithms have different templates, even if the name is similar.
Step27: Feature Preprocessing
Pre-processing features is sometimes necessary to consume signals with complex
structures, to regularize the model or to apply transfer learning.
Pre-processing can be done in one of three ways
Step28: The following example re-implements the same logic using TensorFlow Feature
Columns.
Step29: Training a regression model
The previous example trains a classification model (TF-DF does not differentiate
between binary classification and multi-class classification). In the next
example, train a regression model on the
Abalone dataset. The
objective of this dataset is to predict the number of shell rings of an
abalone.
Note
Step30: Training a ranking model
Finally, after having trained a classification model and a regression model, train a ranking model.
The goal of a ranking is to order items by importance. The "value" of
relevance does not matter directly. Ranking a set of documents with regard to
a user query is an example of a ranking problem
Step32: The dataset is stored as a .txt file in a specific format, so first convert it into a csv file.
Step33: In this dataset, the relevance defines the ground-truth rank among rows of the same group.
Step34: At this point, Keras does not provide any ranking metrics. Instead, the training and validation (a GBDT uses a validation dataset) are shown in the training
logs. In this case the loss is LAMBDA_MART_NDCG5, and the final (i.e. at
the end of the training) NDCG (normalized discounted cumulative gain) is 0.510136 (see line Final model valid-loss | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install tensorflow_decision_forests
Explanation: Build, train and evaluate models with TensorFlow Decision Forests
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/decision_forests/tutorials/beginner_colab"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/decision-forests/blob/main/documentation/tutorials/beginner_colab.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/decision-forests/blob/main/documentation/tutorials/beginner_colab.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/decision-forests/documentation/tutorials/beginner_colab.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Introduction
Decision Forests (DF) are a large family of Machine Learning algorithms for
supervised classification, regression and ranking. As the name suggests, DFs use
decision trees as a building block. Today, the two most popular DF training
algorithms are Random Forests and
Gradient Boosted Decision Trees. Both algorithms are ensemble techniques that use multiple decision trees, but differ on how they do it.
TensorFlow Decision Forests (TF-DF) is a library for the training,
evaluation, interpretation and inference of Decision Forest models.
In this tutorial, you will learn how to:
Train a binary classification Random Forest on a dataset containing numerical, categorical and missing features.
Evaluate the model on a test dataset.
Prepare the model for
TensorFlow Serving.
Examine the overall structure of the model and the importance of each feature.
Re-train the model with a different learning algorithm (Gradient Boosted Decision Trees).
Use a different set of input features.
Change the hyperparameters of the model.
Preprocess the features.
Train a model for regression.
Train a model for ranking.
Detailed documentation is available in the user manual.
The example directory contains other end-to-end examples.
Installing TensorFlow Decision Forests
Install TF-DF by running the following cell.
End of explanation
!pip install wurlitzer
Explanation: Wurlitzer is needed to display the detailed training logs in Colabs (when using verbose=2 in the model constructor).
End of explanation
import tensorflow_decision_forests as tfdf
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import math
Explanation: Importing libraries
End of explanation
#@title
from IPython.core.magic import register_line_magic
from IPython.display import Javascript
from IPython.display import display as ipy_display
# Some of the model training logs can cover the full
# screen if not compressed to a smaller viewport.
# This magic allows setting a max height for a cell.
@register_line_magic
def set_cell_height(size):
ipy_display(
Javascript("google.colab.output.setIframeHeight(0, true, {maxHeight: " +
str(size) + "})"))
# Check the version of TensorFlow Decision Forests
print("Found TensorFlow Decision Forests v" + tfdf.__version__)
Explanation: The hidden code cell limits the output height in colab.
End of explanation
# Download the dataset
!wget -q https://storage.googleapis.com/download.tensorflow.org/data/palmer_penguins/penguins.csv -O /tmp/penguins.csv
# Load a dataset into a Pandas Dataframe.
dataset_df = pd.read_csv("/tmp/penguins.csv")
# Display the first 3 examples.
dataset_df.head(3)
Explanation: Training a Random Forest model
In this section, we train, evaluate, analyse and export a binary classification Random Forest trained on the Palmer's Penguins dataset.
<center>
<img src="https://allisonhorst.github.io/palmerpenguins/man/figures/palmerpenguins.png" width="150"/></center>
Note: The dataset was exported to a csv file without pre-processing: library(palmerpenguins); write.csv(penguins, file="penguins.csv", quote=F, row.names=F).
Load the dataset and convert it in a tf.Dataset
This dataset is very small (300 examples) and stored as a .csv-like file. Therefore, use Pandas to load it.
Note: Pandas is practical as you don't have to type in the names of the input features to load them. For larger datasets (>1M examples), using the
TensorFlow Dataset to read the files may be better suited.
Let's assemble the dataset into a csv file (i.e. add the header), and load it:
End of explanation
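For the larger-than-memory case mentioned above, one option (a sketch, not part of the original tutorial) is to stream the csv directly with tf.data; the batch size is arbitrary here, and label handling and missing values would still need adapting before feeding a model:
# Sketch only - for this small dataset, the Pandas path used above is simpler.
streamed_ds = tf.data.experimental.make_csv_dataset(
    "/tmp/penguins.csv",
    batch_size=64,
    label_name="species",
    num_epochs=1)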
# Encode the categorical label into an integer.
#
# Details:
# This stage is necessary if your classification label is represented as a
# string. Note: Keras expects classification labels to be integers.
# Name of the label column.
label = "species"
classes = dataset_df[label].unique().tolist()
print(f"Label classes: {classes}")
dataset_df[label] = dataset_df[label].map(classes.index)
Explanation: The dataset contains a mix of numerical (e.g. bill_depth_mm), categorical
(e.g. island) and missing features. TF-DF supports all these feature types natively (unlike NN-based models), so there is no need for preprocessing in the form of one-hot encoding, normalization or an extra is_present feature.
Labels are a bit different: Keras metrics expect integers. The label (species) is stored as a string, so let's convert it into an integer.
End of explanation
# Split the dataset into a training and a testing dataset.
def split_dataset(dataset, test_ratio=0.30):
Splits a panda dataframe in two.
test_indices = np.random.rand(len(dataset)) < test_ratio
return dataset[~test_indices], dataset[test_indices]
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
Explanation: Next split the dataset into training and testing:
End of explanation
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label)
Explanation: And finally, convert the pandas dataframe (pd.Dataframe) into tensorflow datasets (tf.data.Dataset):
End of explanation
%set_cell_height 300
# Specify the model.
model_1 = tfdf.keras.RandomForestModel()
# Train the model.
model_1.fit(x=train_ds)
Explanation: Notes: pd_dataframe_to_tf_dataset could have converted the label to integer for you.
And, if you wanted to create the tf.data.Dataset yourself, there are a couple of things to remember:
The learning algorithms work with a one-epoch dataset and without shuffling.
The batch size does not impact the training algorithm, but a small value might slow down reading the dataset.
Train the model
End of explanation
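If you do want to assemble the tf.data.Dataset by hand as described in the notes above, a minimal sketch is shown below; it only mirrors part of what pd_dataframe_to_tf_dataset does, and assumes missing values and the label encoding are already handled:
# Sketch only - pd_dataframe_to_tf_dataset already takes care of this (and of missing values).
manual_train_ds = tf.data.Dataset.from_tensor_slices(
    (dict(train_ds_pd.drop(label, axis=1)), train_ds_pd[label].values)).batch(64)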
model_1.compile(metrics=["accuracy"])
evaluation = model_1.evaluate(test_ds, return_dict=True)
print()
for name, value in evaluation.items():
print(f"{name}: {value:.4f}")
Explanation: Remarks
No input features are specified. Therefore, all the columns will be used as
input features except for the label. The features used by the model are shown
in the training logs and in the model.summary().
DFs consume natively numerical, categorical, categorical-set features and
missing-values. Numerical features do not need to be normalized. Categorical
string values do not need to be encoded in a dictionary.
No training hyper-parameters are specified. Therefore the default
hyper-parameters will be used. Default hyper-parameters provide
reasonable results in most situations.
Calling compile on the model before the fit is optional. Compile can be
used to provide extra evaluation metrics.
Training algorithms do not need validation datasets. If a validation dataset
is provided, it will only be used to show metrics.
Add a verbose argument to RandomForestModel to control the amount of
displayed training logs. Set verbose=0 to hide most of the logs. Set
verbose=2 to show all the logs.
Note: A Categorical-Set feature is composed of a set of categorical values (while a Categorical is only one value). More details and examples are given later.
Evaluate the model
Let's evaluate our model on the test dataset.
End of explanation
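Beyond the aggregated metrics, you can also look at per-example predictions with the standard Keras call; for this 3-class problem each row should contain one probability per class, in the order of the classes list built earlier:
predictions = model_1.predict(test_ds)
print(predictions[:3])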
model_1.save("/tmp/my_saved_model")
Explanation: Remark: The test accuracy (0.86514) is close to the Out-of-bag accuracy
(0.8672) shown in the training logs.
See the Model Self Evaluation section below for more evaluation methods.
Prepare this model for TensorFlow Serving.
Export the model to the SavedModel format for later re-use e.g.
TensorFlow Serving.
End of explanation
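To sanity-check the export, the SavedModel can be loaded back for inference. A sketch: loading it with the standard Keras loader should work because TF-DF models are Keras models (as long as tensorflow_decision_forests is imported), but treat this as an assumption rather than part of the tutorial.
loaded_model_1 = tf.keras.models.load_model("/tmp/my_saved_model")
print(loaded_model_1.predict(test_ds)[:3])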
tfdf.model_plotter.plot_model_in_colab(model_1, tree_idx=0, max_depth=3)
Explanation: Plot the model
Plotting a decision tree and following the first branches helps in learning about decision forests. In some cases, plotting a model can even be used for debugging.
Because of the difference in the way they are trained, some models are more interesting to plot than others. Because of the noise injected during training and the depth of the trees, plotting a Random Forest is less informative than plotting a CART or the first tree of a Gradient Boosted Tree.
Nevertheless, let's plot the first tree of our Random Forest model:
End of explanation
%set_cell_height 300
model_1.summary()
Explanation: The root node on the left contains the first condition (bill_depth_mm >= 16.55), number of examples (240) and label distribution (the red-blue-green bar).
Examples that evaluate to true for bill_depth_mm >= 16.55 are branched to the green path. The others are branched to the red path.
The deeper the node, the purer it becomes, i.e. the label distribution is biased toward a subset of classes.
Note: Hover the mouse over the plot for details.
Model structure and feature importance
The overall structure of the model is shown with .summary(). You will see:
Type: The learning algorithm used to train the model (Random Forest in
our case).
Task: The problem solved by the model (Classification in our case).
Input Features: The input features of the model.
Variable Importance: Different measures of the importance of each
feature for the model.
Out-of-bag evaluation: The out-of-bag evaluation of the model. This is a
cheap and efficient alternative to cross-validation.
Number of {trees, nodes} and other metrics: Statistics about the
structure of the decisions forests.
Remark: The summary's content depends on the learning algorithm (e.g.
Out-of-bag is only available for Random Forest) and the hyper-parameters (e.g.
the mean-decrease-in-accuracy variable importance can be disabled in the
hyper-parameters).
End of explanation
# The input features
model_1.make_inspector().features()
# The feature importances
model_1.make_inspector().variable_importances()
Explanation: The information in the summary is all available programmatically using the model inspector:
End of explanation
model_1.make_inspector().evaluation()
Explanation: The content of the summary and the inspector depends on the learning algorithm (tfdf.keras.RandomForestModel in this case) and its hyper-parameters (e.g. compute_oob_variable_importances=True will trigger the computation of Out-of-bag variable importances for the Random Forest learner).
Model Self Evaluation
During training TFDF models can self evaluate even if no validation dataset is provided to the fit() method. The exact logic depends on the model. For example, Random Forest will use Out-of-bag evaluation while Gradient Boosted Trees will use internal train-validation.
Note: While this evaluation is computed during training, it is NOT computed on the training dataset and can be used as a low quality evaluation.
The model self evaluation is available with the inspector's evaluation():
End of explanation
%set_cell_height 150
model_1.make_inspector().training_logs()
Explanation: Plotting the training logs
The training logs show the quality of the model (e.g. accuracy evaluated on the out-of-bag or validation dataset) according to the number of trees in the model. These logs are helpful to study the balance between model size and model quality.
The logs are available in multiple ways:
Displayed during training if fit() is wrapped in with sys_pipes(): (see example above).
At the end of the model summary i.e. model.summary() (see example above).
Programmatically, using the model inspector i.e. model.make_inspector().training_logs().
Using TensorBoard
Let's try the options 2 and 3:
End of explanation
import matplotlib.pyplot as plt
logs = model_1.make_inspector().training_logs()
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot([log.num_trees for log in logs], [log.evaluation.accuracy for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Accuracy (out-of-bag)")
plt.subplot(1, 2, 2)
plt.plot([log.num_trees for log in logs], [log.evaluation.loss for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Logloss (out-of-bag)")
plt.show()
Explanation: Let's plot it:
End of explanation
# This cell starts TensorBoard, which can be slow.
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Google internal version
# %load_ext google3.learning.brain.tensorboard.notebook.extension
# Clear existing results (if any)
!rm -fr "/tmp/tensorboard_logs"
# Export the meta-data to tensorboard.
model_1.make_inspector().export_to_tensorboard("/tmp/tensorboard_logs")
# docs_infra: no_execute
# Start a tensorboard instance.
%tensorboard --logdir "/tmp/tensorboard_logs"
Explanation: This dataset is small. You can see the model converging almost immediately.
Let's use TensorBoard:
End of explanation
tfdf.keras.get_all_models()
Explanation: <!-- <img class="tfo-display-only-on-site" src="images/beginner_tensorboard.png"/> -->
Re-train the model with a different learning algorithm
The learning algorithm is defined by the model class. For
example, tfdf.keras.RandomForestModel() trains a Random Forest, while
tfdf.keras.GradientBoostedTreesModel() trains a Gradient Boosted Decision
Trees.
The learning algorithms are listed by calling tfdf.keras.get_all_models() or in the
learner list.
End of explanation
# help works anywhere.
help(tfdf.keras.RandomForestModel)
# ? only works in ipython or notebooks, it usually opens on a separate panel.
tfdf.keras.RandomForestModel?
Explanation: The description of the learning algorithms and their hyper-parameters are also available in the API reference and builtin help:
End of explanation
feature_1 = tfdf.keras.FeatureUsage(name="bill_length_mm")
feature_2 = tfdf.keras.FeatureUsage(name="island")
all_features = [feature_1, feature_2]
# Note: This model is only trained with two features. It will not be as good as
# the one trained on all features.
model_2 = tfdf.keras.GradientBoostedTreesModel(
features=all_features, exclude_non_specified_features=True)
model_2.compile(metrics=["accuracy"])
model_2.fit(x=train_ds, validation_data=test_ds)
print(model_2.evaluate(test_ds, return_dict=True))
Explanation: Using a subset of features
The previous example did not specify the features, so all the columns were used
as input features (except for the label). The following example shows how to
specify input features.
End of explanation
%set_cell_height 300
feature_1 = tfdf.keras.FeatureUsage(name="year", semantic=tfdf.keras.FeatureSemantic.CATEGORICAL)
feature_2 = tfdf.keras.FeatureUsage(name="bill_length_mm")
feature_3 = tfdf.keras.FeatureUsage(name="sex")
all_features = [feature_1, feature_2, feature_3]
model_3 = tfdf.keras.GradientBoostedTreesModel(features=all_features, exclude_non_specified_features=True)
model_3.compile( metrics=["accuracy"])
model_3.fit(x=train_ds, validation_data=test_ds)
Explanation: Note: As expected, the accuracy is lower than previously.
TF-DF attaches a semantics to each feature. This semantics controls how
the feature is used by the model. The following semantics are currently supported:
Numerical: Generally for quantities or counts with full ordering. For
example, the age of a person, or the number of items in a bag. Can be a
float or an integer. Missing values are represented with float(Nan) or with
an empty sparse tensor.
Categorical: Generally for a type/class in finite set of possible values
without ordering. For example, the color RED in the set {RED, BLUE, GREEN}.
Can be a string or an integer. Missing values are represented as "" (empty
string), value -2 or with an empty sparse tensor.
Categorical-Set: A set of categorical values. Great to represent
tokenized text. Can be a string or an integer in a sparse tensor or a
ragged tensor (recommended). The order/index of each item doesn't matter.
If not specified, the semantics is inferred from the representation type and shown in the training logs:
int, float (dense or sparse) → Numerical semantics.
str (dense or sparse) → Categorical semantics
int, str (ragged) → Categorical-Set semantics
In some cases, the inferred semantics is incorrect. For example: An Enum stored as an integer is semantically categorical, but it will be detected as numerical. In this case, you should specify the semantic argument in the input. The education_num field of the Adult dataset is a classic example.
This dataset doesn't contain such a feature. However, for the demonstration, we will make the model treat the year as a categorical feature:
End of explanation
# A classical but slightly more complex model.
model_6 = tfdf.keras.GradientBoostedTreesModel(
num_trees=500, growing_strategy="BEST_FIRST_GLOBAL", max_depth=8)
model_6.fit(x=train_ds)
# A more complex, but possibly, more accurate model.
model_7 = tfdf.keras.GradientBoostedTreesModel(
num_trees=500,
growing_strategy="BEST_FIRST_GLOBAL",
max_depth=8,
split_axis="SPARSE_OBLIQUE",
categorical_algorithm="RANDOM",
)
model_7.fit(x=train_ds)
Explanation: Note that year is in the list of CATEGORICAL features (unlike the first run).
Hyper-parameters
Hyper-parameters are parameters of the training algorithm that impact
the quality of the final model. They are specified in the model class
constructor. The list of hyper-parameters is visible with the question mark colab command (e.g. ?tfdf.keras.GradientBoostedTreesModel).
Alternatively, you can find them on the TensorFlow Decision Forest Github or the Yggdrasil Decision Forest documentation.
The default hyper-parameters of each algorithm approximately match those of the initial publication paper. To ensure consistency, new features and their matching hyper-parameters are always disabled by default. That's why it is a good idea to tune your hyper-parameters.
End of explanation
# A good template of hyper-parameters.
model_8 = tfdf.keras.GradientBoostedTreesModel(hyperparameter_template="benchmark_rank1")
model_8.fit(x=train_ds)
Explanation: As new training methods are published and implemented, combinations of hyper-parameters can emerge as good or almost-always-better than the default parameters. To avoid changing the default hyper-parameter values, these good combinations are indexed and made available as hyper-parameter templates.
For example, the benchmark_rank1 template is the best combination on our internal benchmarks. Those templates are versioned to allow training configuration stability, e.g. benchmark_rank1@v1.
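For instance, pinning the template to a specific version is just a different template string; a small sketch reusing train_ds from earlier:
# Sketch: pin the hyper-parameter template version for reproducible configurations.
model_pinned = tfdf.keras.GradientBoostedTreesModel(hyperparameter_template="benchmark_rank1@v1")
model_pinned.fit(x=train_ds)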
End of explanation
# The hyper-parameter templates of the Gradient Boosted Tree model.
print(tfdf.keras.GradientBoostedTreesModel.predefined_hyperparameters())
Explanation: The available templates can be listed with predefined_hyperparameters. Note that different learning algorithms have different templates, even if the name is similar.
End of explanation
%set_cell_height 300
body_mass_g = tf.keras.layers.Input(shape=(1,), name="body_mass_g")
body_mass_kg = body_mass_g / 1000.0
bill_length_mm = tf.keras.layers.Input(shape=(1,), name="bill_length_mm")
raw_inputs = {"body_mass_g": body_mass_g, "bill_length_mm": bill_length_mm}
processed_inputs = {"body_mass_kg": body_mass_kg, "bill_length_mm": bill_length_mm}
# "preprocessor" contains the preprocessing logic.
preprocessor = tf.keras.Model(inputs=raw_inputs, outputs=processed_inputs)
# "model_4" contains both the pre-processing logic and the decision forest.
model_4 = tfdf.keras.RandomForestModel(preprocessing=preprocessor)
model_4.fit(x=train_ds)
model_4.summary()
Explanation: Feature Preprocessing
Pre-processing features is sometimes necessary to consume signals with complex
structures, to regularize the model or to apply transfer learning.
Pre-processing can be done in one of three ways:
Preprocessing on the Pandas dataframe. This solution is easy to implement
and generally suitable for experimentation. However, the
pre-processing logic will not be exported in the model by model.save().
Keras Preprocessing: While
more complex than the previous solution, Keras Preprocessing is packaged in
the model.
TensorFlow Feature Columns:
This API is part of the TF Estimator library (!= Keras) and planned for
deprecation. This solution is interesting when using existing preprocessing
code.
Note: Using TensorFlow Hub
pre-trained embeddings is often a great way to consume text and images with
TF-DF. For example, hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim128/2"). See the Intermediate tutorial for more details.
In the next example, pre-process the body_mass_g feature into body_mass_kg = body_mass_g / 1000. The bill_length_mm is consumed without pre-processing. Note that such
monotonic transformations have generally no impact on decision forest models.
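For comparison, option 1 (preprocessing directly in the Pandas dataframe) might look like the following sketch; it assumes the train_ds_pd dataframe and the label variable defined when the dataset was first loaded, and remember that this transformation is not saved by model.save():
# Sketch of option 1: transform the column in Pandas before building the tf.data.Dataset.
train_pd_prep = train_ds_pd.copy()
train_pd_prep["body_mass_kg"] = train_pd_prep["body_mass_g"] / 1000.0
train_pd_prep = train_pd_prep.drop(columns=["body_mass_g"])
train_ds_prep = tfdf.keras.pd_dataframe_to_tf_dataset(train_pd_prep, label=label)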
End of explanation
def g_to_kg(x):
return x / 1000
feature_columns = [
tf.feature_column.numeric_column("body_mass_g", normalizer_fn=g_to_kg),
tf.feature_column.numeric_column("bill_length_mm"),
]
preprocessing = tf.keras.layers.DenseFeatures(feature_columns)
model_5 = tfdf.keras.RandomForestModel(preprocessing=preprocessing)
model_5.fit(x=train_ds)
Explanation: The following example re-implements the same logic using TensorFlow Feature
Columns.
End of explanation
# Download the dataset.
!wget -q https://storage.googleapis.com/download.tensorflow.org/data/abalone_raw.csv -O /tmp/abalone.csv
dataset_df = pd.read_csv("/tmp/abalone.csv")
print(dataset_df.head(3))
# Split the dataset into a training and testing dataset.
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
# Name of the label column.
label = "Rings"
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label, task=tfdf.keras.Task.REGRESSION)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label, task=tfdf.keras.Task.REGRESSION)
%set_cell_height 300
# Configure the model.
model_7 = tfdf.keras.RandomForestModel(task = tfdf.keras.Task.REGRESSION)
# Train the model.
model_7.fit(x=train_ds)
# Evaluate the model on the test dataset.
model_7.compile(metrics=["mse"])
evaluation = model_7.evaluate(test_ds, return_dict=True)
print(evaluation)
print()
print(f"MSE: {evaluation['mse']}")
print(f"RMSE: {math.sqrt(evaluation['mse'])}")
Explanation: Training a regression model
The previous example trains a classification model (TF-DF does not differentiate
between binary classification and multi-class classification). In the next
example, train a regression model on the
Abalone dataset. The
objective of this dataset is to predict the number of shell rings of an
abalone.
Note: The csv file is assembled by appending UCI's header and data files. No preprocessing was applied.
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/33/LivingAbalone.JPG/800px-LivingAbalone.JPG" width="200"/></center>
End of explanation
%set_cell_height 200
archive_path = tf.keras.utils.get_file("letor.zip",
"https://download.microsoft.com/download/E/7/E/E7EABEF1-4C7B-4E31-ACE5-73927950ED5E/Letor.zip",
extract=True)
# Path to the train and test dataset using libsvm format.
raw_dataset_path = os.path.join(os.path.dirname(archive_path),"OHSUMED/Data/All/OHSUMED.txt")
Explanation: Training a ranking model
Finally, after having trained classification and regression models, train a ranking model.
The goal of ranking is to order items by importance. The "value" of
relevance does not matter directly. Ranking a set of documents with regard to
a user query is an example of a ranking problem: it is only important to get the right order, where the top documents matter more.
TF-DF expects ranking datasets to be presented in a "flat" format. A
document+query dataset might look like this:
query | document_id | feature_1 | feature_2 | relevance/label
----- | ----------- | --------- | --------- | ---------------
cat | 1 | 0.1 | blue | 4
cat | 2 | 0.5 | green | 1
cat | 3 | 0.2 | red | 2
dog | 4 | NA | red | 0
dog | 5 | 0.2 | red | 1
dog | 6 | 0.6 | green | 1
The relevance/label is a floating point numerical value between 0 and 5
(generally between 0 and 4) where 0 means "completely unrelated", 4 means "very
relevant" and 5 means "the same as the query".
Interestingly, decision forests are often good rankers, and many
state-of-the-art ranking models are decision forests.
In this example, use a sample of the
LETOR3
dataset. More precisely, we want to download the OHSUMED.zip from the LETOR3 repo. This dataset is stored in the
libsvm format, so we will need to convert it to csv.
End of explanation
def convert_libsvm_to_csv(src_path, dst_path):
Converts a libsvm ranking dataset into a flat csv file.
Note: This code is specific to the LETOR3 dataset.
dst_handle = open(dst_path, "w")
first_line = True
for src_line in open(src_path,"r"):
# Note: The last 3 items are comments.
items = src_line.split(" ")[:-3]
relevance = items[0]
group = items[1].split(":")[1]
features = [ item.split(":") for item in items[2:]]
if first_line:
# Csv header
dst_handle.write("relevance,group," + ",".join(["f_" + feature[0] for feature in features]) + "\n")
first_line = False
dst_handle.write(relevance + ",g_" + group + "," + (",".join([feature[1] for feature in features])) + "\n")
dst_handle.close()
# Convert the dataset.
csv_dataset_path="/tmp/ohsumed.csv"
convert_libsvm_to_csv(raw_dataset_path, csv_dataset_path)
# Load a dataset into a Pandas Dataframe.
dataset_df = pd.read_csv(csv_dataset_path)
# Display the first 3 examples.
dataset_df.head(3)
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
# Display the first 3 examples of the training dataset.
train_ds_pd.head(3)
Explanation: The dataset is stored as a .txt file in a specific format, so first convert it into a csv file.
End of explanation
# Name of the relevance and grouping columns.
relevance = "relevance"
ranking_train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=relevance, task=tfdf.keras.Task.RANKING)
ranking_test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=relevance, task=tfdf.keras.Task.RANKING)
%set_cell_height 400
model_8 = tfdf.keras.GradientBoostedTreesModel(
task=tfdf.keras.Task.RANKING,
ranking_group="group",
num_trees=50)
model_8.fit(x=ranking_train_ds)
Explanation: In this dataset, the relevance defines the ground-truth rank among rows of the same group.
End of explanation
%set_cell_height 400
model_8.summary()
Explanation: At this point, Keras does not propose any ranking metrics. Instead, the training and validation metrics (a GBDT uses a validation dataset) are shown in the training
logs. In this case the loss is LAMBDA_MART_NDCG5, and the final (i.e. at
the end of the training) NDCG (normalized discounted cumulative gain) is 0.510136 (see the line Final model valid-loss: -0.510136).
Note that the NDCG is a value between 0 and 1. The larger the NDCG, the better
the model. For this reason, the loss is set to -NDCG.
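If you prefer to read these values programmatically rather than from the printed logs, the model inspector exposes them; a small sketch, assuming the inspector API of recent TF-DF releases:
# Sketch: read the self-reported evaluation of the ranking model from its inspector.
inspector = model_8.make_inspector()
print(inspector.evaluation())         # final validation evaluation (includes the NDCG-based loss)
print(inspector.training_logs()[-1])  # last training log entry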
As before, the model can be analysed:
End of explanation |
6,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align
Step8: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step9: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step10: Run your solution on all test_images and make copies into the test_images directory.
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos
Step11: Let's try the one with the solid white lane on the right first ...
Step13: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step15: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step17: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
The current algorithm is likely to fail with | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
End of explanation
import math
def grayscale(img):
Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
you should call plt.imshow(gray, cmap='gray')
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=6):
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
# Setup variables
from math import floor
left_line_count = 0
right_line_count = 0
left_average_slope = 0
right_average_slope = 0
left_average_x = 0
left_average_y = 0
right_average_x = 0
right_average_y = 0
# Initialize the min/max y values to the image height (the bottom edge of the image)
left_min_y = left_max_y = right_min_y = right_max_y = img.shape[0]
for line in lines:
for x1,y1,x2,y2 in line:
# Calculate each line slope
slope = (y2-y1)/(x2-x1)
if (not math.isinf(slope) and slope != 0):
# Classify lines by slope (lines at the right have positive slope)
# Calculate total slope, x and y to then find average
# Calculate min y
if (slope >= 0.5 and slope <= 0.85):
right_line_count += 1
right_average_slope += slope
right_average_x += x1 + x2
right_average_y += y1 + y2
if right_min_y > y1: right_min_y = y1
if right_min_y > y2: right_min_y = y2
elif (slope <= -0.5 and slope >= -0.85):
left_line_count += 1
left_average_slope += slope
left_average_x += x1 + x2
left_average_y += y1 + y2
if left_min_y > y1: left_min_y = y1
if left_min_y > y2: left_min_y = y2
if ((left_line_count != 0) and (right_line_count != 0)):
# Find average slope for each side
left_average_slope = left_average_slope / left_line_count
right_average_slope = right_average_slope / right_line_count
# Find average x and y for each side
left_average_x = left_average_x / (left_line_count * 2)
left_average_y = left_average_y / (left_line_count * 2)
right_average_x = right_average_x / (right_line_count * 2)
right_average_y = right_average_y / (right_line_count * 2)
# Find y intercept for each side
# b = y - mx
left_y_intercept = left_average_y - left_average_slope * left_average_x
right_y_intercept = right_average_y - right_average_slope * right_average_x
# Find max x values for each side
# x = ( y - b ) / m
left_max_x = floor((left_max_y - left_y_intercept) / left_average_slope)
right_max_x = floor((right_max_y - right_y_intercept) / right_average_slope)
# Find min x values for each side
left_min_x = floor((left_min_y - left_y_intercept) / left_average_slope)
right_min_x = floor((right_min_y - right_y_intercept) / right_average_slope)
# Draw left line
cv2.line(img, (left_min_x, left_min_y), (left_max_x, left_max_y), color, thickness)
# Draw right line
cv2.line(img, (right_min_x, right_min_y), (right_max_x, right_max_y), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, λ)
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
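As a small illustration of cv2.inRange() (which the helper functions below do not use), a white-pixel color selection could start from a sketch like this; the threshold values are assumptions and would need tuning:
def select_white(img, lower=(200, 200, 200), upper=(255, 255, 255)):
    # Keep only pixels whose channel values fall inside [lower, upper]
    mask = cv2.inRange(img, np.array(lower, dtype=np.uint8), np.array(upper, dtype=np.uint8))
    return cv2.bitwise_and(img, img, mask=mask)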
End of explanation
import os
# Remove any previously processed images
filelist = [ f for f in os.listdir('test_images/') if f.find('processed') != -1]
for f in filelist:
print('Removing image:', 'test_images/' + f)
os.remove('test_images/' + f)
test_images = os.listdir('test_images/')
for fname in test_images:
# Get image path and name details
basedir, basename = os.path.split(fname)
root, ext = os.path.splitext(basename)
# Read in an image
image = mpimg.imread('test_images/' + basename)
imshape = image.shape
# Print out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape, 'and name', basename)
# Make a grayscale copy of the image for processing
gray = grayscale(image)
# Define kernel size for Gaussian smoothing / blurring
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
# Define Canny transform paramerets
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
vertices = np.array([[(100,imshape[0]),(450, 325), (550, 325), (imshape[1]-100,imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 45 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 20 # minimum number of pixels making up a line
max_line_gap = 60 # maximum gap in pixels between connectable line segments
lines = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
lines_edges = weighted_img(lines, image, α=0.8, β=1., λ=0.)
print('Saving image:', root + '_processed.jpg')
mpimg.imsave('test_images/' + root + '_processed.jpg', lines_edges)
Explanation: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image with lines are drawn on lanes)
imshape = image.shape
gray = grayscale(image)
# Define kernel size for Gaussian smoothing / blurring
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
# Define Canny transform paramerets
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
vertices = np.array([[(100,imshape[0]),(450, 325), (550, 325), (imshape[1]-100,imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 45 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 20 # minimum number of pixels making up a line
max_line_gap = 60 # maximum gap in pixels between connectable line segments
lines = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
result = weighted_img(lines, image, α=0.8, β=1., λ=0.)
return result
Explanation: Run your solution on all test_images and make copies into the test_images directory.
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
End of explanation
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output))
Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(challenge_output))
Explanation: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
The current algorithm is likely to fail with:
1. Curved lane lines
2. Different lighting conditions on the road
3. Vertical lane lines (infinite slope)
4. Lane lines that slope in the same direction
I can imagine making my algorithm better or more robust by
1. Instead of interpolating into a line, interpolate into a curve, maybe a bezier with several control points.
2. Analyze the contrast/brightness in the area of interest and even it out so darker areas become lighter (see the sketch after this list)
3. Treat vertical lines as a separate scenario and either ignore them or assign some default values
4. Separate the left and the right side of the image and analyze the lines on each side independently
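For idea 2 above, a possible starting point (a rough sketch, not a tested part of the pipeline) is to equalize the lightness channel before the grayscale/Canny steps:
def normalize_brightness(img):
    # Equalize the lightness channel so dark road sections keep visible lane contrast.
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    hls[:, :, 1] = cv2.equalizeHist(hls[:, :, 1])
    return cv2.cvtColor(hls, cv2.COLOR_HLS2RGB)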
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
6,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading a JSON file
Step1: Reading an Excel file
Step2: numpy is a mathematical extension package
Step3: The nan values are defined in numpy.
Step4: ffill means forward fill, and it fills the nans with the value to the left of or above them. axis=0 refers to the rows, axis=1 to the columns.
Step5: Deleting rows/columns.
Step6: Nested pythonic list - two enumerations one after the other
Step7: New columns for the dimensions.
Step8: With the unstack command a MultiIndex (i.e. a multi-level index) can be pivoted.
Step9: Replacing missing values (nans).
Step10: join - joining several DataFrames. The index must be the same. The column names are different. The name of the index does not matter. | Python Code:
pd.read_json('data.json')
Explanation: Reading a JSON file
End of explanation
df=pd.read_excel('2.17deaths causes.xls',sheet_name='2.17',skiprows=5)
Explanation: Reading an Excel file: rows can be skipped from the top of the file, and the worksheet name can be chosen.
End of explanation
import numpy as np
Explanation: numpy is a mathematical extension package
End of explanation
df=df.set_index('Unnamed: 0').dropna(how='any').replace('-',np.nan)
df2=pd.read_excel('2.17deaths causes.xls',sheet_name='2.17',skiprows=4)
Explanation: The nan values are defined in numpy.
End of explanation
df2.loc[[0]].ffill(axis=1)
Explanation: ffill means forward fill, and it fills the nans with the value to the left of or above them. axis=0 refers to the rows, axis=1 to the columns.
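For comparison, filling downwards along the rows instead is a one-liner (just an illustration of the axis argument on the same df2):
df2.ffill(axis=0)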
End of explanation
df=df.drop('Unnamed: 13',axis=1)
df.columns
[year for year in range(2011,2017)]
df.columns=[year for year in range(2011,2017) for k in range(2)]
Explanation: Deleting rows/columns.
End of explanation
[str(year)+'-'+str(k) for year in range(2011,2017) for k in range(2)]
nemek=['Masculin','Feminin']
[str(year)+'-'+nem for year in range(2011,2017) for nem in nemek]
df.columns=[str(year)+'-'+nem for year in range(2011,2017) for nem in nemek]
df
evek=[str(year) for year in range(2011,2017) for nem in nemek]
nemlista=[nem for year in range(2011,2017) for nem in nemek]
df=df.T
Explanation: Nested pythonic list - two enumerations one after the other
End of explanation
df['Ev']=evek
df['Nem']=nemlista
df.head(6)
df.set_index(['Ev','Nem'])
Explanation: New columns for the dimensions.
End of explanation
df.set_index(['Ev','Nem'])[['Total']].unstack()
Explanation: With the unstack command a MultiIndex (i.e. a multi-level index) can be pivoted.
End of explanation
pd.DataFrame([0,3,4,5,'gfgf',np.nan]).replace(np.nan,'Mas')
pd.DataFrame([0,3,4,5,'gfgf',np.nan]).fillna('Mas')
Explanation: Replacing missing values (nans).
End of explanation
df1=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet1')
df2=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet2')
df3=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet3')
df1=df1.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index(2019)
df2=df2.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index(2019)
df3=df3.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index('2019/ NR. DE NOPTI')
df1.join(df2).join(df3)
Explanation: join - joining several DataFrames. The index must be the same. The column names are different. The name of the index does not matter.
End of explanation |
6,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\title{Upsampling and Downsampling in myHDL}
\author{Steven K Armour}
\maketitle
Python Libraries Utilized
Step1: Acknowledgments
The original Interpolation and Decimation components written in myHDL were done by Christopher Felton
(myHDL.old version here)
Thanks also to the author of myHDL, Jan Decaluwe, and to the author of the myHDL Peeker, XESS Corp.
References
Down Sampling via Decimation
In downsampling by decimation the input signal $x(n)$ is fed into the downsampler and the output is the downshifted signal $x_d(Dn)$, where the downsampling factor $D$ is an integer $\geq 1$. In an analog time system the downsampling action would be interpreted as expansive time scaling. In discrete space, downsampling an already sampled signal is the action of skipping every $D$ inputs to the downsampler, effectively removing, aka decimating, those inputs by skipping them
Step2: The test data being used has 200 samples in 20 $\mu$s with a sampling rate of 10 MHz. We can see what the result of downsampling will be via the following
Step4: By the Nyquist condition the lowest usable sampling rate must satisfy $f_s > 2\max{f_a}$, and after a downsampler the condition becomes $f_s > 2D\max{f_a}$. Therefore the upper bound on $D$, in order to not incur the penalty of aliasing, is $D < \dfrac{f_s}{2\max{f_a}}$, where of course $D$ is an integer satisfying the aforementioned condition
myHDL implementation of a Decimator
In Python we simply use slicing to perform the action of decimation blindly; however, in order to implement this at the hardware level, the layers of abstraction in slicing must be peeled back
When the slice x[
Step5: Upsampling via Interpolation
Unlike downsampling, which simply ignores input values via a built-in counter and passes zeros in place of the ignored values, upsampling must add intermediate values between the given inputs. The trivial way is padding zeros between the inputs according to the upsampling factor $U$; and while this does accomplish the act of increasing the number of samples, all that has been done is zero padding. This can be seen as follows for an upsampling of $U=5$
Step6: As can be seen, this zero intermediate padding has not done much but increase the number of samples. In order to make this non-trivial, the intermediate padded values must take on a value other than zero. We can accomplish this by means of interpolation with the sample-and-hold method, which happens to be a rectangle impulse function in the discrete time domain and becomes the sinc function in the frequency domain (more detailed explanation here & here)
Step7: As shown, by using a zero-order hold interpolation we can artificially increase the sampling; and by artificial we mean that we can clearly see the outcome of the interpolation is not a perfect sinusoid | Python Code:
import numpy as np
import scipy.signal as sig
import pandas as pd
from sympy import *
init_printing()
from IPython.display import display, Math, Latex
from myhdl import *
from myhdlpeek import Peeker
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: \title{Upsampling and Downsampling in myHDL}
\author{Steven K Armour}
\maketitle
Python Libraries Utilized
End of explanation
TestValues=pd.read_pickle('SinGen16.pkl')
TestValues.head(16)
plt.stem(TestValues['GenValue'])
None
1/TestValues['Time[s]'][1]
Explanation: Acknowledgments
The original Interpolation and Decimation components written in myHDL were done by Christopher Felton
(myHDL.old version here)
Thanks also to the author of myHDL, Jan Decaluwe, and to the author of the myHDL Peeker, XESS Corp.
References
Down Sampling via Decimation
In downsampling by decimation the input signal $x(n)$ is fed into the downsampler and the output is the downshifted signal $x_d(Dn)$, where the downsampling factor $D$ is an integer $\geq 1$. In an analog time system the downsampling action would be interpreted as expansive time scaling. In discrete space, downsampling an already sampled signal is the action of skipping every $D$ inputs to the downsampler, effectively removing, aka decimating, those inputs by skipping them
End of explanation
D=5
plt.stem(TestValues['GenValue'][::D])
plt.stem(TestValues['Time[s]'][::D], TestValues['GenValue'][::D])
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
Explanation: The test data being used has 200 samples in 20 $\mu$s with a sampling rate of 10 MHz. We can see what the result of downsampling will be via the following
End of explanation
def Decimator(x_in, y_out, clk, rst, D_parm=2):
Parameters:
-----------
D_parm: The Decimation (downsampling factor) value in base 10
Ports:
------
x_in: 2's complement input
y_out: 2's complement output
clk: System clock, must be equal to or greater than the max
sample rate after interpolation or before decimation
rst: System reset
D_base2= int(np.ceil(np.log2(D_parm)))
Count=Signal(intbv(0, max=2**D_base2))
@always(clk.posedge)
def counter():
if rst:
Count.next=0
else:
# recycle (wrap) the count
if Count==D_parm-1:
Count.next=0
# otherwise increment the counter upwards toward D_parm
else:
Count.next=Count+1
@always(clk.posedge)
def action():
if rst:
y_out.next=0
else:
#counter has terminated
if Count==0:
y_out.next=x_in
else:
y_out.next=0
return instances()
Peeker.clear()
x_in=Signal(intbv(0, max=TestValues['GenValue'].max()+1,
min=TestValues['GenValue'].min()-1))
Peeker(x_in, 'x_in')
y_out=Signal(intbv(0, max=TestValues['GenValue'].max()+1,
min=TestValues['GenValue'].min()-1))
Peeker(y_out, 'y_out')
DecimatorTracker=[]
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=Decimator(x_in=x_in, y_out=y_out, clk=clk, rst=rst, D_parm=5)
def Decimator_TB():
TestValueGen=TestValues['GenValue'].iteritems()
@always(delay(1))
def clkgen():
clk.next = not clk
@instance
def stimules():
for step, val in TestValueGen:
x_in.next=int(val)
if step==50:
rst.next=True
else:
rst.next=False
DecimatorTracker.append(int(y_out))
yield clk.negedge
raise StopSimulation
return instances()
sim = Simulation(DUT, Decimator_TB(), *Peeker.instances()).run()
DecimatorTracker=np.array(DecimatorTracker)
Peeker.to_wavedrom(start_time=40, stop_time=60, tock=True)
plt.stem(DecimatorTracker)
plt.stem(DecimatorTracker[np.nonzero(DecimatorTracker)])
Explanation: By the Nyquist condition the lowest usable sampling rate must satisfy $f_s > 2\max{f_a}$, and after a downsampler the condition becomes $f_s > 2D\max{f_a}$. Therefore the upper bound on $D$, in order to not incur the penalty of aliasing, is $D < \dfrac{f_s}{2\max{f_a}}$, where of course $D$ is an integer satisfying the aforementioned condition.
myHDL implementation of a Decimator
In Python we simply use slicing to perform the action of decimation blindly; however, in order to implement this at the hardware level, the layers of abstraction in slicing must be peeled back.
When the slice x[::D] is taken we are telling Python to return every D-th value and skip the values in between. Python (or any other programming language) does this by establishing a counter that counts either up to or down from the skip factor, returning the current value when the counter resets, and repeating this action until an end condition is met. In order to do this as a single HDL object it has to be broken down into two internal components. One component is the aforementioned counter, and the other is a counter watcher that allows the input to pass through when the counter reaches the skip number D.
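A pure-Python sketch of that counter-plus-watcher behaviour (mirroring what the Decimator block above does, i.e. passing every D-th sample and outputting zero otherwise, ignoring the one-clock register delay of the hardware version) might look like this:
def decimate_with_counter(samples, D=5):
    # Software model of the hardware decimator: a counter plus a pass/zero gate.
    out, count = [], 0
    for s in samples:
        out.append(s if count == 0 else 0)  # pass the sample only when the counter is at 0
        count = 0 if count == D - 1 else count + 1
    return out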
End of explanation
original=list(np.array(TestValues['GenValue']))
U=5
zeroPadding= [0]*((U+1)*len(original)-1)
zeroPadding[::U+1]=original; InterpoloteData=pd.DataFrame(zeroPadding, columns=['GenValue'])
plt.stem(InterpoloteData['GenValue'])
InterpoloteData.head((U+2)*2)
zeroPadding=np.array(zeroPadding)
zeroPadding[zeroPadding==0]=np.nan
zeroPadding[0]=0
InterpoloteData=pd.DataFrame(zeroPadding, columns=['GenValue'])
Explanation: Upsampling via Interpolation
Unlike downsampling, which simply ignores input values via a built-in counter and passes zeros in place of the ignored values, upsampling must add intermediate values between the given inputs. The trivial way is padding zeros between the inputs according to the upsampling factor $U$; and while this does accomplish the act of increasing the number of samples, all that has been done is zero padding. This can be seen as follows for an upsampling of $U=5$
End of explanation
InterpoloteData['GenValue'].interpolate(method='zero', inplace=True)
plt.stem(InterpoloteData['GenValue'])
Explanation: As can be seen, this zero intermediate padding has not done much but increase the number of samples. In order to make this non-trivial, the intermediate padded values must take on a value other than zero. We can accomplish this by means of interpolation with the sample-and-hold method, which happens to be a rectangle impulse function in the discrete time domain and becomes the sinc function in the frequency domain (more detailed explanation here & here)
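An equivalent way to get the same zero-order hold directly with numpy (up to the trailing edge of the padded array) is simply to repeat each input sample U+1 times:
# Sketch: zero-order hold by repetition instead of pad-then-interpolate.
zoh = np.repeat(np.asarray(original), U + 1)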
End of explanation
plt.stem(InterpoloteData['GenValue'][:200])
Explanation: As shown, by using a zero-order hold interpolation we can artificially increase the sampling; and by artificial we mean that we can clearly see the outcome of the interpolation is not a perfect sinusoid
End of explanation |
6,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tsam - Segmentation
Example usage of the time series aggregation module (tsam)
Date
Step1: Input data
Read in time series from testdata.csv with pandas
Step2: Create a plot function for the temperature for a visual comparison of the time series
Step3: Hierarchical aggregation with medoid representation and 10 typical days with 24 hourly segments
Initialize an aggregation class object with hierarchical as the clustering method for ten typical days
Step4: Create the typical periods
Step5: Predict original data
Step6: Get accuracy indicators
Step7: Hierarchical aggregation with medoid representation and 20 typical days with 12 irregular segments
Step8: Create the typical periods
Step9: Predict original data
Step10: Get accuracy indicators
Step11: Comparison of the aggregations
It was shown for the temperature, but both times all four time series have been aggregated. Therefore, we compare here also the duration curves of the electrical load for the original time series, the aggregation into 10 typical days with 24 hourly segments, and the aggregation into 20 typical days with 12 irregular segments.
Step12: Validation
Check that the means of the original time series and the predicted ones are the same.
Step13: Check that a segmented period has the same column-wise means as a non-segmented period if the clustered periods are the same.
Step14: Print out the (segmented) typical periods. | Python Code:
%load_ext autoreload
%autoreload 2
import copy
import os
import pandas as pd
import matplotlib.pyplot as plt
import tsam.timeseriesaggregation as tsam
%matplotlib inline
Explanation: tsam - Segmentation
Example usage of the time series aggregation module (tsam)
Date: 31.10.2019
Author: Maximilian Hoffmann
Import pandas and the relevant time series aggregation class
End of explanation
raw = pd.read_csv('testdata.csv', index_col = 0)
Explanation: Input data
Read in time series from testdata.csv with pandas
End of explanation
def plotTS(data, periodlength, vmin, vmax, label = 'T [°C]'):
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
stacked, timeindex = tsam.unstackToPeriods(copy.deepcopy(data), periodlength)
cax = axes.imshow(stacked.values.T, interpolation = 'nearest', vmin = vmin, vmax = vmax)
axes.set_aspect('auto')
axes.set_ylabel('Hour')
plt.xlabel('Day')
fig.subplots_adjust(right = 1.2)
cbar=plt.colorbar(cax)
cbar.set_label(label)
Explanation: Create a plot function for the temperature for a visual comparison of the time series
End of explanation
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 10, hoursPerPeriod = 24,
clusterMethod = 'hierarchical')
Explanation: Hierarchical aggregation with medoid representation and 10 typical days with 24 hourly segments
Initialize an aggregation class object with hierarchical as the clustering method for ten typical days
End of explanation
typPeriods = aggregation.createTypicalPeriods()
Explanation: Create the typical periods
End of explanation
predictedPeriods = aggregation.predictOriginalData()
Explanation: Predict original data
End of explanation
aggregation.accuracyIndicators()
Explanation: Get accuracy indicators
End of explanation
aggregationSeg = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 20, hoursPerPeriod = 24,
clusterMethod = 'hierarchical', segmentation=True, noSegments=12)
Explanation: Hierarchical aggregation with medoid representation and 20 typical days with 12 irregular segments
End of explanation
typPeriodsSeg = aggregationSeg.createTypicalPeriods()
Explanation: Create the typical periods
End of explanation
predictedPeriodsSeg = aggregationSeg.predictOriginalData()
Explanation: Predict original data
End of explanation
aggregationSeg.accuracyIndicators()
Explanation: Get accuracy indicators
End of explanation
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
raw['Load'].sort_values(ascending=False).reset_index(drop=True).plot(label = 'Original')
predictedPeriods['Load'].sort_values(ascending=False).reset_index(drop=True).plot(label = '10 with 24 hours')
predictedPeriodsSeg['Load'].sort_values(
ascending=False).reset_index(drop=True).plot(label = '20 with 12 Seg')
plt.legend()
plt.xlabel('Hours [h]')
plt.ylabel('Duration Load [MW]')
param = 'GHI'
plotTS(raw[param], 24, vmin = raw[param].min(), vmax = raw[param].max(), label = param)
plotTS(predictedPeriods[param], 24, vmin = raw[param].min(), vmax = raw[param].max(), label = param)
plotTS(predictedPeriodsSeg[param], 24, vmin = raw[param].min(), vmax = raw[param].max(), label = param)
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
raw['Load']['20100210':'20100218'].plot(label = 'Original')
predictedPeriods['Load']['20100210':'20100218'].plot(label = '10 with 24 hours')
predictedPeriodsSeg['Load']['20100210':'20100218'].plot(label = '20 with 12 seg')
plt.legend()
plt.ylabel('Load [MW]')
Explanation: Comparison of the aggregations
It was shown for the temperature, but both times all four time series have been aggregated. Therefore, we compare here also the duration curves of the electrical load for the original time series, the aggregation into 10 typical days with 24 hourly segments, and the aggregation into 20 typical days with 12 irregular segments.
End of explanation
raw.mean()
predictedPeriods.mean()
predictedPeriodsSeg.mean()
Explanation: Validation
Check that the means of the original time series and the predicted ones are the same.
End of explanation
aggregation.createTypicalPeriods().loc[0,:].mean()
aggregationSegTest = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 10, hoursPerPeriod = 24,
clusterMethod = 'hierarchical', segmentation=True, noSegments=12)
segmentDurations=aggregationSegTest.createTypicalPeriods().loc[0,:].reset_index(0, drop=True).index.values
aggregationSegTest.createTypicalPeriods().loc[0,:].mul(segmentDurations, axis=0).sum()/segmentDurations.sum()
Explanation: Check that a segmented period has the same column-wise means as a non-segmented period if the clustered periods are the same.
End of explanation
aggregationSeg.createTypicalPeriods()
aggregation.createTypicalPeriods()
Explanation: Print out the (segmented) typical periods.
End of explanation |
6,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Class Structure in Python
Lesson Page
Step1: The attributes in Client are name, balance and level.
Note
Step2: We can see the attributes of John_Doe, or Jane_Defoe by calling them
Step3: We can also add, remove or modify attributes as we like
Step4: You can also use the following functions instead of the normal statements
Step5: Methods
Methods are functions that can be applied (only) to instances of your class.
For example, in the case of our 'Client' class, we may want to update a person's bank account once they withdraw or deposit money. Let's create these methods below.
Note that each method takes 'self' as an argument along with the arguments required when calling this method.
Step6: What is "self"?
*not in the philosophical sense*
In the method, withdraw(self, amount), the self refers to the instance upon which we are applying the instructions of the method.
When we call a method, f(self, arg), on the object x, we use x.f(arg).
- x is passed as the first argument, self, by default and all that is required are the other arguments that comprise the function.
It is equivalent to calling MyClass.f(x, arg).
Try it yourself with the Client class and one of the methods we've written.
Step7: Static Methods
Static methods are methods that belong to a class but do not have access to self and hence don't require an instance to function (i.e. it will work on the class level as well as the instance level).
We denote these with the line @staticmethod before we define our static method.
Let's create a static method called make_money_sound() that will simply print "Cha-ching!" when called.
Step8: Class Methods
A class method is a type of method that will receive the class rather than the instance as the first parameter. It is also identified similarly to a static method, with @classmethod.
Create a class method called bank_location() that will print both the bank name and location when called upon the class.
Step9: Key Concept | Python Code:
# create the Client class below
class Client(object):
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
Explanation: The Class Structure in Python
Lesson Page: https://github.com/UofTCoders/studyGroup/blob/gh-pages/lessons/python/classes/lesson.md
Adapted from https://www.jeffknupp.com/blog/2014/06/18/improve-your-python-python-classes-and-object-oriented-programming/
What is a Class?
A class is a structure in Python that can be used as a blueprint to create objects that have
1. prototyped features, "attributes" that are variable
2. "methods" which are functions that can be applied to the object that is created, or rather, an instance of that class.
Defining a Class
We want to define a class called Client in which a new instance stores a client's name, balance, and account level.
It will take the format of:
class Client(object):
def __init__(self, args[, ...])
#more code
"def __init__" is what we use when creating classes to define how we can create a new instance of this class.
The arguments of __init__ are required input when creating a new instance of this class, except for 'self'.
End of explanation
John_Doe = Client("John Doe", 500)
Jane_Defoe = Client("Jane Defoe", 150000)
Explanation: The attributes in Client are name, balance and level.
Note: "self.name" and "name" are different variables. Here they represent the same values, but in other cases, this may lead to problems. For example, here the bank has decided to update "self.balance" by giving all new members a bonus $100 on top of what they're putting in the bank. Calling "balance" for other calculations will not have the correct value.
Creating an Instance of a Class
Now, lets try creating some new clients named John_Doe, and Jane_Defoe:
End of explanation
John_Doe.name
Jane_Defoe.level
Jane_Defoe.balance
Explanation: We can see the attributes of John_Doe, or Jane_Defoe by calling them:
End of explanation
John_Doe.email = "[email protected]"
John_Doe.email = "[email protected]"
del John_Doe.email
getattr(John_Doe, 'name')
setattr(John_Doe, 'email', '[email protected]')
John_Doe.email
Explanation: We can also add, remove or modify attributes as we like:
End of explanation
Client.bank = "TD"
Client.location = "Toronto, ON"
# try calling these attributes at the class and instance level
Client.bank
Jane_Defoe.bank
Explanation: You can also use the following functions instead of the normal statements:
The getattr(obj, name[, default]) : to access the attribute of object.
The hasattr(obj,name) : to check if an attribute exists or not.
The setattr(obj,name,value) : to set an attribute. If the attribute does not exist, it will be created.
The delattr(obj, name) : to delete an attribute.
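For instance (reusing the John_Doe instance from above; the 'phone' attribute here is purely illustrative):
setattr(John_Doe, 'phone', '416-555-0199')   # same as John_Doe.phone = '416-555-0199'
hasattr(John_Doe, 'phone')                   # True, the attribute now exists
getattr(John_Doe, 'phone')                   # '416-555-0199'
delattr(John_Doe, 'phone')                   # same as del John_Doe.phone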
Class Attributes vs. Normal Attributes
A class attribute is an attribute set at the class-level rather than the instance-level, such that the value of this attribute will be the same across all instances.
For our Client class, we might want to set the name of the bank, and the location, which would not change from instance to instance.
End of explanation
# Use the Client class code above to now add methods for withdrawal and depositing of money
# create the Client class below
class Client(object):
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
def deposit(self, amount):
self.balance += amount
return self.balance
def withdraw(self, amount):
if amount > self.balance:
raise RuntimeError("Insufficient for withdrawal")
else:
self.balance -= amount
return self.balance
Jane_Defoe.deposit(150000)
Explanation: Methods
Methods are functions that can be applied (only) to instances of your class.
For example, in the case of our 'Client' class, we may want to update a person's bank account once they withdraw or deposit money. Let's create these methods below.
Note that each method takes 'self' as an argument along with the arguments required when calling this method.
End of explanation
# Try calling a method two different ways
John_Doe.deposit(500)
Client.withdraw(Jane_Defoe, 50000)
Explanation: What is "self"?
*not in the philosophical sense*
In the method, withdraw(self, amount), the self refers to the instance upon which we are applying the instructions of the method.
When we call a method, f(self, arg), on the object x, we use x.f(arg).
- x is passed as the first argument, self, by default and all that is required are the other arguments that comprise the function.
It is equivalent to calling MyClass.f(x, arg).
Try it yourself with the Client class and one of the methods we've written.
End of explanation
# Add a static method called make_money_sound()
# create the Client class below
class Client(object):
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
@staticmethod
def make_money_sound():
print "Cha-ching!"
Client.make_money_sound()
Explanation: Static Methods
Static methods are methods that belong to a class but do not have access to self and hence don't require an instance to function (i.e. it will work on the class level as well as the instance level).
We denote these with the line @staticmethod before we define our static method.
Let's create a static method called make_money_sound() that will simply print "Cha-ching!" when called.
End of explanation
# Add a class method called bank_location()
# create the Client class below
class Client(object):
bank = "TD"
location = "Toronto, ON"
def __init__(self, name, balance):
self.name = name
self.balance = balance + 100
#define account level
if self.balance < 5000:
self.level = "Basic"
elif self.balance < 15000:
self.level = "Intermediate"
else:
self.level = "Advanced"
@classmethod
def bank_location(cls):
return str(cls.bank + " " + cls.location)
Client.bank_location()
Explanation: Class Methods
A class method is a type of method that will receive the class rather than the instance as the first parameter. It is also identified similarly to a static method, with @classmethod.
Create a class method called bank_location() that will print both the bank name and location when called upon the class.
End of explanation
# create the Savings class below
class Savings(Client):
interest_rate = 0.005
def update_balance(self):
self.balance += self.balance*self.interest_rate
return self.balance
# create an instance the same way as a Client but this time by calling Savings instead
Lina_Tran = Savings("Lina Tran", 50)
# it now has access to the new attributes and methods in Savings...
print Lina_Tran.name
print Lina_Tran.balance
print Lina_Tran.interest_rate
# ...as well as access to attributes and methods from the Client class as well
Lina_Tran.update_balance()
#defining a method outside the class definition
def check_balance(self):
return self.balance
Client.check_balance = check_balance
John_Doe.check_balance()
Explanation: Key Concept: Inheritance
A 'child' class can be created from a 'parent' class, whereby the child will bring over attributes and methods that its parent has, but where new features can be created as well.
This would be useful if you want to create multiple classes that would have some features that are kept the same between them. You would simply create a parent class of these children classes that have those maintained features.
Imagine we want to create different types of clients but still have all the base attributes and methods found in client currently.
For example, let's create a class called Savings that inherits from the Client class. In doing so, we do not need to write another __init__ method as it will inherit this from its parent.
End of explanation |
6,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artificial Intelligence for Humans
Introduction to the Math of Neural Networks
Understanding the Summation Operator
You will frequently summations, as shown below
Step1: Understanding the Product Operator
Equation 1.2
Step2: Regression and Classification
Transfer Functions
Linear Transfer Function
Equation 1.4
Step3: Softmax Transfer Function
Equation 1.4
Step4: Sigmoid Transfer Function
Equation 1.4
Step5: Hyperbolic Transfer Function
Equation 1.3
Step6: $$ f(x) = \frac{1}{1 + e^{-x}} $$
Calculating a Neuron
Equation 1.2 | Python Code:
import numpy as np
i = np.arange(1,11) # 11, because arange is not inclusive
s = np.sum(2*i)
print(s)
More traditional looping (non-Numpy) would perform the summation as follows:
s = 0
for i in range(1,11):
s += 2*i
print(s)
Explanation: Artificial Intelligence for Humans
Introduction to the Math of Neural Networks
Understanding the Summation Operator
You will frequently see summations, as shown below:
Equation 1.1: The Summation Operator
$$ s = \sum_{i=1}^{10} 2i $$
If you were to write the above equation as code (using Numpy/Python) you would have the following:
End of explanation
import numpy as np
i = np.arange(1,6) # 6, because arange is not inclusive
s = np.prod(2*i)
print(s)
s = 1
for i in range(1,6): # 6, because range is not inclusive
s *= 2*i
print(s)
Explanation: Understanding the Product Operator
Equation 1.2: The Product Operator
$$ s = \prod_{i=1}^{5} 2i $$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
def linear_transfer(t):
return t
x = np.arange(-5.0, 5.0, 0.02)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(x, linear_transfer(x), 'r')
plt.show()
Explanation: Regression and Classification
Transfer Functions
Linear Transfer Function
Equation 1.4: The Linear Transfer Function
$$ f(x) = x $$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
def relu_transfer(t):
    return np.maximum(0, t)
x = np.arange(-5.0, 5.0, 0.02)
plt.ylim([-2,3])
plt.xlim([-5,4])
plt.xlabel("x")
plt.ylabel("y")
plt.plot(x, relu_transfer(x), 'r')
plt.show()
x = np.array([1,2,3,4,5])
print(np.maximum(x,10))
Explanation: Softmax Transfer Function
Equation 1.4: The Softmax Transfer Function
$$ \sigma(\mathbf{z})_j = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}} $$
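A minimal NumPy sketch of this softmax formula (illustrative code, not from the original text; subtracting the maximum is a common numerical-stability trick):
import numpy as np
def softmax_transfer(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)
print(softmax_transfer(np.array([1.0, 2.0, 3.0])))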
Rectifier Linear Unit (ReLU) Transfer Function
Equation 1.4: The ReLU Transfer Function
$$ f(x) = \max(0, x) $$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
def sigmoid_transfer(t):
    return 1.0 / (1 + np.exp(-t))
x = np.arange(-5.0, 5.0, 0.02)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(x, sigmoid_transfer(x), 'r')
plt.show()
Explanation: Sigmoid Transfer Function
Equation 1.4: The Sigmoid Transfer Function
$$ f(x) = \frac{1}{1 + e^{-x}} $$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
def tanh_transfer(t):
return np.tanh(t)
x = np.arange(-5.0, 5.0, 0.02)
plt.xlabel("x")
plt.ylabel("y")
plt.plot(x, tanh_transfer(x), 'r')
plt.show()
Explanation: Hyperbolic Transfer Function
Equation 1.3: The Hyperbolic Tangent Function
$$ f(x) = \tanh(x) $$
End of explanation
import numpy as np
a = np.array( [1, 1, 0] ) # First 1 is the bias, 1 and 0 are the inputs
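# A hedged sketch completing Equation 1.2 for this input vector.
# The weight values below are illustrative assumptions, not taken from the original text.
w = np.array([0.5, -0.2, 0.1])  # hypothetical weights; the first one pairs with the bias input
def sigmoid(t):
    return 1.0 / (1 + np.exp(-t))
h1 = sigmoid(np.sum(a * w))  # activation A applied to the weighted sum of inputs
print(h1)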
Explanation: $$ f(x) = \frac{1}{1 + e^{-x}} $$
Calculating a Neuron
Equation 1.2: Calculate H1
$$ h_1 = A(\sum_{c=1}^n (i_c * w_c)) $$
End of explanation |
6,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dependencies
Step1: nbformat doc
Step2: whoosh doc
Step3: widget discussion | Python Code:
import nbformat
Explanation: Dependencies:
- whoosh
- yattag
- hurry.filesize
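If any of these are missing, installing them from PyPI should look roughly like this (package names assumed to match the imports used below):
!pip install whoosh yattag hurry.filesize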
End of explanation
from whoosh.index import create_in
from whoosh.fields import *
from whoosh.qparser import QueryParser
Explanation: nbformat doc: http://nbformat.readthedocs.org/en/latest/api.html
End of explanation
import os, fnmatch
import stat
import datetime
schema = Schema(
title=TEXT(stored=True),
markdown=TEXT(stored=True),
code=TEXT(stored=True),
path=ID(stored=True),
user=KEYWORD(stored=True),
tags=KEYWORD(stored=True),
modified=DATETIME(stored=True),
accessed=DATETIME(stored=True),
size=NUMERIC(stored=True)
)
ix = create_in("indexdir", schema)
writer = ix.writer()
def get_file_info(path):
info = dict()
try:
st = os.stat(path)
info['uid'] = st.st_uid
info['gid'] = st.st_gid
info['size'] = st.st_size
info['atime'] = st.st_atime
info['mtime'] = st.st_mtime
info['ctime'] = st.st_ctime
except IOError:
print("Failed to get information about", file)
else:
try:
import pwd # not available on all platforms
userinfo = pwd.getpwuid(st[stat.ST_UID])
except (ImportError, KeyError):
print("Failed to get the owner name for", file)
else:
info['owner'] = userinfo.pw_name
info['complete_owner'] = userinfo.pw_gecos
return info
def add_notebook(writer, path):
info = get_file_info(path)
note = nbformat.read(path, nbformat.NO_CONVERT)
markdown = ''
code = ''
tags = ''
for cell in note['cells']:
if cell['cell_type'] == 'markdown':
markdown += cell['source']
markdown += '\n\n'
if cell['cell_type'] == 'code':
code += cell['source']
code += '\n\n'
writer.add_document(
title=''.join(path.split('/')[-1].split('.')[:-1]),
path=path,
markdown=markdown,
code=code,
user=info['owner'],
tags=tags,
modified=datetime.datetime.fromtimestamp(info['mtime']),
accessed=datetime.datetime.fromtimestamp(info['atime']),
size=info['size']
)
def find(path, pattern, antipattern):
result = []
for root, dirs, files in os.walk(path):
for name in files:
if fnmatch.fnmatch(name, pattern) and not fnmatch.fnmatch(name, antipattern):
result.append(os.path.join(root, name))
return result
file_path = find('/Users/lukas/Projects', '*.ipynb', '*.ipynb_checkpoints')
for path in file_path:
add_notebook(writer, path)
writer.commit()
from IPython.html import widgets
from IPython.display import display, clear_output
from IPython.core.display import HTML
from yattag import Doc
from hurry.filesize import size
display(HTML('''
<style>
.rendered_html tr, .rendered_html th, .rendered_html td {
border-collapse: collapse;
margin: 1em 2em;
}
.rendered_html table {
margin-left: auto;
margin-right: auto;
border: none;
border-collapse: collapse;
}
</style>
'''
))
Explanation: whoosh doc: http://whoosh.readthedocs.org/en/latest/quickstart.html
End of explanation
def format_output(results):
doc, tag, text = Doc().tagtext()
doc.text('Number of hits: {}\n'.format(len(results)))
with tag('table', klass='table table-striped'):
with tag('thead'):
with tag('tr'):
with tag('th'):
doc.text('#')
with tag('th'):
doc.text('Title')
with tag('tbody'):
for idx, result in enumerate(results):
with tag('tr'):
with tag('td'):
doc.text(idx)
with tag('td'):
with tag('table'):
with tag('tbody'):
with tag('tr'):
with tag('td', klass='col-md-6'):
with tag('a', href=result['path']):
doc.text(result['title'])
with tag('td', klass='col-md-6'):
doc.text(' asdf({})'.format(result['user']))
display(HTML(doc.getvalue()))
import whoosh.highlight as highlight
class BracketFormatter(highlight.Formatter):
def format_token(self, text, token, replace=False):
# Use the get_text function to get the text corresponding to the
# token
tokentext = highlight.get_text(text, token, replace=True)
# Return the text as you want it to appear in the highlighted
# string
return "<mark>%s</mark>" % tokentext
def format_output(results):
doc, tag, text = Doc().tagtext()
doc.text('Number of hits: {}\n'.format(len(results)))
for idx, result in enumerate(results):
with tag('div', klass='row'):
with tag('div', klass='col-md-1'):
doc.text(idx)
with tag('div', klass='col-md-11'):
with tag('strong'):
doc.text(result['title'])
with tag('div', klass='row'):
with tag('div', klass='col-md-1'):
pass
with tag('div', klass='col-md-11'):
with tag('a'):
doc.text(result['path'])
with tag('div', klass='row'):
with tag('div', klass='col-md-1'):
pass
with tag('div', klass='col-md-2'):
doc.text('User: {}'.format(result['user']))
with tag('div', klass='col-md-2'):
doc.text('Size: {}B'.format(size(result['size'])))
with tag('div', klass='col-md-7'):
doc.text('Modified: {}'.format(result['modified']))
with tag('div', klass='row'):
with tag('div', klass='col-md-1'):
pass
with tag('div', klass='col-md-11'):
doc.asis(result.highlights('markdown'))
with tag('br'):
pass
display(HTML(doc.getvalue()))
def search(query_string, field='markdown'):
limit = 200
with ix.searcher() as searcher:
query = QueryParser(field, ix.schema).parse(query_string)
results = searcher.search(query, limit=limit)
brf = BracketFormatter()
results.formatter = brf
results.fragmenter.maxchars = 300
results.fragmenter.surround = 50
format_output(results)
def button_callback(btn):
clear_output()
search(query_string=container.children[0].value)
button = widgets.ButtonWidget(description="Click me!")
button.on_click(button_callback)
text_box = widgets.Text(value='exa*')
container = widgets.HBox(children=(text_box, button))
display(container)
Explanation: widget discussion: http://stackoverflow.com/questions/26352555/how-does-one-get-widget-values-with-a-button-in-ipython
End of explanation |
6,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Versioning Design Pattern
In the Model Versioning design pattern, backward compatibility is achieved by deploying a changed model as a microservice with a different REST endpoint. This is a necessary prerequisite for many of the other patterns discussed in this chapter.
Step1: Download and preprocess data
You'll need to authenticate to your Google Cloud to run the BigQuery query below.
Step2: In the following cell, replace your-cloud-project with the name of your GCP project.
Step3: Model version #1
Step4: Deploying classification model to AI Platform
Replace your-cloud-project below with the name of your cloud project.
Step5: Model version #2
Step6: Note that accuracy will be similar to the XGBoost model. We're just using this to demonstrate how training a model with a different framework could be deployed as a new version.
Step7: Next we'll deploy the updated TF model to AI Platform as a v2.
Step8: Alternative | Python Code:
import json
import numpy as np
import pandas as pd
import xgboost as xgb
import tensorflow as tf
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from google.cloud import bigquery
Explanation: Model Versioning Design Pattern
In the Model Versioning design pattern, backward compatibility is achieved by deploying a changed model as a microservice with a different REST endpoint. This is a necessary prerequisite for many of the other patterns discussed in this chapter.
End of explanation
from google.colab import auth
auth.authenticate_user()
Explanation: Download and preprocess data
You'll need to authenticate to your Google Cloud to run the BigQuery query below.
End of explanation
# Note: this query may take a few minutes to run
%%bigquery df --project your-cloud-project
SELECT
arr_delay,
carrier,
origin,
dest,
dep_delay,
taxi_out,
distance
FROM
`cloud-training-demos.flights.tzcorr`
WHERE
extract(year from fl_date) = 2015
ORDER BY fl_date ASC
LIMIT 300000
df = df.dropna()
df = shuffle(df, random_state=2)
df.head()
# Only include origins and destinations that occur frequently in the dataset
df = df[df['origin'].map(df['origin'].value_counts()) > 500]
df = df[df['dest'].map(df['dest'].value_counts()) > 500]
df = pd.get_dummies(df, columns=['carrier', 'origin', 'dest'])
Explanation: In the following cell, replace your-cloud-project with the name of your GCP project.
End of explanation
# Create a boolean column to indicate whether flight was > 30 mins delayed
df.loc[df['arr_delay'] >= 30, 'arr_delay_bool'] = 1
df.loc[df['arr_delay'] < 30, 'arr_delay_bool'] = 0
df['arr_delay_bool'].value_counts()
classify_model_labels = df['arr_delay_bool']
classify_model_data = df.drop(columns=['arr_delay', 'arr_delay_bool'])
x,y = classify_model_data,classify_model_labels
x_train,x_test,y_train,y_test = train_test_split(x,y)
model = xgb.XGBRegressor(
objective='reg:logistic'
)
# Given the dataset size, this may take 1-2 minutes to run
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
acc = accuracy_score(y_test, np.round(y_pred))
print(acc)
# Save the model
model.save_model('model.bst')
Explanation: Model version #1: predict whether or not the flight is > 30 min delayed
End of explanation
# Set your cloud project
PROJECT = 'your-cloud-project'
!gcloud config set project $PROJECT
BUCKET = PROJECT + '_flight_model_bucket'
# Create a bucket if you don't have one
# You only need to run this once
!gsutil mb gs://$BUCKET
!gsutil cp 'model.bst' gs://$BUCKET
# Create the model resource
!gcloud ai-platform models create flight_delay_prediction
# Create the version
!gcloud ai-platform versions create 'v1' \
--model 'flight_delay_prediction' \
--origin gs://$BUCKET \
--runtime-version=1.15 \
--framework 'XGBOOST' \
--python-version=3.7
# Get a prediction on the first example from our test set
!rm input.json
num_examples = 10
with open('input.json', 'a') as f:
for i in range(num_examples):
f.write(str(x_test.iloc[i].values.tolist()))
f.write('\n')
!cat input.json
# Make a prediction to the deployed model
!gcloud ai-platform predict --model 'flight_delay_prediction' --version \
'v1' --json-instances 'input.json'
# Compare this with actual values
print(y_test.iloc[:5])
Explanation: Deploying classification model to AI Platform
Replace your-cloud-project below with the name of your cloud project.
End of explanation
tf_model = tf.keras.Sequential([
tf.keras.layers.Dense(32, activation='relu', input_shape=[len(x_train.iloc[0])]),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
tf_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
tf_model.fit(x_train, y_train, epochs=10, validation_split=0.1)
Explanation: Model version #2: replace XGBoost with TensorFlow
End of explanation
metrics = tf_model.evaluate(x_test, y_test)
print(metrics)
Explanation: Note that accuracy will be similar to the XGBoost model. We're just using this to demonstrate how training a model with a different framework could be deployed as a new version.
End of explanation
tf_model_path = 'gs://' + BUCKET + '/tf'
tf_model.save(tf_model_path, save_format='tf')
!gcloud ai-platform versions create 'v2' \
--model 'flight_delay_prediction' \
--origin $tf_model_path \
--runtime-version=2.1 \
--framework 'TENSORFLOW' \
--python-version=3.7
# Make a prediction to the new version
!gcloud ai-platform predict --model 'flight_delay_prediction' --version \
'v2' --json-instances 'input.json'
Explanation: Next we'll deploy the updated TF model to AI Platform as a v2.
End of explanation
regression_model_labels = df['arr_delay']
regression_model_data = df.drop(columns=['arr_delay', 'arr_delay_bool'])
x,y = regression_model_data,regression_model_labels
x_train,x_test,y_train,y_test = train_test_split(x,y)
model = xgb.XGBRegressor(
objective='reg:linear'
)
# This will take 1-2 minutes to run
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
for i,val in enumerate(y_pred[:10]):
print(val)
print(y_test.iloc[i])
print()
model.save_model('model.bst')
!gsutil cp model.bst gs://$BUCKET/regression/
!gcloud ai-platform models create 'flights_regression'
# Create the version
!gcloud ai-platform versions create 'v1' \
--model 'flights_regression' \
--origin gs://$BUCKET/regression \
--runtime-version=1.15 \
--framework 'XGBOOST' \
--python-version=3.7
!gcloud ai-platform predict --model 'flights_regression' --version \
'v1' --json-instances 'input.json'
Explanation: Alternative: reframe as a regression problem
In this case, you'd likely want to create a new model resource since the response format of your model has changed.
End of explanation |
6,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Obsah dnesnej prednasky
1. Rozsah platnosti premennej
2. Vnorena funkcia
3. Closure
4. Decorator
Closure - Uzaver
Funkcia, ktora pouziva neglobalnu premennu definovanu mimo svojho tela (vysvetlim)
Da sa to pouzit napriklad na asynchronne programovanie pomocou callbackov (spatne volanie?).
prevzane z
Step1: Funkcia vie citat premennu definovanu mimo svojho tela
Step2: Python nedovoli menit (len citat) hodnotu premennej, ktora nie je lokalna
Python nevyzaduje deklaraciu premennych ale predpoklada, ze premenna priradena v tele funkcie je lokalna.
JavaScripte napriklad vyzaduje deklaraciu lokalnych premennych pomocou var. Ak na to zabudente, tak nic nepredpokalda a pokusi sa hladat premennu medzi globalnymi. Toto casto sposobuje bugy.
Ak chcete vo funkcii priradit hodnotu premennej definovanej mimo nej, musite z nej spravit globalnu premennu
Step3: Ale my globalne premenne nemame radi. Takze toto nebudeme pouzivat
Co si zapamatat o rozsahu platnosti premennych?
funkcia vie citat premenne definovane mimo svojho tela
tieto premenne ale nevie menit.
ak by ich chcela menit, tak pri kompilacii do bitekodu sa vytvori lokalna premenna a ak ju pouzijeme skor ako jej priradime hodnotu, tak mame problem
teoreticky mozeme pouzit globalnu premennu, ale my nechceme
Dalsia vec, ktoru si potrebujeme vysvetlit na to aby sme spravili closure su vnorene funkcie
Ano, funkcie sa daju definovat vo vnutri inej funkcie
Step4: Vnutorna funkcia nie je z vonka dostupna. Existuje len v ramci tela vonkajsej funkcie. Teda pokial ju nevratime ako navratovu hodnotu.
Step5: Na co su vnorene funkcie dobre?
Hlavne skryvaju implementaciu funkcie pred okolim
Umoznuju definovanie komplexnej logiky a mnozstva pomocnych funkcii zatial co zvonka bude dostupna len jedina.
Nemusite tak definovat zbytocne vela "privatnych" funkcii
V pythne ani privatne funkcie niesu, takze toto je jediny sposob ako skryt pomocne funkcie pred svetom
Napriklad ak sa vam opakuje rovnaka postupnost riadkov, tak ju vyjmete do funkcie ale ak je tato zbytocna pre ine funkie, tak zvonka vobec nemusi byt dostupna
tj. kalsicky princip privatnej pomocnej funkcie
Step6: Daju sa pouzit napriklad na kontrolu vstupov odelenu od samotenho vypoctu
Step7: Takze co je to teda ten uzaver?
Funkcia, ktora pouziva neglobalnu premennu definovanu mimo svojho tela
Naco?
Specializovanie vseobecnej funkcie (Partial function application)
Udrziavanie stavu medzi volaniami funkcie
Ano, bude to spinava funkcia
Obcas sa ale uchovavaniu stavu nevyhneme
Ideme spravit funkcionalny sposob ako si uchovavat stav
Chcem ukazat ako sa daju funkcionalne crty pouzit na vylepsenie imperativneho kodu
Specializovanie vseobecnej funkcie (Partial function application)
Teraz len jeden priklad. Poriadne sa tomu budem venovat zajtra
Step8: Udrziavanie stavu medzi volaniami funkcie - Ako?
Funkcia vracajuca vnorenu funkciu, ktora vyuziva lokalnu premennu na udrziavanie stavu.
Predstavte si takuto ulohu
Step9: Na vypocet potrebujeme uchovavat stav
Step10: Obalujuca funkcia definuje rozsah platnosti lokalnych premennych. Po skonceni vykonavania uz na ne neexistuje referencia. Okrem tej, ktora sa vrati ako navratova hodnota. Co je zhodou okolnosti funkcia, ktora jednu z lokalnych premennych pouziva. Tato premenna sa vola volna premenna, kedze na nu neexistuje ziadna ina referencia.
Aj ked k volnej premennej sa este da dostat
Kedze v pythone nie je slovicko private a vsetka kontrola pristupu je len na konvecnii, tak by ste sa toho nemali chytat. Na debugovanie a testovanie je to ale celkom dobre vediet.
Step11: Da sa pristupit k nazvom premennych a aj volnych premennych
Step12: A aj k ich hodnotam
Step13: Na spocitanie priemeru nepotrebujeme cely zoznam. Staci nam suma a pocet.
Tento priklad je ale pokazeny. Kto vie preco? Uz som to naznacil viac krat.
Step14: += je vlastne priradenie a teda spravi z premennych count a total lokalne premenne, pricom ich chce hned predtym aj pouzit
Pripominam, ze predchadzajuci priklad nepriradzoval do premennej, len upravoval mutable objekt.
Podobne ako uplne na zaciatku bol riesenim prikaz global, teraz to bude nonlocal
Step15: Minule som vam slubil vysvetlenie ako opravit jeden hack
Mali sme kod, ktory meral mnozstvo pamati spotrebovanej pri pocitani s generatorom a bez neho
Funkcia measure_add mala vedlajsi efekt, ktory vypisoval spotrebu pamati kazdych 200 000 volani. Potrebovala teda pocitadlo, ktore si uchovavalo stav medzi volaniami. Pouzili sme mutable objekt na to aby sme nemuseli pouzit priradovanie a nesnazili sa pristupit k premennej pred jej definicou alebo aby sme nedefinovali globalnu premennu..
Step16: Ako tento hack opravit?
zabalime to cele do funkcie
zmenime mutable objekt za immutable
definujeme nonlokalnu premennu
vratime vnutornu funkciu
Step17: A teraz to vyskusame
Step18: Pomocou Closure sa da vytvorit napriklad aj jednoducha trieda
Step19: Pomocou closure napriklad takto
Step20: Ale mohol by som vracat aj viac "metod"
Step21: A aby bolo to volanie krajsie, tak mozem spravit nieco taketo
dalo by sa to aj krajsie, ale bol som lenivy
Step22: V skutocnosti su uzaver a objekt vytvoreny z triedy ekvivalentne
http
Step23: Ked uz mame tu funkciu, tak s nou mozeme nieco aj spravit - napriklad obalit niecim inym
Step24: Alebo nahradit niecim uplne inym
Step25: stale plati to, ze je to funkcia, ktora dostava ako parameter funkciu a vracia funkciu
Ako potom takyto dekorator pouzit?
Step26: Dekorovana funkcia sa pouziva namiesto povodnej
Step27: Dekorator je spusteny pri importovani ale dekorovana funkcia az po explicitnom zavolani
Step28: Co dava zmysel ak sa vlastne deje toto
Step29: Co ked ma dekorovana funkcia nejake parametre?
Step30: Co ked ma tych parametrov viac?
To iste ako v predchadzajucom pripade. Wrapper musi mat tie iste parametre.
Skusme to zovseobecnit pre funkcie s hociakym poctom atributov
Step31: No a co pomenovane atributy?
Step32: Teraz si mozeme vyrobit napriklad uplne vseobecny dekorator, ktory bude pocitat pocty volani nejakej funkcie
Step33: dalo by sa to este vylepsit tak, aby som mal praktickejsi pristup k tomu pocitadlu, ale nateraz mi to staci
Pocita sa celkovy pocet volani funkcie
Step34: Dekorator sa da pouzit na Memoizaciu (Memoization)
Ak mame ciste funkcie, tak ich vystup zalezi len od vstupov.
Ak mam dve volania funkcie s rovnakymi atributmi, tak to druhe viem nahradit predchadzajucou hodnotou bez toho, aby som realne spustal vypocet.
Dostal by som teda cachovanie funkcii
Dekorator sa da presne na toto pouzit
Step35: Potrebujeme nejaku strukturu, kde si budeme ukladat priebezne vysledky
Napriklad slovnik
Step36: Toto bola len jednoducha verzia, na memoizovanie funkcie s jednym prametrom.
Rozsirenie na viacero atributov by malo byt pre vas jednoduche
Rozsirenie na pomenovane atributy uz take jednoduche nie je pretoze **kwargs je slovnik a ten nie je hashovatelny
kto vie preco nieje hashovatelny?
Podobne to nebude fungovat ak by hociktory z parametrov nebol hashovatelny.
Odpoved na predchadzajucu otazku
na to aby mohol byt objekt hashovatelny, musi byt nemenny. Pri zmene objektu by sa totiz musel zmenut vysledok hashovacej funkcie (vysledok by mal zavisiet od obsahu obejtku) a teda uplne straca svoj zmysel pri identifikacii objektu
Co ked chcem dat dekoratoru nejake parametre?
Tu je syntax trochu nestastna a musim to zabalit este do jednej funkcie
Step37: Pouzit to viem napriklad na vytvorenie dekoratora, ktory mi bude logovat volania funkcie a ja si zvolim uroven logovania
Step41: Dekorovanim menim niektore atributy funkcie
Step44: Ak dekorujem funkciu, tak nova funkcia dostane __name__, __doc__, __module__ atributy z dekoratora a nie z tej povodnej funkcie.
__module__ sa nezmeni, kedze dekorator je definovany v tom istom module, ak by som ho ale importoval ako balicek, tak by sa zmenilo aj to
Nastastie na to mame riesenie - dalsi dekorator
tento nastastie ale staci importovat a takmer nijak to nekomplikuje nas povodny kod
Step45: Sumarizujeme - Rozne mozne formy dekoratorov
Nahradzajuci generator nahradi funkciu uplne niecim inym
Step46: Obalujuci dekorator prida nieco pred a/alebo za volanie funkcie
Step47: Dekorator uchovavajuci si stav
Step48: Parametrizovany dekorator
Step49: Registracny dekorator vykona nieco pri registracii funkcie
vykona nieco pri registracii funkcie v case importovania a nie vykonavania samotenej funkcie.
kludne si moze dekorator udrzovat nejaky stav pomocou lokalnych premennych | Python Code:
def f1(a):
print(a)
print(b)
f1(3)
b = 6
f1(3)
Explanation: Obsah dnesnej prednasky
1. Rozsah platnosti premennej
2. Vnorena funkcia
3. Closure
4. Decorator
Closure - Uzaver
Funkcia, ktora pouziva neglobalnu premennu definovanu mimo svojho tela (vysvetlim)
Da sa to pouzit napriklad na asynchronne programovanie pomocou callbackov (spatne volanie?).
prevzane z: Ramalho, Luciano. Fluent Python. O'Reilly Media, 2015.
Rozsah platnosti premennych
End of explanation
b = 6
def f1(a):
print(a)
print(b)
f1(3)
b = 6
def f2(a):
print(a)
print(b)
b = 9
f2(3)
b = 6
def f2(a):
print(a)
print(b) # tuna nevypisujeme obsah premennej z prveho riadku ale tej, ktora je az na dalsom riadku. To logicky nemoze prejst
b = 9 # akonahle raz vo funkcii priradujete do premennej, tak sa vytvori nova, lokalna pri kompilacii do bitekodu.
f2(3)
Explanation: Funkcia vie citat premennu definovanu mimo svojho tela
End of explanation
b = 6
def f3(a):
global b
print(a)
print(b)
b = 9
f3(3)
print(b)
Explanation: Python nedovoli menit (len citat) hodnotu premennej, ktora nie je lokalna
Python nevyzaduje deklaraciu premennych ale predpoklada, ze premenna priradena v tele funkcie je lokalna.
JavaScripte napriklad vyzaduje deklaraciu lokalnych premennych pomocou var. Ak na to zabudente, tak nic nepredpokalda a pokusi sa hladat premennu medzi globalnymi. Toto casto sposobuje bugy.
Ak chcete vo funkcii priradit hodnotu premennej definovanej mimo nej, musite z nej spravit globalnu premennu
End of explanation
def outer():
def inner(a):
return a + 7
return inner
Explanation: Ale my globalne premenne nemame radi. Takze toto nebudeme pouzivat
Co si zapamatat o rozsahu platnosti premennych?
funkcia vie citat premenne definovane mimo svojho tela
tieto premenne ale nevie menit.
ak by ich chcela menit, tak pri kompilacii do bitekodu sa vytvori lokalna premenna a ak ju pouzijeme skor ako jej priradime hodnotu, tak mame problem
teoreticky mozeme pouzit globalnu premennu, ale my nechceme
Dalsia vec, ktoru si potrebujeme vysvetlit na to aby sme spravili closure su vnorene funkcie
Ano, funkcie sa daju definovat vo vnutri inej funkcie
End of explanation
def outer():
def inner(a):
return a + 7
return inner
inner(3) # zvonka funkcia nieje dostupna a je teda chranena (nikto k nej nemoze a nezaspini nam priestor mien)
pom = outer()
pom
Explanation: Vnutorna funkcia nie je z vonka dostupna. Existuje len v ramci tela vonkajsej funkcie. Teda pokial ju nevratime ako navratovu hodnotu.
End of explanation
def process(file_name):
def do_stuff(file_process):
for line in file_process:
print(line)
if isinstance(file_name, str):
with open(file_name, 'r') as f:
do_stuff(f)
else:
do_stuff(file_name)
Explanation: Na co su vnorene funkcie dobre?
Hlavne skryvaju implementaciu funkcie pred okolim
Umoznuju definovanie komplexnej logiky a mnozstva pomocnych funkcii zatial co zvonka bude dostupna len jedina.
Nemusite tak definovat zbytocne vela "privatnych" funkcii
V pythne ani privatne funkcie niesu, takze toto je jediny sposob ako skryt pomocne funkcie pred svetom
Napriklad ak sa vam opakuje rovnaka postupnost riadkov, tak ju vyjmete do funkcie ale ak je tato zbytocna pre ine funkie, tak zvonka vobec nemusi byt dostupna
tj. kalsicky princip privatnej pomocnej funkcie
End of explanation
# https://realpython.com/blog/python/inner-functions-what-are-they-good-for/
def factorial(number):
# error handling
if not isinstance(number, int):
raise TypeError("Sorry. 'number' must be an integer.")
if not number >= 0:
raise ValueError("Sorry. 'number' must be zero or positive.")
# logika spracovania je pekne sustredena na jednom mieste
def inner_factorial(number):
if number <= 1:
return 1
return number*inner_factorial(number-1)
return inner_factorial(number)
# call the outer function
print(factorial(4))
Explanation: Daju sa pouzit napriklad na kontrolu vstupov odelenu od samotenho vypoctu
End of explanation
def make_power(a):
def power(b):
return b ** a
return power
square = make_power(2)
square(3)
Explanation: Takze co je to teda ten uzaver?
Funkcia, ktora pouziva neglobalnu premennu definovanu mimo svojho tela
Naco?
Specializovanie vseobecnej funkcie (Partial function application)
Udrziavanie stavu medzi volaniami funkcie
Ano, bude to spinava funkcia
Obcas sa ale uchovavaniu stavu nevyhneme
Ideme spravit funkcionalny sposob ako si uchovavat stav
Chcem ukazat ako sa daju funkcionalne crty pouzit na vylepsenie imperativneho kodu
Specializovanie vseobecnej funkcie (Partial function application)
Teraz len jeden priklad. Poriadne sa tomu budem venovat zajtra
End of explanation
# zatial nespustat. avg nieje definovane
avg(10)
# 10.0
avg(11)
# 10.5
avg(12)
# 11
Explanation: Udrziavanie stavu medzi volaniami funkcie - Ako?
Funkcia vracajuca vnorenu funkciu, ktora vyuziva lokalnu premennu na udrziavanie stavu.
Predstavte si takuto ulohu:
Chceme funkciu, ktora bude pocitat priemer stale rastuceho poctu cisel.
Poziadavka je aby sme to vedeli spravit v jednom prechode cez data nad potencialne nekonecnou sekvenciou dat.
Jedna moznost je generator, druha je uzaver
Chceme funkciu, ktoru budeme opakovane volat s dalsim a dalsim cislom a vracat nam to bude vzdy aktualny priemer vsetkych doterajsich cisel
End of explanation
def make_averager():
series = [] # tato premenna je platna len vo funkcii make_averager. Je pre nu lokalna. Mimo nej neexistuje.
def averager(new_value):
series.append(new_value) # vieme pristupit k premennej definovanej vyssie.
# Kedze list je mutable, tak ho vieme aj zmenit. Pozor, nemenime premennu, menime objekt!
# Aby sme menili premennu, tak by tu muselo byt =
total = sum(series)
return total/len(series)
return averager
avg = make_averager()
print(avg(10))
print(avg(11))
print(avg(12))
Explanation: Na vypocet potrebujeme uchovavat stav: sumu a pocet hodnot. Kde sa uklada stav?
V globalnej premennej? Nie, nechceme si zapratat priestor mien nejakymi nahodnymi premennymi, ktore by nam mohol hocikto prepisat.
Potrebuje nieco, co nam tie premenne schova.
Nieco taketo by sa dalo implementovat pomocou uzaveru
End of explanation
avg
# je to funkcia, ktora je definovana vo funkcii make_averager ako lokalna premenna s nazvom averager
Explanation: Obalujuca funkcia definuje rozsah platnosti lokalnych premennych. Po skonceni vykonavania uz na ne neexistuje referencia. Okrem tej, ktora sa vrati ako navratova hodnota. Co je zhodou okolnosti funkcia, ktora jednu z lokalnych premennych pouziva. Tato premenna sa vola volna premenna, kedze na nu neexistuje ziadna ina referencia.
Aj ked k volnej premennej sa este da dostat
Kedze v pythone nie je slovicko private a vsetka kontrola pristupu je len na konvecnii, tak by ste sa toho nemali chytat. Na debugovanie a testovanie je to ale celkom dobre vediet.
End of explanation
print(avg.__code__.co_varnames)
print(avg.__code__.co_freevars)
Explanation: Da sa pristupit k nazvom premennych a aj volnych premennych
End of explanation
print(avg.__closure__)
print(avg.__closure__[0].cell_contents) # tato hodnota sa da aj zmenit, ale nerobte to.
Explanation: A aj k ich hodnotam
End of explanation
def make_averager():
count = 0
total = 0
def averager(new_value):
count += 1
total += new_value
return total / count
return averager
avg = make_averager()
avg(10)
Explanation: Na spocitanie priemeru nepotrebujeme cely zoznam. Staci nam suma a pocet.
Tento priklad je ale pokazeny. Kto vie preco? Uz som to naznacil viac krat.
End of explanation
def make_averager():
count = 0
total = 0
def averager(new_value):
nonlocal count, total # tieto dve premenne teda nebudu lokalne v ramci funkcie averager
# ale sa zoberu z funkcie o uroven vyssie
count += 1
total += new_value
return total / count
return averager
avg = make_averager()
avg(10)
Explanation: += je vlastne priradenie a teda spravi z premennych count a total lokalne premenne, pricom ich chce hned predtym aj pouzit
Pripominam, ze predchadzajuci priklad nepriradzoval do premennej, len upravoval mutable objekt.
Podobne ako uplne na zaciatku bol riesenim prikaz global, teraz to bude nonlocal
End of explanation
from functools import reduce
import gc
import os
import psutil
process = psutil.Process(os.getpid())
def print_memory_usage():
print(process.memory_info().rss)
counter = [0] # Toto je ta globalny mutable objekt
def measure_add(a, result, counter=counter):
if counter[0] % 200000 == 0:
print_memory_usage()
counter[0] = counter[0] + 1
return a + result
gc.collect()
counter[0] = 0
print_memory_usage()
print(reduce(measure_add, [x*x for x in range(1000000)]))
Explanation: Minule som vam slubil vysvetlenie ako opravit jeden hack
Mali sme kod, ktory meral mnozstvo pamati spotrebovanej pri pocitani s generatorom a bez neho
Funkcia measure_add mala vedlajsi efekt, ktory vypisoval spotrebu pamati kazdych 200 000 volani. Potrebovala teda pocitadlo, ktore si uchovavalo stav medzi volaniami. Pouzili sme mutable objekt na to aby sme nemuseli pouzit priradovanie a nesnazili sa pristupit k premennej pred jej definicou alebo aby sme nedefinovali globalnu premennu..
End of explanation
counter = [0] # Toto je ta globalny mutable objekt
def measure_add(a, result, counter=counter):
if counter[0] % 200000 == 0:
print_memory_usage()
counter[0] = counter[0] + 1
return a + result
# toto tu mam en pre kontrolu, aby som nespravil chybu
def make_adder():
counter = 0
def adder(a, result):
nonlocal counter
if counter % 2000000 == 0:
print_memory_usage()
counter += 1
return a+result
return adder
Explanation: Ako tento hack opravit?
zabalime to cele do funkcie
zmenime mutable objekt za immutable
definujeme nonlokalnu premennu
vratime vnutornu funkciu
End of explanation
measure_add = make_adder()
gc.collect()
print_memory_usage()
print(reduce(measure_add, [x*x for x in range(10000000)]))
Explanation: A teraz to vyskusame
End of explanation
class A:
def __init__(self, x):
self._x = x
def incr(self):
self._x += 1
return self._x
obj = A(0)
obj.incr()
Explanation: Pomocou Closure sa da vytvorit napriklad aj jednoducha trieda
End of explanation
def A(x):
def incr():
nonlocal x
x +=1
return x
return incr
obj = A(0)
obj()
Explanation: Pomocou closure napriklad takto
End of explanation
def A(x):
def incr():
nonlocal x
x +=1
return x
def twice():
incr()
incr()
return x
return {'incr': incr, 'twice':twice}
obj = A(0)
obj['incr']()
# obj['twice']()
Explanation: Ale mohol by som vracat aj viac "metod"
End of explanation
import pyrsistent as ps
def A(x):
def incr():
nonlocal x
x +=1
return x
def twice():
incr()
incr()
return x
return ps.freeze({'incr': incr, 'twice':twice})
obj = A(0)
# obj.incr()
obj.twice()
Explanation: A aby bolo to volanie krajsie, tak mozem spravit nieco taketo
dalo by sa to aj krajsie, ale bol som lenivy :)
End of explanation
# takyto dekorator s tou obalovanou funkciou vlastne nic nespravi
def najjednoduchsi_mozny_dekorator(param_fct):
return param_fct # vsimnite si, ze tu niesu zatvorky. Cize sa vracia funkcia ako objekt a nevykonava sa
Explanation: V skutocnosti su uzaver a objekt vytvoreny z triedy ekvivalentne
http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
Aky-taky preklad
Ctihodný majster Qc Na šiel so svojím študentom, Antonom. Dúfajúc, že vyzve majstra do diskusie, Anton povedal: "Pane, počul som, že objekty sú veľmi dobrá vec - je to pravda?" Qc Na pozrel súcitne na svojho študenta a odpovedal: "pochabý žiak - objekty sú len chudákove uzávery."
Pokarhaný Anton odišiel od svojho majstra a vrátil sa do svojej cely, študovať uzávery. Starostlivo si prečítal celú "Lambda: The Ultimate ..." sériu článkov spolu s referenciami, a implementoval malý Scheme interpret s objektovým modelom založeným na uzáveroch. Naučil sa veľa, a tešil sa na to ako bude informovať svojho majstra o svojom pokroku.
Na jeho ďalšej ceste s Qc Na, sa Anton pokúšal zapôsobiť na svojho pána tým, že hovorí: "Majstre, usilovne som študoval a pochopil som, že objekty sú skutočne chudákove uzávery." Qc Na reagoval tým, že udrel Antona palicou. Hovorí: "Kedy sa poučíš? Uzávery sú objektami chudáka." V tej chvíli Anton dosiahol osvietenie.
Objekty a uzavery maju rovnaku vyjadrovaciu silu
Dolezite je vybrat si ktore pouzit v ktorej situacii, tak aby ste vyuzili pekne vlastnosti oboch.
Ak toto dokazete, tak ste sa stali skutocnymi odbornikmi a dosiahnete nirvanu :)
Decorator
Decorator (Python) vs. navrhovy vzor Dekorator
Navrhovy vzor
Ciel: pridanie dodatocnej zodpovednosti objektu dynamicky. Za behu.
Ak mame velmi vela moznych kombinacii rozsireni nejakeho objektu
Priklad s kaviarnou (Priklad je v pythone ale nepouziva konstrukciu decorator. Je to len implementacia anvrhoveho vzoru.)
Zabalenie objektu do noveho objektu, kde ten stary je parametrom konstruktora a pri volani hociakej metody sa vola aj metoda obalovaneho objektu. Piklad v Jave
Obalujuci objekt sa puziva namiesto obalovaneho
Konstrukcia v pythone
Obalenie funkcie alebo triedy do vlastneho kodu
Pouziva sa pri definicii metody alebo triedy
Nie priamo urcene na individualne objekty
Pomocou tejto konstrukcie by sa dal implementovat navrhovy vzor Dekorator, ale to by bolo len velmi obmedzene vyuzitie.
inspirovane - http://www.artima.com/weblogs/viewpost.jsp?thread=240808
Co je dekorator konstrukcia v pythone
Zabalenie funkcie do nejakej inej a pouzivanie obalenej namiesto povodnej
neznie vam to podobne ako aspektovo orientovane programovanie v jave? Je to podobne, ale jednoduchsie (pouzitim a aj moznostami)
Moze byt implementovany hociakym zavolatelnym objektom (funkcia alebo objekt triedy implementujucej metodu __call__)
Decorator moze byt hociaka funkcia, ktora prijma inu funkciu ako parameter a vracia funkciu
End of explanation
def zaujimavejsi_dekorator(param_fct):
def inner():
do_stuff()
result = param_fct()
do_another_stuff()
return result
return inner
Explanation: Ked uz mame tu funkciu, tak s nou mozeme nieco aj spravit - napriklad obalit niecim inym
End of explanation
def nahradzujuci_dekorator(param_fct):
def nieco_uplne_ine():
pass
return nieco_uplne_ine
Explanation: Alebo nahradit niecim uplne inym
End of explanation
def function(): # funkcia, ktoru chceme dekorovat
pass
function = decorator(function)
# syntakticky cukor
@decorator
def function():
pass
Explanation: stale plati to, ze je to funkcia, ktora dostava ako parameter funkciu a vracia funkciu
Ako potom takyto dekorator pouzit?
End of explanation
def deco(func):
def inner():
print('running inner()')
return inner
@deco
def target():
print('running target()')
target()
target
Explanation: Dekorovana funkcia sa pouziva namiesto povodnej
End of explanation
registry = []
def register(func):
print('running register(%s)' % func) # nejaky kod sa vykona pri registrovani
registry.append(func)
return func # vracia sa ta ista funkcia bezo zmeny
@register
def f1():
print('running f1()')
@register
def f2():
print('running f2()')
def f3():
print('running f3()')
registry
Explanation: Dekorator je spusteny pri importovani ale dekorovana funkcia az po explicitnom zavolani
End of explanation
def f1():
print('running f1()')
f1 = register(f1)
def f2():
print('running f2()')
f2 = register(f2)
def f3():
print('running f3()')
f1()
f2()
f3()
Explanation: Co dava zmysel ak sa vlastne deje toto
End of explanation
def my_print(string):
print(string)
def param_decorator(param_fct):
def wrapper(string): # wrapper musi mat tie iste parametre
print('wrapper stuff')
return param_fct(string)
return wrapper
@param_decorator # pri pouzivani dekoratora sa potom nic nemeni
def my_print(string):
print(string)
my_print('hello')
Explanation: Co ked ma dekorovana funkcia nejake parametre?
End of explanation
def param_decorator2(param_fct):
def wrapper(*args): # wrapper musi mat tie iste parametre
print('wrapper stuff')
return param_fct(*args) # co sa stane ked tu nebude *?
return wrapper
@param_decorator2
def my_print(string):
print(string)
@param_decorator2
def my_print_more(string1, string2, string3):
print(string1, string2, string3)
@param_decorator2
def my_print_many(*args):
print(*args)
my_print('hello')
my_print_more('hello', 'hello2', 'hello3')
my_print_many('hello', 'hello2', 'hello3', 'hello4', 'hello5')
Explanation: Co ked ma tych parametrov viac?
To iste ako v predchadzajucom pripade. Wrapper musi mat tie iste parametre.
Skusme to zovseobecnit pre funkcie s hociakym poctom atributov
End of explanation
def my_print_optional(first, second='second', third='third'):
print(first, second, third)
my_print_optional('1', '2', '3')
my_print_optional('1', '2')
my_print_optional('1')
my_print_optional('1', third='3', second='2')
my_print_optional('1', third='3')
def param_decorator3(param_fct):
def wrapper(*args, **kwargs): # wrapper musi mat tie iste parametre
print('wrapper stuff')
return param_fct(*args, **kwargs)
return wrapper
@param_decorator3
def my_print_optional(first, second='second', third='third'):
print(first, second, third)
my_print_optional('1', '2', '3')
my_print_optional('1', '2')
my_print_optional('1')
my_print_optional('1', third='3', second='2')
my_print_optional('1', third='3')
Explanation: No a co pomenovane atributy?
End of explanation
def counter_decorator(fct):
counter = 0
def wrapper(*args, **kwargs):
nonlocal counter
counter += 1
return fct(*args, **kwargs)
return wrapper
@counter_decorator
def counted_fib(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return counted_fib(n-1) + counted_fib(n-2)
counted_fib(10)
print(counted_fib.__closure__[0].cell_contents)
Explanation: Teraz si mozeme vyrobit napriklad uplne vseobecny dekorator, ktory bude pocitat pocty volani nejakej funkcie
End of explanation
counted_fib(5)
print(counted_fib.__closure__[0].cell_contents)
counted_fib(5)
print(counted_fib.__closure__[0].cell_contents)
Explanation: dalo by sa to este vylepsit tak, aby som mal praktickejsi pristup k tomu pocitadlu, ale nateraz mi to staci
Pocita sa celkovy pocet volani funkcie
End of explanation
def fib(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fib(n-1) + fib(n-2)
fib(10)
Explanation: Dekorator sa da pouzit na Memoizaciu (Memoization)
Ak mame ciste funkcie, tak ich vystup zalezi len od vstupov.
Ak mam dve volania funkcie s rovnakymi atributmi, tak to druhe viem nahradit predchadzajucou hodnotou bez toho, aby som realne spustal vypocet.
Dostal by som teda cachovanie funkcii
Dekorator sa da presne na toto pouzit
End of explanation
def memoize(f):
memo = {}
counter = 0
def wrapper(x):
if x not in memo:
nonlocal counter # toto by tu nemuselo byt, ale ja chcem vediet kolko som si usetril volani
memo[x] = f(x)
counter += 1 # toto by tu nemuselo byt
return memo[x]
return wrapper
@memoize
def memoized_fib(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return memoized_fib(n-1) + memoized_fib(n-2)
memoized_fib(10)
print(memoized_fib.__closure__[0].cell_contents)
@counter_decorator
def counted_fib(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return counted_fib(n-1) + counted_fib(n-2)
counted_fib(10)
print(counted_fib.__closure__[0].cell_contents)
Explanation: Potrebujeme nejaku strukturu, kde si budeme ukladat priebezne vysledky
Napriklad slovnik
End of explanation
def decorator(argument): # tuto jednu funkciu som tam pridal
def real_decorator(param_funct): # tu bu zacinal dekorator bez parametrov
def wrapper(*args, **kwargs):
before_stuff()
something_with_argument(argument)
            param_funct(*args, **kwargs)
after_stuff()
return wrapper
return real_decorator
@decorator(argument_value)
def my_fct():
pass
Explanation: Toto bola len jednoducha verzia, na memoizovanie funkcie s jednym prametrom.
Rozsirenie na viacero atributov by malo byt pre vas jednoduche
Rozsirenie na pomenovane atributy uz take jednoduche nie je pretoze **kwargs je slovnik a ten nie je hashovatelny
kto vie preco nieje hashovatelny?
Podobne to nebude fungovat ak by hociktory z parametrov nebol hashovatelny.
Odpoved na predchadzajucu otazku
na to aby mohol byt objekt hashovatelny, musi byt nemenny. Pri zmene objektu by sa totiz musel zmenut vysledok hashovacej funkcie (vysledok by mal zavisiet od obsahu obejtku) a teda uplne straca svoj zmysel pri identifikacii objektu
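A minimal sketch of the extension to multiple positional arguments mentioned above, assuming every argument is hashable (illustrative only, not code from the lecture):
def memoize_args(f):
    memo = {}
    def wrapper(*args):
        if args not in memo:
            memo[args] = f(*args)
        return memo[args]
    return wrapper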
Co ked chcem dat dekoratoru nejake parametre?
Tu je syntax trochu nestastna a musim to zabalit este do jednej funkcie
End of explanation
def log(level, message):
print("{0}: {1}".format(level, message))
def log_decorator(level):
def decorator(f):
def wrapper(*args, **kwargs):
log(level, "Function {} started.".format(f.__name__))
result = f(*args, **kwargs)
log(level, "Function {} finished.".format(f.__name__))
return result
return wrapper
return decorator
@log_decorator('debug')
def my_print(*args):
print(*args)
my_print('ide to?')
Explanation: Pouzit to viem napriklad na vytvorenie dekoratora, ktory mi bude logovat volania funkcie a ja si zvolim uroven logovania
End of explanation
def some_fct():
doc string of some_fct
print("some stuff")
some_fct()
print(some_fct.__name__)
print(some_fct.__doc__)
print(some_fct.__module__)
def decorator(f):
def wrapper_fct(*args, **kwargs):
wrapper_fct doc string
return f(*args, **kwargs)
return wrapper_fct
@decorator
def some_fct():
doc string of some_fct
print("some stuff")
some_fct()
print(some_fct.__name__)
print(some_fct.__doc__)
print(some_fct.__module__)
Explanation: Dekorovanim menim niektore atributy funkcie
End of explanation
from functools import wraps
def decorator(f):
@wraps(f) #mame na to dekorator, ktory tieto atributy skopiruje
def wrapper_fct(*args, **kwargs):
wrapper_fct doc string
return f(*args, **kwargs)
return wrapper_fct
@decorator
def some_fct():
doc string of some_fct
print("some stuff")
some_fct()
print(some_fct.__name__)
print(some_fct.__doc__)
print(some_fct.__module__)
Explanation: Ak dekorujem funkciu, tak nova funkcia dostane __name__, __doc__, __module__ atributy z dekoratora a nie z tej povodnej funkcie.
__module__ sa nezmeni, kedze dekorator je definovany v tom istom module, ak by som ho ale importoval ako balicek, tak by sa zmenilo aj to
Nastastie na to mame riesenie - dalsi dekorator
tento nastastie ale staci importovat a takmer nijak to nekomplikuje nas povodny kod
End of explanation
def nahradzujuci_dekorator(param_fct):
def nieco_uplne_ine():
pass
return nieco_uplne_ine
Explanation: Sumarizujeme - Rozne mozne formy dekoratorov
Nahradzajuci generator nahradi funkciu uplne niecim inym
End of explanation
def obalujuci_dekorator(param_fct):
def inner():
before_call()
result = param_fct()
after_call()
return result
return inner
Explanation: Obalujuci dekorator prida nieco pred a/alebo za volanie funkcie
End of explanation
def obalujuci_dekorator(param_fct):
stav = hodnota
def inner():
nonlocal stav # ak mame mutable objekt ako stav, tak netreba pouzivat nonlocal
stav = ina_hodnota
return param_fct()
return inner
Explanation: Dekorator uchovavajuci si stav
End of explanation
def vonkajsi_decorator(argument):
def decorator(param_funct):
def fct_wrapper(*args, **kwargs):
before_stuff()
something_with_argument(argument)
            param_funct(*args, **kwargs)
after_stuff()
return fct_wrapper
return decorator
@vonkajsi_decorator(parameter)
def funkcia():
pass
Explanation: Parametrizovany dekorator
End of explanation
def registracny_dekorator(param_func):
when_registering_stuff()
return param_func
Explanation: Registracny dekorator vykona nieco pri registracii funkcie
vykona nieco pri registracii funkcie v case importovania a nie vykonavania samotenej funkcie.
kludne si moze dekorator udrzovat nejaky stav pomocou lokalnych premennych
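A minimal sketch of a registration decorator that keeps its registry in a closure instead of a module-level list (illustrative only, not code from the lecture):
def make_register():
    registry = []
    def register(func):
        registry.append(func)
        return func
    register.registry = registry  # expose the collected functions for inspection
    return register
register = make_register()
@register
def registered_example():
    pass
print(register.registry)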
End of explanation |
6,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining labels
Everything is actually done in terms of ensembles. We can map the ensembles to any labels. In our case, we use the initial replica ID associated with the ensemble. We use this as both the numeric and string label.
Step1: Trace of ensemble visited by a replica
In the plot below, you'll see we set the labels on the axis as sset0[e].replica, so we use the replica ID associated with the ensemble in the first timestep.
Step2: Replica flow
Step3: Trips
Now we calculate "up" trips, "down" trips, and round-trips.
Step4: Transition matrix
The transition matrix is the (unsymmetrized) matrix of the transition probabilities. By default, it automatically sets its order from the Cuthill-McKee reverse ordering algorithm.
Step5: If you would like to set a different order, that can be done by providing a list of the ensembles in whatever order you choose
Step6: Mixing matrix
Same as the transition matrix $T$, but $\frac{1}{2}(T+T^T)$.
Step7: Making a pretty picture
Step8: Blue is a minus interface, red is a normal interface. Multiple state outer interfaces (not in this example) would be green.
Alternate way of calculating transitions
There's another, perhaps better, way to calculate transitions. This does double count, but doesn't care if the how the transition happened (only that it did). | Python Code:
sset0 = storage.samplesets[0]
numeric_labels = { s.ensemble : s.replica for s in sset0}
string_labels = { s.ensemble : str(s.replica) for s in sset0 }
numeric_to_string = { numeric_labels[e] : string_labels[e] for e in numeric_labels.keys()}
Explanation: Defining labels
Everything is actually done in terms of ensembles. We can map the ensembles to any labels. In our case, we use the initial replica ID associated with the ensemble. We use this as both the numeric and string label.
End of explanation
%%time
trace_1 = paths.trace_ensembles_for_replica(0, storage.steps)
plt.plot([numeric_labels[e] for e in trace_1])
Explanation: Trace of ensemble visited by a replica
In the plot below, you'll see we set the labels on the axis as sset0[e].replica, so we use the replica ID associated with the ensemble in the first timestep.
End of explanation
repx_net = paths.ReplicaNetwork(scheme, storage.steps)
print repx_net
flow = repx_net.flow(bottom=retis.minus_ensemble, top=retis.ensembles[-1])
print flow
flow_num = {numeric_labels[k] : flow[k] for k in flow.keys()}
print flow_num
sorted_vals = []
for k in sorted(flow_num.keys()):
sorted_vals.append(flow_num[k])
plt.plot(sorted(flow_num.keys()), sorted_vals)
Explanation: Replica flow
End of explanation
repx_net.trips(bottom=retis.minus_ensemble, top=retis.ensembles[-1])
Explanation: Trips
Now we calculate "up" trips, "down" trips, and round-trips.
End of explanation
repx_net.transition_matrix()
Explanation: Transition matrix
The transition matrix is the (unsymmetrized) matrix of the transition probabilities. By default, it automatically sets its order from the Cuthill-McKee reverse ordering algorithm.
End of explanation
import numpy as np
perm = np.random.permutation(len(mstis.all_ensembles))
print perm
order = [mstis.all_ensembles[p] for p in perm]
repx_net.transition_matrix(index_order=order)
Explanation: If you would like to set a different order, that can be done by providing a list of the ensembles in whatever order you choose:
End of explanation
repx_net.mixing_matrix()
Explanation: Mixing matrix
Same as the transition matrix $T$, but $\frac{1}{2}(T+T^T)$.
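For reference, the symmetrization itself is easy to reproduce by hand from any transition matrix (a generic NumPy sketch, not part of the OpenPathSampling API):
import numpy as np
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])  # toy transition matrix
mixing = 0.5 * (T + T.T)
print(mixing)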
End of explanation
repxG = paths.ReplicaNetworkGraph(repx_net)
# draw('graphviz') gives better results, but requires pygraphviz
repxG.draw('spring')
Explanation: Making a pretty picture
End of explanation
transitions = repx_net.transitions_from_traces(storage.steps)
for (k1, k2) in transitions.keys():
print numeric_labels[k1], numeric_labels[k2], transitions[(k1, k2)]
for (k1, k2) in repx_net.analysis['n_accepted'].keys():
print numeric_labels[k1], numeric_labels[k2], repx_net.analysis['n_accepted'][(k1, k2)]
Explanation: Blue is a minus interface, red is a normal interface. Multiple state outer interfaces (not in this example) would be green.
Alternate way of calculating transitions
There's another, perhaps better, way to calculate transitions. This does double count, but doesn't care how the transition happened (only that it did).
End of explanation |
6,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 量子畳み込みニューラルネットワーク
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: TensorFlow Quantum をインストールします。
Step3: 次に、TensorFlow とモジュールの依存関係をインポートします。
Step4: 1. QCNN を構築する
1.1 TensorFlow グラフで回路を組み立てる
TensorFlow Quantum(TFQ)には、グラフ内で回路を構築するために設計されたレイヤークラスがあります。たとえば tfq.layers.AddCircuit レイヤーがあり、tf.keras.Layer を継承しています。このレイヤーは、次の図で示すように、回路の入力バッチの前後いずれかに追加できます。
<img src="./images/qcnn_1.png" width="700">
次のスニペットには、このレイヤーが使用されています。
Step5: 入力テンソルを調べます。
Step6: 次に、出力テンソルを調べます。
Step8: 以下の例は tfq.layers.AddCircuit を使用せずに実行できますが、TensorFlow 計算グラフに複雑な機能を埋め込む方法を理解する上で役立ちます。
1.2 問題の概要
クラスター状態を準備し、「励起」があるかどうかを検出する量子分類器をトレーニングします。クラスター状態は極めてこじれていますが、古典的コンピュータにおいては必ずしも困難ではありません。わかりやすく言えば、これは論文で使用されているデータセットよりも単純です。
この分類タスクでは、次の理由により、ディープ <a href="https
Step9: 通常の機械学習と同じように、モデルのベンチマークに使用するトレーニングとテストのセットを作成していることがわかります。次のようにすると、データポイントを素早く確認できます。
Step11: 1.5 レイヤーを定義する
上記の図で示すレイヤーを TensorFlow で定義しましょう。
1.5.1 クラスター状態
まず始めに、<a href="https
Step12: 矩形の <a href="https
Step16: 1.5.2 QCNN レイヤー
<a href="https
Step17: 作成したものを確認するために、1 キュービットのユニタリー回路を出力しましょう。
Step18: 次に、2 キュービットのユニタリー回路を出力します。
Step19: そして 2 キュービットのプーリング回路を出力します。
Step21: 1.5.2.1 量子畳み込み
<a href="https
Step22: (非常に水平な)回路を表示します。
Step24: 1.5.2.2 量子プーリング
量子プーリングレイヤーは、上記で定義された 2 キュービットプールを使用して、$N$ キュービットから $\frac{N}{2}$ キュービットまでをプーリングします。
Step25: プーリングコンポーネント回路を調べます。
Step27: 1.6 モデルの定義
定義したレイヤーを使用して純粋な量子 CNN を構築します。8 キュービットで開始し、1 キュービットまでプールダウンしてから、$\langle \hat{Z} \rangle$ を測定します。
Step28: 1.7 モデルをトレーニングする
この例を単純化するために、完全なバッチでモデルをトレーニングします。
Step30: 2. ハイブリッドモデル
量子畳み込みを使用して 8 キュービットから 1 キュービットにする必要はありません。量子畳み込みの 1~2 ラウンドを実行し、結果を従来のニューラルネットワークにフィードすることも可能です。このセクションでは、量子と従来のハイブリッドモデルを説明します。
2.1 単一量子フィルタを備えたハイブリッドモデル
量子畳み込みのレイヤーを 1 つ適用し、すべてのビットの $\langle \hat{Z}_n \rangle$ を読み取り、続いて密に接続されたニューラルネットワークを読み取ります。
<img src="./images/qcnn_5.png" width="1000">
2.1.1 モデルの定義
Step31: 2.1.2 モデルをトレーニングする
Step32: ご覧のとおり、非常に控えめな古典的支援により、ハイブリッドモデルは通常、純粋な量子バージョンよりも速く収束します。
2.2 多重量子フィルタを備えたハイブリッド畳み込み
多重量子畳み込みと従来のニューラルネットワークを使用してそれらを組み合わせるアーキテクチャを試してみましょう。
<img src="./images/qcnn_6.png" width="1000">
2.2.1 モデルの定義
Step33: 2.2.2 モデルをトレーニングする | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install tensorflow==2.7.0
Explanation: Quantum Convolutional Neural Networks
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/qcnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/qcnn.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/quantum/tutorials/qcnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial implements a simplified <a href="https://www.nature.com/articles/s41567-019-0648-8" class="external">Quantum Convolutional Neural Network</a> (QCNN), a proposed quantum analogue to a classical convolutional neural network that is also translationally invariant.
This example demonstrates how to detect certain properties of a quantum data source, such as a quantum sensor or a complex simulation from a device. The quantum data source is a <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> that may or may not have an excitation, which is what the QCNN will learn to detect (the dataset used in the paper was SPT phase classification).
Setup
End of explanation
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
Explanation: Install TensorFlow Quantum.
End of explanation
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
Explanation: Now import TensorFlow and the module dependencies.
End of explanation
qubit = cirq.GridQubit(0, 0)
# Define some circuits.
circuit1 = cirq.Circuit(cirq.X(qubit))
circuit2 = cirq.Circuit(cirq.H(qubit))
# Convert to a tensor.
input_circuit_tensor = tfq.convert_to_tensor([circuit1, circuit2])
# Define a circuit that we want to append
y_circuit = cirq.Circuit(cirq.Y(qubit))
# Instantiate our layer
y_appender = tfq.layers.AddCircuit()
# Run our circuit tensor through the layer and save the output.
output_circuit_tensor = y_appender(input_circuit_tensor, append=y_circuit)
Explanation: 1. Build a QCNN
1.1 Assemble circuits in a TensorFlow graph
TensorFlow Quantum (TFQ) provides layer classes designed for constructing circuits inside a graph. One example is the tfq.layers.AddCircuit layer, which inherits from tf.keras.Layer. This layer can either prepend or append to the input batch of circuits, as shown in the following figure.
<img src="./images/qcnn_1.png" width="700">
The following snippet uses this layer.
End of explanation
print(tfq.from_tensor(input_circuit_tensor))
Explanation: Examine the input tensor.
End of explanation
print(tfq.from_tensor(output_circuit_tensor))
Explanation: Next, examine the output tensor.
End of explanation
def generate_data(qubits):
Generate training and testing data.
n_rounds = 20 # Produces n_rounds * n_qubits datapoints.
excitations = []
labels = []
for n in range(n_rounds):
for bit in qubits:
rng = np.random.uniform(-np.pi, np.pi)
excitations.append(cirq.Circuit(cirq.rx(rng)(bit)))
labels.append(1 if (-np.pi / 2) <= rng <= (np.pi / 2) else -1)
split_ind = int(len(excitations) * 0.7)
train_excitations = excitations[:split_ind]
test_excitations = excitations[split_ind:]
train_labels = labels[:split_ind]
test_labels = labels[split_ind:]
return tfq.convert_to_tensor(train_excitations), np.array(train_labels), \
tfq.convert_to_tensor(test_excitations), np.array(test_labels)
Explanation: While the examples below can be run without using tfq.layers.AddCircuit, it is a good opportunity to understand how complex functionality can be embedded into TensorFlow compute graphs.
1.2 Overview of the problem
You will prepare a cluster state and train a quantum classifier to detect whether it is "excited" or not. The cluster state is highly entangled, but not necessarily difficult for a classical computer. To be clear, this is a simpler dataset than the one used in the paper.
For this classification task you will implement a deep <a href="https://arxiv.org/pdf/quant-ph/0610099.pdf" class="external">MERA</a>-like QCNN architecture, since:
Like the QCNN, the cluster state on a ring is translationally invariant.
The cluster state is highly entangled.
This architecture should be effective at reducing entanglement, obtaining the classification by reading out a single qubit.
<img src="./images/qcnn_2.png" width="1000">
An "excited" cluster state is defined as a cluster state that had a cirq.rx gate applied to any of its qubits. Qconv and QPool are discussed later in this tutorial.
1.3 Building blocks for TensorFlow
<img src="./images/qcnn_3.png" width="1000">
One way to solve this problem with TensorFlow Quantum is to implement the following:
The input to the model is a circuit tensor: either an empty circuit or an X gate on a particular qubit indicating an excitation.
The rest of the model's quantum components are constructed with tfq.layers.AddCircuit layers.
For inference a tfq.layers.PQC layer is used. This reads $\langle \hat{Z} \rangle$ and compares it to a label of 1 for an excited state, or -1 for a non-excited state.
1.4 Data
Before building your model, you can generate your data. In this case it's going to be excitations of the cluster state (the original paper uses a more complicated dataset). Excitations are represented with cirq.rx gates. A large enough rotation is deemed an excitation and is labeled 1; a rotation that isn't large enough is labeled -1 and deemed not an excitation.
End of explanation
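The snippet below simply restates the +/- pi/2 threshold used in generate_data as a standalone helper; it is a sketch for illustration, and threshold_label is not part of the tutorial code.
# Sketch: the same labeling threshold used in generate_data above.
import numpy as np

def threshold_label(angle):
    return 1 if (-np.pi / 2) <= angle <= (np.pi / 2) else -1

print(threshold_label(0.3), threshold_label(2.5))  # 1 -1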
sample_points, sample_labels, _, __ = generate_data(cirq.GridQubit.rect(1, 4))
print('Input:', tfq.from_tensor(sample_points)[0], 'Output:', sample_labels[0])
print('Input:', tfq.from_tensor(sample_points)[1], 'Output:', sample_labels[1])
Explanation: As with regular machine learning, you can see that a training and testing set is created to benchmark the model. You can quickly inspect a datapoint as follows.
End of explanation
def cluster_state_circuit(bits):
Return a cluster state on the qubits in `bits`.
circuit = cirq.Circuit()
circuit.append(cirq.H.on_each(bits))
for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):
circuit.append(cirq.CZ(this_bit, next_bit))
return circuit
Explanation: 1.5 Define layers
Now define the layers shown in the figure above in TensorFlow.
1.5.1 Cluster state
To start, define the <a href="https://arxiv.org/pdf/quant-ph/0504097.pdf" class="external">cluster state</a> using <a href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, Google's framework for programming quantum circuits. Since this is a static part of the model, embed it using the tfq.layers.AddCircuit functionality.
End of explanation
SVGCircuit(cluster_state_circuit(cirq.GridQubit.rect(1, 4)))
Explanation: Display a cluster state circuit for a rectangle of <a href="https://cirq.readthedocs.io/en/stable/generated/cirq.GridQubit.html" class="external"><code>cirq.GridQubit</code></a>s.
End of explanation
def one_qubit_unitary(bit, symbols):
Make a Cirq circuit enacting a rotation of the bloch sphere about the X,
Y and Z axis, that depends on the values in `symbols`.
return cirq.Circuit(
cirq.X(bit)**symbols[0],
cirq.Y(bit)**symbols[1],
cirq.Z(bit)**symbols[2])
def two_qubit_unitary(bits, symbols):
Make a Cirq circuit that creates an arbitrary two qubit unitary.
circuit = cirq.Circuit()
circuit += one_qubit_unitary(bits[0], symbols[0:3])
circuit += one_qubit_unitary(bits[1], symbols[3:6])
circuit += [cirq.ZZ(*bits)**symbols[6]]
circuit += [cirq.YY(*bits)**symbols[7]]
circuit += [cirq.XX(*bits)**symbols[8]]
circuit += one_qubit_unitary(bits[0], symbols[9:12])
circuit += one_qubit_unitary(bits[1], symbols[12:])
return circuit
def two_qubit_pool(source_qubit, sink_qubit, symbols):
Make a Cirq circuit to do a parameterized 'pooling' operation, which
attempts to reduce entanglement down from two qubits to just one.
pool_circuit = cirq.Circuit()
sink_basis_selector = one_qubit_unitary(sink_qubit, symbols[0:3])
source_basis_selector = one_qubit_unitary(source_qubit, symbols[3:6])
pool_circuit.append(sink_basis_selector)
pool_circuit.append(source_basis_selector)
pool_circuit.append(cirq.CNOT(control=source_qubit, target=sink_qubit))
pool_circuit.append(sink_basis_selector**-1)
return pool_circuit
Explanation: 1.5.2 QCNN layers
Define the layers that make up the model using the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin QCNN paper</a>. There are a few prerequisites:
The one- and two-qubit parameterized unitary matrices from the <a href="https://arxiv.org/abs/quant-ph/0507171" class="external">Tucci paper</a>.
A general parameterized two-qubit pooling operation.
End of explanation
SVGCircuit(one_qubit_unitary(cirq.GridQubit(0, 0), sympy.symbols('x0:3')))
Explanation: To see what you have built, print out the one-qubit unitary circuit.
End of explanation
SVGCircuit(two_qubit_unitary(cirq.GridQubit.rect(1, 2), sympy.symbols('x0:15')))
Explanation: Next, print the two-qubit unitary circuit.
End of explanation
SVGCircuit(two_qubit_pool(*cirq.GridQubit.rect(1, 2), sympy.symbols('x0:6')))
Explanation: And the two-qubit pooling circuit.
End of explanation
def quantum_conv_circuit(bits, symbols):
Quantum Convolution Layer following the above diagram.
Return a Cirq circuit with the cascade of `two_qubit_unitary` applied
to all pairs of qubits in `bits` as in the diagram above.
circuit = cirq.Circuit()
for first, second in zip(bits[0::2], bits[1::2]):
circuit += two_qubit_unitary([first, second], symbols)
for first, second in zip(bits[1::2], bits[2::2] + [bits[0]]):
circuit += two_qubit_unitary([first, second], symbols)
return circuit
Explanation: 1.5.2.1 Quantum convolution
As in the <a href="https://arxiv.org/abs/1810.03787" class="external">Cong and Lukin</a> paper, define the 1D quantum convolution as the application of a two-qubit parameterized unitary to every pair of adjacent qubits with a stride of one.
End of explanation
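To make the pairing pattern concrete, here is the same stride-1 pairing written out on plain indices (illustration only, mirroring the zip() calls in quantum_conv_circuit above).
# Sketch: the two sweeps of neighbouring pairs for 8 qubits (indices 0..7).
bits = list(range(8))
print(list(zip(bits[0::2], bits[1::2])))              # (0,1), (2,3), (4,5), (6,7)
print(list(zip(bits[1::2], bits[2::2] + [bits[0]])))  # (1,2), (3,4), (5,6), (7,0)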
SVGCircuit(
quantum_conv_circuit(cirq.GridQubit.rect(1, 8), sympy.symbols('x0:15')))
Explanation: Display the (very horizontal) circuit.
End of explanation
def quantum_pool_circuit(source_bits, sink_bits, symbols):
A layer that specifies a quantum pooling operation.
A Quantum pool tries to learn to pool the relevant information from two
qubits onto 1.
circuit = cirq.Circuit()
for source, sink in zip(source_bits, sink_bits):
circuit += two_qubit_pool(source, sink, symbols)
return circuit
Explanation: 1.5.2.2 Quantum pooling
A quantum pooling layer pools from $N$ qubits down to $\frac{N}{2}$ qubits using the two-qubit pool defined above.
End of explanation
test_bits = cirq.GridQubit.rect(1, 8)
SVGCircuit(
quantum_pool_circuit(test_bits[:4], test_bits[4:], sympy.symbols('x0:6')))
Explanation: Examine the pooling component circuit.
End of explanation
def create_model_circuit(qubits):
Create sequence of alternating convolution and pooling operators
which gradually shrink over time.
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:63')
# Cirq uses sympy.Symbols to map learnable variables. TensorFlow Quantum
# scans incoming circuits and replaces these with TensorFlow variables.
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
model_circuit += quantum_conv_circuit(qubits[4:], symbols[21:36])
model_circuit += quantum_pool_circuit(qubits[4:6], qubits[6:],
symbols[36:42])
model_circuit += quantum_conv_circuit(qubits[6:], symbols[42:57])
model_circuit += quantum_pool_circuit([qubits[6]], [qubits[7]],
symbols[57:63])
return model_circuit
# Create our qubits and readout operators in Cirq.
cluster_state_bits = cirq.GridQubit.rect(1, 8)
readout_operators = cirq.Z(cluster_state_bits[-1])
# Build a sequential model enacting the logic in 1.3 of this notebook.
# Here you are making the static cluster state prep as a part of the AddCircuit and the
# "quantum datapoints" are coming in the form of excitation
excitation_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state = tfq.layers.AddCircuit()(
excitation_input, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model = tfq.layers.PQC(create_model_circuit(cluster_state_bits),
readout_operators)(cluster_state)
qcnn_model = tf.keras.Model(inputs=[excitation_input], outputs=[quantum_model])
# Show the keras plot of the model
tf.keras.utils.plot_model(qcnn_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
Explanation: 1.6 Model definition
Use the defined layers to construct a purely quantum CNN. Start with eight qubits, pool down to one qubit, then measure $\langle \hat{Z} \rangle$.
End of explanation
# Generate some training data.
train_excitations, train_labels, test_excitations, test_labels = generate_data(
cluster_state_bits)
# Custom accuracy metric.
@tf.function
def custom_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true)
y_pred = tf.map_fn(lambda x: 1.0 if x >= 0 else -1.0, y_pred)
return tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))
qcnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
history = qcnn_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations, test_labels))
plt.plot(history.history['loss'][1:], label='Training')
plt.plot(history.history['val_loss'][1:], label='Validation')
plt.title('Training a Quantum CNN to Detect Excited Cluster States')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: 1.7 Train the model
Train the model over the full batch to simplify this example.
End of explanation
# 1-local operators to read out
readouts = [cirq.Z(bit) for bit in cluster_state_bits[4:]]
def multi_readout_model_circuit(qubits):
Make a model circuit with less quantum pool and conv operations.
model_circuit = cirq.Circuit()
symbols = sympy.symbols('qconv0:21')
model_circuit += quantum_conv_circuit(qubits, symbols[0:15])
model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],
symbols[15:21])
return model_circuit
# Build a model enacting the logic in 2.1 of this notebook.
excitation_input_dual = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_dual = tfq.layers.AddCircuit()(
excitation_input_dual, prepend=cluster_state_circuit(cluster_state_bits))
quantum_model_dual = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_dual)
d1_dual = tf.keras.layers.Dense(8)(quantum_model_dual)
d2_dual = tf.keras.layers.Dense(1)(d1_dual)
hybrid_model = tf.keras.Model(inputs=[excitation_input_dual], outputs=[d2_dual])
# Display the model architecture
tf.keras.utils.plot_model(hybrid_model,
show_shapes=True,
show_layer_names=False,
dpi=70)
Explanation: 2. Hybrid models
You do not have to go from eight qubits down to one qubit using quantum convolutions. You could also do one or two rounds of quantum convolution and feed the results into a classical neural network. This section explores quantum-classical hybrid models.
2.1 Hybrid model with a single quantum filter
Apply a single layer of quantum convolution, reading out $\langle \hat{Z}_n \rangle$ on all bits, followed by a densely connected neural network.
<img src="./images/qcnn_5.png" width="1000">
2.1.1 Model definition
End of explanation
hybrid_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
hybrid_history = hybrid_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'], label='Hybrid CNN')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
Explanation: 2.1.2 Train the model
End of explanation
excitation_input_multi = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
cluster_state_multi = tfq.layers.AddCircuit()(
excitation_input_multi, prepend=cluster_state_circuit(cluster_state_bits))
# apply 3 different filters and measure expectation values
quantum_model_multi1 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi2 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
quantum_model_multi3 = tfq.layers.PQC(
multi_readout_model_circuit(cluster_state_bits),
readouts)(cluster_state_multi)
# concatenate outputs and feed into a small classical NN
concat_out = tf.keras.layers.concatenate(
[quantum_model_multi1, quantum_model_multi2, quantum_model_multi3])
dense_1 = tf.keras.layers.Dense(8)(concat_out)
dense_2 = tf.keras.layers.Dense(1)(dense_1)
multi_qconv_model = tf.keras.Model(inputs=[excitation_input_multi],
outputs=[dense_2])
# Display the model architecture
tf.keras.utils.plot_model(multi_qconv_model,
show_shapes=True,
show_layer_names=True,
dpi=70)
Explanation: As you can see, with very modest classical assistance, the hybrid model will usually converge faster than the purely quantum version.
2.2 Hybrid convolution with multiple quantum filters
Now try an architecture that uses multiple quantum convolutions and a classical neural network to combine their results.
<img src="./images/qcnn_6.png" width="1000">
2.2.1 Model definition
End of explanation
multi_qconv_model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
loss=tf.losses.mse,
metrics=[custom_accuracy])
multi_qconv_history = multi_qconv_model.fit(x=train_excitations,
y=train_labels,
batch_size=16,
epochs=25,
verbose=1,
validation_data=(test_excitations,
test_labels))
plt.plot(history.history['val_custom_accuracy'][:25], label='QCNN')
plt.plot(hybrid_history.history['val_custom_accuracy'][:25], label='Hybrid CNN')
plt.plot(multi_qconv_history.history['val_custom_accuracy'][:25],
label='Hybrid CNN \n Multiple Quantum Filters')
plt.title('Quantum vs Hybrid CNN performance')
plt.xlabel('Epochs')
plt.legend()
plt.ylabel('Validation Accuracy')
plt.show()
Explanation: 2.2.2 Train the model
End of explanation |
6,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal Working Example for LongHCPulse
This notebook is a minimal working example for LongHCPulse which computes the heat capacity of $\rm{Yb_2Ti_2O_7}$.
This can serve as a template for processing other data sets.
A
Step1: B
Step2: Alternatively, if you want to process and plot the data with an external program, the computed heat capacity traces can be saved to a comma-separated text file with the following command
Step3: C
Step4: END OF MINIMAL WORKING EXAMPLE
If all you want are some line plots, you can stop here.
However, LongHCPulse has many other plotting utilities which you can take advantage of. The rest of this notebook explains how to use them.
LinePlotCombine
Combines all heat capacity traces taken at each magnetic field. The user has the option of applying a smoothing factor to the data to reduce the noise from combining traces.
Step5: MeshGrid
Make a 2D plot of heat capacity vs. temperature and field
Step6: plotEntropy
This function integrates heat capacity to get entropy with $\int \frac{C}{T} dT = \Delta S$.
It does this by first combining the traces (using the same method as lineplotcombine, which the "smooth" parameter is for) and then integrating over each trace, extrapolating linearly from the lowest temperature data point to $T=0$, $C=0$.
Step7: Appendix
Step8: 2. Plotting C/T for specific fields and with both heating and cooling pulses
If one wants to plot $C$ instead of $C/T$, replace the "plotHCT" command with "plotHC" | Python Code:
# Minimal Working Example for LongHCPulse
# Allen Scheie
# import libraries
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from LongHCPulse import LongHCPulse # class to compute heat capacity
# Find Yb2Ti2O7 molar mass
mmYb = 173.201 # g/mol (from WolframAlpha.com)
mmTi = 47.867 # g/mol
mmO = 15.9994 # g/mol
MMYbTiO = (2*mmYb + 2*mmTi + 7*mmO)
# Import data
datafile = 'Yb2Ti2O7_longpulse.raw'
DRcalfile = 'DRPuck27.cal'
YbTiO = LongHCPulse(datafile=datafile,calfile=DRcalfile,
sampmass=1.04,molarmass=MMYbTiO, scaleshortpulse=1e-6/1.04e-3*MMYbTiO) ## sample mass is in mg
# Scale wire conductivity
f, ax = plt.subplots(1,1, figsize=(5,4))
plt.plot(YbTiO.avgThermCondWire[0],YbTiO.avgThermCondWire[1],
marker = 'o', label='$\\rm Yb_2Ti_2O_7$ Data File')
plt.plot(YbTiO.Kw[0],YbTiO.Kw[1], marker = '.', label='Cal. File')
plt.plot(YbTiO.Kw[0],YbTiO.Kw[1]*0.65 + 2.5e6*YbTiO.Kw[1]**2, ## <--- Play with these values until the
## curves agree
marker = '.', label='Cal. File (scaled)')
plt.xlim(0,0.66)
plt.ylim(0,10e-8)
plt.xlabel('$T$ (K)')
plt.ylabel('Wire Cond. (W/K)')
plt.legend(loc=2, frameon=False)
# Re-scale wire thermal conductivity (see cell above)
YbTiO.Kw[1] = YbTiO.Kw[1]*0.65 + 2.5e6*YbTiO.Kw[1]**2
# Compute Heat Capacity
YbTiO.heatcapacity(smoothlevel=0)
# Scale data to per Yb ion instead of per F.U.:
YbTiO.scale(2)
Explanation: Minimal Working Example for LongHCPulse
This notebook is a minimal working example for LongHCPulse which computes the heat capacity of $\rm{Yb_2Ti_2O_7}$.
This can serve as a template for processing other data sets.
A: Import and process heat capacity data
End of explanation
## Save the object with processed data to a file (uncomment the following line)
#YbTiO.saveData('Yb2Ti2O7_processedData.pickle')
## Import data again (uncomment the following line)
#YbTiO = LongHCPulse('Yb2Ti2O7_processedData.pickle')
Explanation: B: Saving Processed Data
This is very useful if the data takes a while to process: process and save it in one script, and import and plot it in another script. The saved data must be a .pickle file.
End of explanation
YbTiO.savetraces('YTO_processedData.txt', Barray=[0.0]) #Barray can be set to all, or a select number of fields.
Explanation: Alternatively, if you want to process and plot the data with an external program, the computed heat capacity traces can be saved to a comma-separated text file with the following command:
End of explanation
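The exported file can then be read back with standard tools; the lines below are only a sketch and assume a comma-separated layout (inspect the file first, since the exact column format is not spelled out here).
# Sketch: load the exported traces externally, e.g. with pandas.
import pandas as pd
traces = pd.read_csv('YTO_processedData.txt', comment='#')  # delimiter/header assumed
print(traces.head())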
YbTiO.labels=[]
YbTiO.shortpulselabels=[]
f,ax = plt.subplots(1,1)
YbTiO.lineplot(ax,'All', demag=False)
# If you want to plot a particular set of magnetic fields, change 'All' to a list of fields
# If you want to plot heating pulses as well, add the command "plotHeatPulses=True"
# Label Axes
ax.set_ylabel("$C$ $(\\rm{J\> K^{-1} mol^{-1}_{Yb}})$")
ax.set_xlabel("T (K)")
ax.set_yscale('log')
# Show legend
short_legend=ax.legend(handles=YbTiO.shortpulselabels,labelspacing = 0,handlelength=1.4,
fontsize=11,frameon=False, bbox_to_anchor=(0.97, 1.0),numpoints=1)
long_legend=ax.legend(handles=YbTiO.labels,labelspacing = 0,handlelength=1.4,fontsize=11,
frameon=False, bbox_to_anchor=(1.0, 0.92))
ax.add_artist(short_legend)
Explanation: C: Plot Commands
LinePlot
Plots each heat capacity trace (cooling curves only) individually
End of explanation
YbTiO.labels=[]
YbTiO.shortpulselabels=[]
f,ax = plt.subplots(1,1)
YbTiO.lineplotCombine(ax,'All',smooth=0, demag=False)
# If you want to plot a particular set of magnetic fields, change 'All' to a list of fields
# Label Axes
ax.set_ylabel("$C$ $(\\rm{J\> K^{-1} mol^{-1}_{Yb}})$")
ax.set_xlabel("T (K)")
ax.set_yscale('log')
# show legend
short_legend=ax.legend(handles=YbTiO.shortpulselabels,labelspacing = 0,handlelength=1.4,
fontsize=11,frameon=False, bbox_to_anchor=(0.97, 1.0),numpoints=1)
long_legend=ax.legend(handles=YbTiO.labels,labelspacing = 0,handlelength=1.4,fontsize=11,
frameon=False, bbox_to_anchor=(1.0, 0.92))
ax.add_artist(short_legend)
Explanation: END OF MINIMAL WORKING EXAMPLE
If all you want are some line plots, you can stop here.
However, LongHCPulse has many other plotting utilities which you can take advantage of. The rest of this notebook explains how to use them.
LinePlotCombine
Combines all heat capacity traces taken at each magnetic field. The user has the option of applying a smoothing factor to the data to reduce the noise from combining traces.
End of explanation
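For a subset of fields with some smoothing, the call might look like the sketch below (the field values and smoothing level are illustrative choices, not recommendations).
# Sketch: combine traces for two particular fields (in Oe) with a smoothing factor.
f2, ax2 = plt.subplots(1, 1)
YbTiO.lineplotCombine(ax2, [0.0, 1000.0], smooth=3, demag=False)  # field list assumed valid
ax2.set_xlabel("T (K)")
ax2.set_ylabel("$C$ $(\\rm{J\> K^{-1} mol^{-1}_{Yb}})$")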
f, ax = plt.subplots(1,1)
# Create 2D grid
xarray = np.arange(0.0,0.9,0.004)
intens, Bedg = YbTiO.meshgrid(Tarray =xarray,Barray='All')
# Plot 2D grid of heat capacity
meshdata = ax.pcolormesh(xarray,Bedg/10000,intens,rasterized = True,
cmap = 'rainbow', vmin=0.0, vmax=10)
# Label Axes
ax.set_ylabel('$B_{ext}$ (T)',fontsize=14,labelpad = 4)
ax.set_ylim(-100./10000,1)
ax.set_xlabel("$T$ (K)", labelpad = 1)
ax.set_xlim(0,0.7)
# Colorscale bar
cb=plt.colorbar(meshdata)
cb.set_label('$C$ $(\\rm{J\> K^{-1} mol^{-1}_{Yb}})$', rotation = -90, labelpad = 14)
Explanation: MeshGrid
Make a 2D plot of heat capacity vs. temperature and field
End of explanation
#***************
# Plot the entropy
#***************
f, ax = plt.subplots(1,1,figsize=(6,5))
YbTiO.plotEntropy(ax,'All',smooth=3)
# Label axes
ax.set_xlabel('T (K)')
ax.set_ylabel('Entropy $(\\rm{J\> K^{-1} mol^{-1}_{Yb}})$', labelpad = 3)
ax.set_ylim(-0.05,2)
ax.set_title('Entropy Recovered')
# Plot legend
f.subplots_adjust(left = 0.12,right = 0.80, bottom = 0.14, top = 0.92)
ax.legend(handles=YbTiO.entropylabels,labelspacing = 0.1,handlelength=1.4,fontsize=12,
frameon=False, bbox_to_anchor=(1.3, 1.01))
Explanation: plotEntropy
This function integrates heat capacity to get entropy with $\int \frac{C}{T} dT = \Delta S$.
It does this by first combining the traces (using the same method as lineplotcombine, which the "smooth" parameter is for) and then integrating over each trace, extrapolating linearly from the lowest temperature data point to $T=0$, $C=0$.
End of explanation
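The same integration can be sketched by hand for a single trace; the snippet below is illustrative only (toy arrays, a simple trapezoid rule, and the linear extrapolation to T=0, C=0 described above).
# Illustrative sketch of the entropy integral S(T) = int_0^T (C/T') dT'.
import numpy as np
T = np.array([0.1, 0.2, 0.3, 0.4])   # toy temperatures (K)
C = np.array([0.5, 1.2, 2.0, 1.5])   # toy heat capacity (J/K/mol)
T_ext = np.concatenate(([1e-6], T))  # extrapolate linearly down to T=0 ...
C_ext = np.concatenate(([0.0], C))   # ... where C=0
print(np.trapz(C_ext / T_ext, T_ext))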
# Plot a single heating and cooling curve
curve = 48 #try changing to curve number to see other results.
f, ax = plt.subplots(1,2, figsize=(9,4))
ax[0].plot(YbTiO.smoothedData[curve][0], YbTiO.smoothedData[curve][1],'k') # smoothed data
ax[0].plot(YbTiO.rawdata[curve][0,:,0], YbTiO.rawdata[curve][1,:,0],'r') # heating pulse
ax[0].plot(YbTiO.rawdata[curve][0,:,1], YbTiO.rawdata[curve][1,:,1],'b') # cooling pulse
ax[0].set_xlabel('t (s)')
ax[0].set_ylabel('Sample Temp (K)')
YbTiO.plotHC(ax[1],index=curve,heatingcolor='darkred',coolingcolor='darkblue')
ax[1].legend(handles=YbTiO.labels, loc=2, numpoints=1, handlelength=1.4, frameon=False)
ax[1].set_xlabel('T (K)')
ax[1].set_ylabel('$C$ $(\\rm{J\> K^{-1} mol^{-1}_{Yb}})$')
plt.tight_layout()
Explanation: Appendix: Plotting Data "by hand"
Sometimes it is useful to plot the heat capacity data without the above commands. In that case, one can use the following pieces of code:
1. Plotting data from a single heating or cooling pulse
This is useful if you wand to examine the results of a particular heating or cooling pulse to see if a feature is an artifact or is real.
End of explanation
f, ax = plt.subplots(figsize=(5,5))
YbTiO.labels=[] # If with these commands, one needs set labels=[] first.
for jj in range(len(YbTiO.Bfield)):
B = round(YbTiO.Bfield[jj],-1)
if B == 1000:
YbTiO.plotHCT(plt,index=jj,heatingcolor='darkblue',coolingcolor='blue',
shortpulsecolor='steelblue',Blabels=True, demag=False)
if B == 3000:
YbTiO.plotHCT(plt,index=jj,heatingcolor='darkred',coolingcolor='red',
shortpulsecolor='red',Blabels=True, demag=False)
if B == 4000:
YbTiO.plotHCT(plt,index=jj,heatingcolor='darkgreen',coolingcolor='limegreen',
shortpulsecolor='green',Blabels=True, demag=False)
plt.xlabel("T (K)")
plt.ylabel("C/T $(\\rm{J\> K^{-2} mol^{-1}_{Yb}})$")
plt.title('$\\rm{Yb_2Ti_2O_7}$', fontsize=18)
plt.text(0.02,0.97,'B=0.1T \n$\parallel \\langle 111 \\rangle$',horizontalalignment='left',
verticalalignment = 'top',fontsize=14, transform=ax.transAxes)
plt.legend(handles=YbTiO.labels,numpoints=1, fontsize=14, handlelength=1.4, frameon=False)
#plt.ylim(0,37)
plt.xlim(0.1,0.8)
plt.tight_layout()
Explanation: 2. Plotting C/T for specific fields and with both heating and cooling pulses
If one wants to plot $C$ instead of $C/T$, replace the "plotHCT" command with "plotHC"
End of explanation |
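For completeness, a plotHC version of one of the loops above might look like the short sketch below (it assumes plotHC accepts the same keyword arguments as plotHCT, as the note above suggests).
# Sketch: same selection loop as above, plotting C rather than C/T.
f, ax = plt.subplots(figsize=(5,5))
YbTiO.labels = []
for jj in range(len(YbTiO.Bfield)):
    if round(YbTiO.Bfield[jj], -1) == 1000:
        YbTiO.plotHC(plt, index=jj, heatingcolor='darkblue', coolingcolor='blue',
                     shortpulsecolor='steelblue', Blabels=True, demag=False)  # kwargs assumed
plt.xlabel("T (K)")
plt.ylabel("$C$ $(\\rm{J\> K^{-1} mol^{-1}_{Yb}})$")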
6,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'besm-2-7', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: INPE
Source ID: BESM-2-7
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
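# Illustrative only (editor's sketch, hypothetical value): boolean properties are
# passed unquoted, e.g.
# DOC.set_value(True)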
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
6,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with large data sets
Lazy evaluation, pure functions and higher order functions
Lazy and eager evaluation
A list comprehension is eager.
Step1: A generator expression is lazy.
Step2: You can use generators as iterators.
Step3: A generator is single use.
Step4: The list constructor forces evaluation of the generator.
Step5: An eager function.
Step6: A lazy generator.
Step7: Pure and impure functions
A pure function is like a mathematical function. Given the same inputs, it always returns the same output, and has no side effects.
Step8: An impure function has side effects.
Step9: Quiz
Say if the following functions are pure or impure.
Step10: Higher order functions
Step11: Using the operator module
The operator module provides all the Python operators as functions.
Step12: Using itertools
Step13: Generate all Boolean combinations
Step14: Using toolz
Step15: Using pipes and the curried namespace
Step16: Processing many sets of DNA strings without reading into memory | Python Code:
[x*x for x in range(3)]
Explanation: Working with large data sets
Lazy evaluation, pure functions and higher order functions
Lazy and eager evaluation
A list comprehension is eager.
End of explanation
(x*x for x in range(3))
Explanation: A generator expression is lazy.
End of explanation
g = (x*x for x in range(3))
next(g)
next(g)
next(g)
next(g)
Explanation: You can use generators as iterators.
End of explanation
for i in g:
print(i, end=", ")
g = (x*x for x in range(3))
for i in g:
print(i, end=", ")
Explanation: A generator is single use.
End of explanation
list(x*x for x in range(3))
Explanation: The list constructor forces evaluation of the generator.
End of explanation
def eager_updown(n):
xs = []
for i in range(n):
xs.append(i)
for i in range(n, -1, -1):
xs.append(i)
return xs
eager_updown(3)
Explanation: An eager function.
End of explanation
def lazy_updown(n):
for i in range(n):
yield i
for i in range(n, -1, -1):
yield i
lazy_updown(3)
list(lazy_updown(3))
Explanation: A lazy generator.
End of explanation
def pure(alist):
return [x*x for x in alist]
Explanation: Pure and impure functions
A pure function is like a mathematical function. Given the same inputs, it always returns the same output, and has no side effects.
End of explanation
def impure(alist):
for i in range(len(alist)):
alist[i] = alist[i]*alist[i]
return alist
xs = [1,2,3]
ys = pure(xs)
print(xs, ys)
ys = impure(xs)
print(xs, ys)
Explanation: An impure function has side effects.
End of explanation
import numpy as np  # used by f2 below; numpy is not imported in the cells shown here

def f1(n):
return n//2 if n % 2==0 else n*3+1
def f2(n):
return np.random.random(n)
def f3(n):
n = 23
return n
def f4(a, n=[]):
n.append(a)
return n
Explanation: Quiz
Say if the following functions are pure or impure.
End of explanation
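# One possible answer key (editor's sketch, not from the original notebook):
# f1 and f3 are pure -- the same input always gives the same output and nothing outside
# the function is touched (f3 only rebinds its local name). f2 is impure because its
# output depends on the global random state, and f4 is impure because the mutable
# default list accumulates state across calls, e.g. f4(1) -> [1], then f4(2) -> [1, 2].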
list(map(f1, range(10)))
list(filter(lambda x: x % 2 == 0, range(10)))
from functools import reduce
reduce(lambda x, y: x + y, range(10), 0)
reduce(lambda x, y: x + y, [[1,2], [3,4], [5,6]], [])
Explanation: Higher order functions
End of explanation
import operator as op
reduce(op.mul, range(1, 6), 1)
list(map(op.itemgetter(1), [[1,2,3],[4,5,6],[7,8,9]]))
Explanation: Using the operator module
The operator module provides all the Python operators as functions.
End of explanation
import itertools as it
list(it.combinations(range(1,6), 3))
Explanation: Using itertools
End of explanation
list(it.product([0,1], repeat=3))
list(it.starmap(op.add, zip(range(5), range(5))))
list(it.takewhile(lambda x: x < 3, range(10)))
data = sorted('the quick brown fox jumps over the lazy dog'.split(), key=len)
for k, g in it.groupby(data, key=len):
print(k, list(g))
Explanation: Generate all Boolean combinations
End of explanation
import toolz as tz
list(tz.partition(3, range(10)))
list(tz.partition(3, range(10), pad=None))
n = 30
dna = ''.join(np.random.choice(list('ACTG'), n))
dna
tz.frequencies(tz.sliding_window(2, dna))
Explanation: Using toolz
End of explanation
from toolz import curried as c
tz.pipe(
dna,
c.sliding_window(2), # using curry
c.frequencies,
)
composed = tz.compose(
c.frequencies,
c.sliding_window(2),
)
composed(dna)
Explanation: Using pipes and the curried namespace
End of explanation
m = 10000
n = 300
dnas = (''.join(np.random.choice(list('ACTG'), n, p=[.1, .2, .3, .4]))
for i in range(m))
dnas
tz.merge_with(sum,
tz.map(
composed,
dnas
)
)
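# For comparison, an editor's sketch of the same streaming computation in plain Python
# (kept commented out: `dnas` is a single-use generator and was just consumed above):
# from collections import Counter
# total = Counter()
# for d in dnas:
#     total.update(tz.frequencies(tz.sliding_window(2, d)))
# dict(total)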
Explanation: Processing many sets of DNA strings without reading into memory
End of explanation |
6,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recurrent Neural network example
This is a multilayer feed-forward network with a recurrent layer. It helps in observing the inner workings of backpropagation through time.
Step1: We will demonstrate the nonlinear representation capabilities of the multilayer feedforward network with the XOR problem. First, let's create a small dataset with samples from positive and negative classes.
Step2: Can we do the same with a Multilayer perceptron? | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# Module with the neural net classes
import DNN
import Solvers
Explanation: Recurrent Neural network example
This is a multilayer feed-forward network with a recurrent layer. It helps in observing the inner workings of backpropagation through time.
End of explanation
N = 100
T = np.ndarray((N, 1), buffer=np.array(range(N), dtype=np.float))
target = np.concatenate((np.cos(2*np.pi*T/N), np.sin(4*np.pi*T/N)), axis=1)
data = np.concatenate((np.cos(4*np.pi*T/N), np.sin(2*np.pi*T/N)), axis=1)
#data = np.random.normal(size=target.shape)
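# Editor's note on the arrays above: `data` pairs cos(4*pi*t/N) with sin(2*pi*t/N),
# while `target` pairs cos(2*pi*t/N) with sin(4*pi*t/N), i.e. the two channels swap
# frequencies. Since cos(4*pi*t/N) = 1 - 2*sin(2*pi*t/N)**2, a single input sample does
# not determine its target uniquely, so a memoryless point-wise mapping cannot fit the
# task exactly -- this is what the recurrent state is for.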
print target.shape
plt.plot(target[:,0], target[:,1])
# instantiate an empty network
rnn_net = DNN.Net()
# add layers to my_net in a bottom up fashion
rnn_net.addLayer(DNN.RNNLayer(n_in=2, n_out=2, n_hid=6, hid_activation='tanh', out_activation='tanh'))
# create solver object for training the feedforward network
solver_params = {'lr_rate': 0.001, \
'momentum': 0.7}
rnn_solver = DNN.SGDSolver(solver_params)
#my_solver = DNN.NAGSolver(solver_params)
#my_solver = DNN.RMSPropSolver(solver_params)
#my_solver = DNN.AdaGradSolver(solver_params)
#my_solver = Solvers.AdaDeltaSolver(solver_params)
# instantiate a NetTrainer to learn parameters of my_net using the my_solver
rnn_train_params = {'net': rnn_net, \
'loss_func': 'mse', \
'batch_size': 10, \
'max_iter': 100000, \
'train_data': data, \
'label_data': target, \
'solver': rnn_solver, \
'print_interval': 10000, \
'shuffle_data': False}
rnn_trainer = DNN.NetTrainer(rnn_train_params)
rnn_net.layers[0].keepHid(True)
rnn_trainer.train()
rnn_net.forward(data)
pred = rnn_net.Xout
## plot data point with the predicted labels
plt.plot(pred[:, 0], pred[:, 1])
plt.hold('on')
plt.plot(target[:, 0], target[:, 1], 'r')
plt.show()
Explanation: We will demonstrate the nonlinear representation capabilities of the multilayer feedforward network with the XOR problem. First, let's create a small dataset with samples from positive and negative classes.
End of explanation
# instantiate an empty network
mlp_net = DNN.Net()
# add layers to my_net in a bottom up fashion
mlp_net.addLayer(DNN.Layer(n_in=2, n_out=6, activation='tanh'))
mlp_net.addLayer(DNN.Layer(n_in=6, n_out=2, activation='tanh'))
# create solver object for training the feedforward network
mlp_solver = DNN.SGDSolver(solver_params)
#my_solver = DNN.NAGSolver(solver_params)
#my_solver = DNN.RMSPropSolver(solver_params)
#my_solver = DNN.AdaGradSolver(solver_params)
#my_solver = Solvers.AdaDeltaSolver(solver_params)
# instantiate a NetTrainer to learn parameters of my_net using the my_solver
mlp_train_params = {'net': mlp_net, \
'loss_func': 'mse', \
'batch_size': 10, \
'max_iter': 100000, \
'train_data': data, \
'label_data': target, \
'solver': rnn_solver, \
'print_interval': 10000}
mlp_trainer = DNN.NetTrainer(mlp_train_params)
mlp_trainer.train()
mlp_net.forward(data)
pred = mlp_net.Xout
## plot data point with the predicted labels
plt.plot(pred[:, 0], pred[:, 1])
plt.hold('on')
plt.plot(target[:, 0], target[:, 1], 'r')
plt.plot(data[:, 0], data[:, 1], 'g')
plt.show()
Explanation: Can we do the same with a Multilayer perceptron?
End of explanation |
6,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 4
Step1: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint
Step7: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
Step8: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step9: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
Step10: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step11: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
Step12: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]
Step13: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following | Python Code:
import graphlab
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
def polynomial_sframe(feature, degree):
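    # Editor's sketch of the Week 3 helper (assumed implementation -- replace with your
    # own copy if it differs): returns an SFrame with columns power_1 ... power_degree.
    poly_sframe = graphlab.SFrame()
    poly_sframe['power_1'] = feature
    if degree > 1:
        for power in range(2, degree + 1):
            poly_sframe['power_' + str(power)] = feature.apply(lambda x: x ** power)
    return poly_sframe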
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
sales = sales.sort(['sqft_living','price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
l2_small_penalty = 1e-5
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
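# Editor's sketch (hedged, not the graded answer): one way to fit the degree-15 model
# described above; the same pattern is reused later on set_1..set_4 and with larger
# l2_penalty values.
poly15 = polynomial_sframe(sales['sqft_living'], 15)
features15 = poly15.column_names()   # power_1 ... power_15
poly15['price'] = sales['price']     # add the target column before fitting
model15 = graphlab.linear_regression.create(poly15, target='price',
                                            features=features15,
                                            l2_penalty=l2_small_penalty,
                                            validation_set=None, verbose=False)
model15.coefficients.print_rows(num_rows=16)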
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use the .random_split function and make sure you set seed=0.
End of explanation
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to the rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the extra parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
End of explanation
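The per-subset fitting code itself is not reproduced in this excerpt, so here is a hedged sketch of the experiment described above. It assumes the polynomial_sframe() helper from Week 3 is already defined in this notebook; for the high-regularization version only the l2_penalty argument would change (1e-5 to 1e5).

```python
# Sketch only: degree-15 fit on each subset with the small L2 penalty from above.
import graphlab
import matplotlib.pyplot as plt
for name, subset in [('set_1', set_1), ('set_2', set_2), ('set_3', set_3), ('set_4', set_4)]:
    poly_data = polynomial_sframe(subset['sqft_living'], 15)   # assumed Week 3 helper
    features = poly_data.column_names()                        # power_1 ... power_15
    poly_data['price'] = subset['price']
    model = graphlab.linear_regression.create(poly_data, target='price', features=features,
                                              l2_penalty=l2_small_penalty,
                                              validation_set=None, verbose=False)
    print name
    model.get('coefficients').print_rows(num_rows=16)
    plt.plot(poly_data['power_1'], model.predict(poly_data), label=name)
plt.legend()
plt.show()
```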
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
train_valid_shuffled[0:10] # rows 0 to 9
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
print int(round(validation4['price'].mean(), 0))
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
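The slicing step itself is left to the reader above; a minimal sketch, reusing the start/end formula from the earlier cell with k=10 and i=3, would be:

```python
n = len(train_valid_shuffled)
k = 10
i = 3                          # the fourth segment
start = (n * i) / k            # integer division under Python 2
end = (n * (i + 1)) / k - 1
validation4 = train_valid_shuffled[start:end + 1]
```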
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has an append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
print int(round(train4['price'].mean(), 0))
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
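Again the extraction itself is not shown above; a one-line sketch that reuses the start/end indices of segment 3 and the append() idiom just demonstrated would be:

```python
train4 = train_valid_shuffled[0:start].append(train_valid_shuffled[end + 1:n])
```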
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
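Only the function signature is shown above; the body is left to the reader. The following is a hedged sketch of one possible implementation that follows the five steps just listed, assuming GraphLab Create is used for the fits and that data already contains the polynomial feature columns:

```python
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
    n = len(data)
    total_error = 0.0
    for i in xrange(k):
        start = (n * i) / k
        end = (n * (i + 1)) / k - 1
        validation = data[start:end + 1]
        train = data[0:start].append(data[end + 1:n])
        model = graphlab.linear_regression.create(train, target=output_name,
                                                  features=features_list,
                                                  l2_penalty=l2_penalty,
                                                  validation_set=None, verbose=False)
        residuals = model.predict(validation) - validation[output_name]
        total_error += (residuals * residuals).sum()    # validation RSS for this fold
    return total_error / k                              # average validation error
```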
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation |
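A hedged sketch of the search loop described above, reusing the k_fold_cross_validation sketch and the assumed polynomial_sframe() helper from Week 3:

```python
import numpy as np
import matplotlib.pyplot as plt
poly_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)   # build features once
features = poly_data.column_names()
poly_data['price'] = train_valid_shuffled['price']
penalties = np.logspace(1, 7, num=13)
errors = [k_fold_cross_validation(10, l2, poly_data, 'price', features) for l2 in penalties]
print 'best l2_penalty:', penalties[np.argmin(errors)]
plt.plot(penalties, errors, 'k.-')
plt.xscale('log')                     # as suggested in the comment above
plt.xlabel('l2_penalty')
plt.ylabel('average validation error')
plt.show()
```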
6,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise List - WEDER CASEMIRO DE SOUZA
The exercises are worth 30% of the final grade.
Due Date
Step1: Test the following situations
Step2: Exercise 2
(0.5 point) Create a function called qtde_caracteres that receives one parameter and returns its number of characters. If the value received is not a string, use the str() function to convert the argument.
Step3: Test the following situations
Step4: Exercicío 3
(1.5 points) Load the file called funcionarios.txt. This file contains the name and the annual salary of each employee.
alexandre,42000
rose,50000
anderson,30000
antonio,60000
maria,120000
carlos,86000
cesar,48000
Do the following exercises
Step5: b) Create a function called calcular_salario_mensal that computes how much the employee earns per month. Check the type of the dictionary's value field. Use the float() function to convert the value.
Step6: c) Finally, print each employee's name, annual salary and monthly salary in the following format
Step7: Exercise 4
(1 point) Create a program that reads the user's information (name, age and salary) through the input function. The information must be validated with the following criteria
Step8: Exercise 5
(1.5 points) Create a function that receives two parameters: the file, and the number of most frequent words to return. The return value must be a list in which each element is a (key, value) tuple, where the key is the word and the value is the number of times it appeared in the text.
For example,
palavras_frequentes('texto1.txt', 10) - searches the file texto1.txt for the 10 most frequent words.
palavras_frequentes('texto2.txt', 5) - searches the file texto2.txt for the 5 most frequent words.
<span style="color
Step9: Run the test for the following situations
Step10: Exercise 6
(2 points) Using the example given in class about the Streaming API (Challenge 3 of Class 5), collect tweets for 10 minutes with the following parameters
Step11: Create the access keys
Step12: Perform the authorization and set the access token
Step13: Create the DadosPublicosTwitter class, inheriting from tweepy.StreamListener, to run for 10 minutes and save the tweets to the file.
Step14: Create the class instance and the stream, and run the filtering with the parameters defined in the exercise statement.
Step15: Exercise 7
(3 points) With the data saved in tweets_10min.json, create a pandas DataFrame with the following columns
Step16: Use the with statement to open the 'tweets_10min.json' file and store its contents in a list
Step17: Create the DataFrame with the data and print only the first 3 tweets (rows)
Step18: Create a list in which each element is a column name
Step19: Create an auxiliary DataFrame, passing the columns as a parameter
Step20: Create the pegar_lat_long(local) function
Step21: Create the salvar_hashtags(texto) function
Step22: Make sure the auxiliary DataFrame has the 14 required columns
Step23: Finally, create the loop that iterates over each element of the original DataFrame and saves only what we want into the auxiliary DataFrame.
Remember: since we want latitude and longitude, the location has to be different from None.
Step24: Print the size of the original DataFrame and of the auxiliary one
Step25: Answer
Step26: 2) What percentage of tweets had geo_enabled turned on?
Step27: Save the data to a CSV file | Python Code:
def soma_tres_num (valor1, valor2, valor3=150):
total = valor1 + valor2 + valor3
return(total)
Explanation: Exercise List - WEDER CASEMIRO DE SOUZA
The exercises are worth 30% of the final grade.
Due date: 18/09/2016
Delivery format: .ipynb - Click File -> Download as -> IPython Notebook (.ipynb)
Send it by email by the due date; the email subject must be: Exercícios PosMBA Turma2 - YOUR NAME
The final grade is computed as: NF = (lista * 0.3) + (prova * 0.7), i.e. (exercise list * 0.3) + (exam * 0.7)
Exercise 1
(0.5 point) Create a function called soma_tres_num that receives 3 parameters (the last one with a default value of 10) and returns the sum of these three values.
End of explanation
print(soma_tres_num(0, 10))
print(soma_tres_num(1,2,3))
print(soma_tres_num(10, 10, 0))
Explanation: Test the following situations:
End of explanation
def qtde_caracteres(texto):
try:
qtd = len(texto)
except:
qtd = len(str(texto))
return (qtd)
Explanation: Exercise 2
(0.5 point) Create a function called qtde_caracteres that receives one parameter and returns its number of characters. If the value received is not a string, use the str() function to convert the argument.
End of explanation
print(qtde_caracteres(12345))
print(qtde_caracteres(', -'))
print(qtde_caracteres('python'))
print(qtde_caracteres('fia e big data'))
Explanation: Test the following situations:
End of explanation
def carregar_dados_dic(arquivo):
arq = open(arquivo,'r')
linhas = arq.readlines()
dici = {}
for linha in linhas:
linha = linha.strip()
linha = linha.split(',')
dici[linha[0]] = int(linha[1])
    arq.close()
    return (dici)
salarios = carregar_dados_dic('funcionarios.txt')
print(salarios)
Explanation: Exercise 3
(1.5 points) Load the file called funcionarios.txt. This file contains the name and the annual salary of each employee.
alexandre,42000
rose,50000
anderson,30000
antonio,60000
maria,120000
carlos,86000
cesar,48000
Do the following exercises:
a) Create a function called carregar_dados_dic that receives one parameter (in this case the funcionarios.txt file) and returns a dictionary whose keys are the names and whose values are the annual salaries.
End of explanation
def calcular_salario_mensal(salario_anual):
try:
mes = float(salarios[salario_anual]/12)
except KeyError:
mes = "Este funcionário não existe"
return (mes)
mensal = calcular_salario_mensal('carlos')
print(mensal)
Explanation: b) Create a function called calcular_salario_mensal that computes how much the employee earns per month. Check the type of the dictionary's value field. Use the float() function to convert the value.
End of explanation
for key in salarios:
print ("{} --- R$ {} --- R$ {}".format(key,salarios[key],round(calcular_salario_mensal(key),2)))
Explanation: c) Finally, print each employee's name, annual salary and monthly salary in the following format:
Rose --- R$ 50000 --- R$ 4166.66
Hint: remember .format for formatting a string.
Use the round function to round to 2 decimal places.
```python
round(7166.67555, 2)
7166.67
```
End of explanation
#Tratamento do Nome
nome = (input('Digite o Nome: '))
sucesso = 0
while (sucesso == 0):
if len(nome) > 3:
sucesso = 1
else:
nome = (input('O Nome deve ter mais que 3 dígitos, Digite novamente: '))
#Tratamento da idade
yy = (input('Digite a Idade: '))
sucesso = 0
while (sucesso == 0):
try:
newyy = int (yy)
except:
newyy = yy
if type(newyy) == int:
if 18 < newyy < 65:
sucesso = 1
else:
yy = (input('A idade deve ser entre 18 e 65 anos, Digite novamente: '))
else:
yy = (input('A idade deve ser de formato inteiro, Digite novamente: '))
#Tratamento do salário
sss = (input('Digite o Salário: '))
sucesso = 0
while (sucesso == 0):
try:
newsss = round(float (sss),2)
except:
newsss = sss
if type(newsss) == float:
if newsss > 788:
sucesso = 1
else:
sss = (input('O salário deve ser maior que R$ 788, Digite novamente: '))
else:
sss = (input('O salário deve ter valor numérico, Digite novamente: '))
print ('{} tem {} anos, recebe R$ {} e seu nome tem {} caracteres.'.format(nome,newyy,newsss,len(nome)))
Explanation: Exercise 4
(1 point) Create a program that reads the user's information (name, age and salary) through the input function. The information must be validated with the following criteria:
* The name must be longer than 3 characters
* The age must be between 18 and 65
* The salary must be greater than R$ 788
If the information does not meet the criteria above, the user must be asked to type it again. Finally the program must print the following text:
NOME tem YY anos, recebe R$ SSS e seu nome tem CC caracteres.
Where,
NOME must be replaced by the name that was typed.
YY must be the age.
SSS must be replaced by the salary value.
CC must be replaced by the number of characters.
Remember to format the salary with two decimal places.
End of explanation
def palavras_frequentes(arquivo, palavras_freq):
arq = open(arquivo,'r')
linhas = arq.readlines()
arq.close()
for linha in linhas:
linha = linha.strip()
        linha = linha.replace('  ', ' ')  # collapse double spaces before splitting
linha = linha.split(' ')
lista = []
disti = set(linha) #pega apenas as palavras distintas
for word in disti:
x = (word,linha.count(word)) #Cria a tupla com palavra e contador
lista.append (x) # coloca a tupla na lista
listasort = sorted(lista, key=lambda palavra: palavra[1], reverse = True)
topN = listasort[:palavras_freq]
return (topN)
Explanation: Exercise 5
(1.5 points) Create a function that receives two parameters: the file, and the number of most frequent words to return. The return value must be a list in which each element is a (key, value) tuple, where the key is the word and the value is the number of times it appeared in the text.
For example,
palavras_frequentes('texto1.txt', 10) - searches the file texto1.txt for the 10 most frequent words.
palavras_frequentes('texto2.txt', 5) - searches the file texto2.txt for the 5 most frequent words.
<span style="color:red">Lembre-se de tratar possíveis erros!</span>
Example of usage and output:
```python
palavras10mais = palavras_frequentes('texto1.txt', 10)
print(palavras10mais)
[('programas', 662), ('codigos', 661), ('dinheiro', 661), ('fia', 586), ('python', 491), ('data', 434), ('big', 434), ('velocidade', 133), ('Moneyball', 113), ('dados', 95)]
```
Hint: check how the sorted function works for ordering a dictionary by its values.
End of explanation
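As a point of comparison (not part of the submitted answer), the same task can be written more compactly with collections.Counter, whose most_common() method already returns the (word, count) tuples asked for:

```python
from collections import Counter

def palavras_frequentes_counter(arquivo, n):
    with open(arquivo, 'r') as arq:
        palavras = arq.read().split()    # all words in the file
    return Counter(palavras).most_common(n)
```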
palavras_frequentes('texto1.txt', 10)
palavras_frequentes('texto2.txt', 10)
palavras_frequentes('texto1.txt', 5)
palavras_frequentes('texto2.txt', 5)
palavras_frequentes('texto1.txt', 30)
Explanation: Run the test for the following situations:
End of explanation
import tweepy
from time import time
Explanation: Exercise 6
(2 points) Using the example given in class about the Streaming API (Challenge 3 of Class 5), collect tweets for 10 minutes with the following parameters:
fluxo.filter(track=['Big Data', 'Hadoop', 'Spark', 'Python', 'Data Science'], languages=['en', 'pt'])
Save the tweets to a file called tweets_10min.json.
Import the required modules
End of explanation
consumer_key = 'U8tDrebFQkzdMLIvyljut6LSA'
consumer_secret = 'rHhLLB16nroyXlyZi4GTAE5Gg8hBo4PC7mI8ebTLuIcOpdn76O'
access_token = '259907386-gyiTNCFVbosGkUinOAx2vP63o8obXhNxO57ZiDUO'
access_token_secret = 'JJ8D2oJpaNMNi7cj1n6lepT9gl08vVIVQ5eMkETpWdvpG'
Explanation: Create the access keys
End of explanation
autorizar = tweepy.OAuthHandler(consumer_key, consumer_secret)
autorizar.set_access_token(access_token, access_token_secret)
Explanation: Perform the authorization and set the access token
End of explanation
class DadosPublicosTwitter(tweepy.StreamListener):
def __init__(self, nome_arq, limite):
self.tempo_inicial = time()
self.limite = limite # 10 minutos == 600 segundos
self.salvar_arquivo = open(nome_arq, 'a', newline='')
def on_data(self, dados):
if(time() - self.tempo_inicial < self.limite):
self.salvar_arquivo.write(dados)
return True
else:
self.salvar_arquivo.close()
return False
Explanation: Create the DadosPublicosTwitter class, inheriting from tweepy.StreamListener, to run for 10 minutes and save the tweets to the file.
End of explanation
nome_arq = 'tweets_10min.json'
dados_twitter = DadosPublicosTwitter(nome_arq, 600)
fluxo = tweepy.Stream(autorizar, dados_twitter)
fluxo.filter(track=['Big Data', 'Hadoop', 'Spark', 'Python', 'Data Science'], languages=['en', 'pt'])
Explanation: Create the class instance and the stream, and run the filtering with the parameters defined in the exercise statement.
End of explanation
import simplejson as json
import pandas as pd
Explanation: Exercise 7
(3 points) With the data saved in tweets_10min.json, create a pandas DataFrame with the following columns:
text - Text of the tweet
created_at - Date the tweet was created
coordinates - Coordinates
retweet_count - Number of times the tweet was retweeted.
favorite_count - Number of times the tweet was favorited.
screen_name - Screen name (example: @prof_dinomagri)
location - Location
lang - Language
followers_count - Number of followers
geo_enabled - Whether geolocation is enabled
statuses_count - Number of tweets posted.
lat - Retrieved with the function developed in class
long - Retrieved with the function developed in class
hashtags - Retrieved with the function developed in class
Remember to use the functions developed in class to retrieve the latitude, longitude and hashtags.
At the end, save the file in CSV format, using the semicolon (;) separator and 'utf-8' encoding.
Import the required modules
End of explanation
dados = []
with open('tweets_10min.json') as arquivo:
for linha in arquivo:
dados.append(json.loads(linha))
Explanation: Use the with statement to open the 'tweets_10min.json' file and store its contents in a list
End of explanation
df = pd.DataFrame(dados)
df.head(3)
Explanation: Create the DataFrame with the data and print only the first 3 tweets (rows)
End of explanation
colunas = ['text', 'created_at', 'coordinates', 'retweet_count',
'favorite_count', 'screen_name', 'location', 'lang',
'followers_count', 'geo_enabled', 'statuses_count',
'lat', 'long', 'hashtags']
Explanation: Create a list in which each element is a column name
End of explanation
df_aux = pd.DataFrame(columns=colunas)
df_aux
Explanation: Create an auxiliary DataFrame, passing the columns as a parameter
End of explanation
from geopy.geocoders import Nominatim
def pegar_lat_long(local):
try:
geolocalizador = Nominatim()
localizacao = geolocalizador.geocode(local)
if localizacao == None:
return('','')
else:
return (localizacao.latitude,localizacao.longitude)
except:
pass
weder = pegar_lat_long(df['user'][219]['location'])
print(weder)
Explanation: Create the pegar_lat_long(local) function
End of explanation
def salvar_hashtags(texto):
aux = []
for palavra in texto.split():
if palavra.startswith('#'):
aux.append(palavra)
converter = ' '.join(aux)
return converter
Explanation: Create the salvar_hashtags(texto) function
End of explanation
len(df_aux.columns)
Explanation: Make sure the auxiliary DataFrame has the 14 required columns
End of explanation
for i in range(215, len(df)):
if df['user'][i]['location'] != None:
lat_long = pegar_lat_long(df['user'][i]['location'])
if lat_long != 0 and lat_long != None:
dados = [
df['text'][i],
df['created_at'][i],
df['coordinates'][i],
df['retweet_count'][i],
df['favorite_count'][i],
df['user'][i]['screen_name'],
df['user'][i]['location'],
df['user'][i]['lang'],
df['user'][i]['followers_count'],
df['user'][i]['geo_enabled'],
df['user'][i]['statuses_count'],
lat_long[0], #Latitude
lat_long[1], #Longitude
salvar_hashtags(df['text'][i])
]
print(i,end=" ")
series = pd.Series(dados,index=colunas)
df_aux = df_aux.append(series, ignore_index=True)
Explanation: Finally, create the loop that iterates over each element of the original DataFrame and saves only what we want into the auxiliary DataFrame.
Remember: since we want latitude and longitude, the location has to be different from None.
End of explanation
print(len(df))
print(len(df_aux))
Explanation: Print the size of the original DataFrame and of the auxiliary one
End of explanation
percentual = round((1-(len(df_aux)/len(df)))*100,2)
print ('{}% de Tweets descartados por não terem geolocalização'.format(percentual))
Explanation: Answer:
1) What percentage of tweets was discarded?
End of explanation
verdadeiro = 0
falso = 0
for i in range(0, len(df)):
if df['user'][i]['geo_enabled'] == True:
verdadeiro = verdadeiro + 1
else:
falso = falso + 1
percentual = round((verdadeiro/len(df))*100,2)
print ('{}% de Tweets com geo_enable'.format(percentual))
Explanation: 2) What percentage of tweets had geo_enabled turned on?
End of explanation
df_aux.to_csv('exercicio7_final.csv', sep=';', encoding='utf-8', index=False)
Explanation: Save the data to a CSV file
End of explanation |
6,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 36
Step2: Factorial Program
Create a program that can return the n! of a number (5! = 1 * 2 * 3 * 4 * 5 = 120).
Step4: The factorial program is returning a 0 output, which is invalid. We can use a log to find the error(s).
Build a log framework
Step6: logging.debug() function calls are a lot like print() calls, but they can provide a lot more information
Step8: This is now returning the proper value.
logging.debug() is used instead of print() because once the program is finished, every print() call that was added for debugging has to be found and removed by hand (while the program's normal print() calls have to be left alone).
Meanwhile, removing the debug() messages is as simple as changing the log level
Step10: There are 5 log levels defined in Python | Python Code:
import logging
logging.basicConfig(level=logging.DEBUG, format = '%(asctime)s - %(levelname)s - %(message)s') # Format for basic logging
Explanation: Lesson 36:
Logging
Logging records messages about a program's execution so they can be reviewed later, either on screen or in a file.
The logging module contains tools for logging in Python.
logging.basicConfig() is the setup code for logging in Python.
End of explanation
# Creating a buggy program for testing
def factorial(n):
A function for finding factorials.
total = 1
for i in range(n+1):
total *= i # Total = total multiplied by ever element i added to itself
return total
print(factorial(5))
Explanation: Factorial Program
Create a program that can return the n! of a number (5! = 1 * 2 * 3 * 4 * 5 = 120).
End of explanation
import logging
# For iPython/Jupyter, logging is handled at the root level. There are two ways to reacces logging
# Way 1: (Easy way)
import imp # Import the 'import' internals module
imp.reload(logging) # Reload the logging module for this specific instance
# Way 2: (Less Easy Way)
# Must define a logging object for this instance:
# logger = logging.getLogger() # Logging object
# logger.setLevel(logging.DEBUG) # Setting logging level to the level defined in the program
# This can be used by passing the normal logging.debug('message') as usual
logging.basicConfig(level=logging.DEBUG, format = '%(asctime)s - %(levelname)s - %(message)s') # Format for basic logging
logging.debug('Start of Program') # Start of program in log
def factorial(n):
A function for finding factorials.
logging.debug('Start of the factorial(%s)' % (n)) # Start the program and pass the argument to the log
total = 1
for i in range(n+1):
total *= i # Total = total multiplied by ever element i added to itself
logging.debug('i is the %s, total is %s' % (i, total)) # Pass the actual function arguments to the log
logging.debug('Return value is %s' % (total)) # Return the total argument to the log
return total
print(factorial(5))
logging.debug('End of Program') # End of program in log
Explanation: The factorial program is returning a 0 output, which is invalid. We can use a log to find the error(s).
Build a log framework:
End of explanation
import logging
import imp # Import the 'import' internals module
imp.reload(logging) # Reload the logging module for this specific instance
logging.basicConfig(level=logging.DEBUG, format = '%(asctime)s - %(levelname)s - %(message)s') # Format for basic logging
logging.debug('Start of Program') # Start of program in log
def factorial(n):
A function for finding factorials.
logging.debug('Start of the factorial(%s)' % (n)) # Start the program and pass the argument to the log
total = 1
for i in range(1, n+1):
total *= i # Total = total multiplied by ever element i added to itself
logging.debug('i is the %s, total is %s' % (i, total)) # Pass the actual function arguments to the log
logging.debug('Return value is %s' % (total)) # Return the total argument to the log
return total
print(factorial(5))
logging.debug('End of Program') # End of program in log
Explanation: logging.debug() function calls are a lot like print() calls, but they can provide a lot more information:
They provide a timestamp via %(asctime)s.
A log level via %(levelname)s.
The given logging.debug() message via %(message)s.
From the log, we can see that the issue is that the loop starts at 0 instead of 1: the running total is multiplied by 0 on the first iteration, which forces every subsequent product (and the final result) to 0.
Fix this by passing the range() function a start of 1:
End of explanation
import logging
import imp
imp.reload(logging)
logging.disable(logging.CRITICAL) # Suppress all log messages at CRITICAL level and below (i.e. everything)
logging.basicConfig(level=logging.DEBUG, format = '%(asctime)s - %(levelname)s - %(message)s') # Format for basic logging
logging.debug('Start of Program') # Start of program in log
def factorial(n):
A function for finding factorials.
logging.debug('Start of the factorial(%s)' % (n)) # Start the program and pass the argument to the log
total = 1
for i in range(1, n+1):
total *= i # Total = total multiplied by ever element i added to itself
logging.debug('i is the %s, total is %s' % (i, total)) # Pass the actual function arguments to the log
logging.debug('Return value is %s' % (total)) # Return the total argument to the log
return total
print(factorial(5))
logging.debug('End of Program') # End of program in log
Explanation: This is now returning the proper value.
logging.debug() is used instead of print() because once the program is finished, every print() call that was added for debugging has to be found and removed by hand (while the program's normal print() calls have to be left alone).
Meanwhile, removing the debug() messages is as simple as changing the log level:
End of explanation
import logging
import os
import imp
imp.reload(logging)
#logging.disable(logging.CRITICAL) # Switch the log level from 'DEBUG' to 'CRITICAL'; only show 'critical' level debugs
logging.basicConfig(filename = os.path.abspath('files/Factoriallog.txt'), level=logging.DEBUG, format = '%(asctime)s - %(levelname)s - %(message)s') # Format for basic logging
logging.debug('Start of Program') # Start of program in log
def factorial(n):
A function for finding factorials.
logging.debug('Start of the factorial(%s)' % (n)) # Start the program and pass the argument to the log
total = 1
for i in range(1, n+1):
total *= i # Total = total multiplied by ever element i added to itself
logging.debug('i is the %s, total is %s' % (i, total)) # Pass the actual function arguments to the log
logging.debug('Return value is %s' % (total)) # Return the total argument to the log
return total
print(factorial(5))
logging.debug('End of Program') # End of program in log
print(open(os.path.abspath('files/Factoriallog.txt'), 'r').read()) # Open and read the created log file
Explanation: There are 5 log levels defined in Python:
* Debug: The lowest level, for testing, accessed via logging.debug().
* Info: The information level, for informing the programmer, accessed via logging.info().
* Warning: The warning level, where something could go wrong, accessed via logging.warning().
* Error: The error level, where something has gone wrong, accessed via logging.error().
* Critical: The highest level, where something has gone wrong, and might stop the program, accessed via logging.critical().
logging.disable() disables that level and every element below it.
logging.disable(logging.WARNING) would therefore disable the Warning, Info and Debug levels.
logging.disable(logging.CRITICAL) would disable every level, since all of them are at or below Critical - in other words, it silences logging entirely.
To put the log outputs in a text file, you can modify the logging.basicConfig() to take a filename parameter.
Previously, the try and except statements were used to handle exceptions; you can also create and raise your own exceptions with raise.
End of explanation |
6,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem
Step1: Two candidate discretizations
Step2: Discretization_fast
Step3: Discretization_slow
Step4: Extract discrete trajectories
Step5: Cross-validation | Python Code:
# construct and simulate toy example: diffusive dynamics in a double-well potential
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
%matplotlib inline
offset = np.array([3,0])
def q(x):
''' unnormalized probability '''
return np.exp(-np.sum((x-offset)**2)) + np.exp(-np.sum((x+offset)**2))
def simulate_diffusion(x_0,q,step_size=0.01,max_steps=10000):
''' starting from x_0, simulate RW-MH '''
traj = np.zeros((max_steps+1,len(x_0)))
traj[0] = x_0
old_q = q(x_0)
for i in range(max_steps):
prop = traj[i]+npr.randn(len(x_0))*step_size
new_q = q(prop)
if new_q/old_q>npr.rand():
traj[i+1] = prop
old_q = new_q
else:
traj[i+1] = traj[i]
return traj
# collect some trajectories
npr.seed(0) # for repeatability
trajs = []
run_ids = []
for i,offset_ in enumerate([-offset,offset]): # analogous to 2 RUNs on Folding@Home
for _ in range(10): # for each RUN, collect 10 clones
trajs.append(simulate_diffusion(np.zeros(2)+offset_,q,max_steps=10000,step_size=0.1))
run_ids.append(i)
len(trajs)
# plot trajectories
r = 6
def plot_trajectories(trajs,alpha=1.0):
from matplotlib.pyplot import cm
cmap = cm.get_cmap('Spectral')
N = len(trajs)
for i,traj in enumerate(trajs):
c = cmap(float(i)/(N-1))
plt.plot(traj[:,0],traj[:,1],color=c,alpha=alpha)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Trajectories')
plt.xlim(-r,r)
plt.ylim(-r,r)
plot_trajectories(trajs)
Explanation: Problem: Given a collection of trajectories from a stochastic process, and some alternative discretizations, we would like to perform model selection using cross-validated GMRQ. What happens if our train / test trajectories are in separate metastable regions?
End of explanation
n_bins=50
Explanation: Two candidate discretizations
End of explanation
offsets = np.linspace(-r,r,n_bins)
plot_trajectories(trajs,alpha=0.3)
for offset in offsets:
plt.hlines(offset,-r,r,colors='grey')
plt.xlim(-r,r)
plt.ylim(-r,r)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Trajectories + discretization_fast')
Explanation: Discretization_fast: finely resolving a fast DOF
End of explanation
offsets = np.linspace(-r,r,n_bins)
plot_trajectories(trajs,alpha=0.3)
for offset in offsets:
plt.vlines(offset,-r,r,colors='grey')
plt.xlim(-r,r)
plt.ylim(-r,r)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Trajectories + discretization_slow')
Explanation: Discretization_slow: finely resolving a slow DOF
End of explanation
def axis_aligned_discretization(trajs,offsets,dim=0):
dtrajs = []
for traj in trajs:
ax = traj[:,dim]
bins = np.zeros((len(offsets)+1))
bins[0] = -np.inf
bins[1:] = offsets
dtraj = np.digitize(ax,bins)
dtrajs.append(dtraj)
return dtrajs
dtrajs_fast = axis_aligned_discretization(trajs,offsets,dim=1)
dtrajs_slow = axis_aligned_discretization(trajs,offsets,dim=0)
from msmbuilder.msm import MarkovStateModel
m = 6 # how to choose m beforehand?
msm = MarkovStateModel(n_timescales=m)
msm.fit(dtrajs_fast)
msm.score_
msm = MarkovStateModel(n_timescales=m)
msm.fit(dtrajs_slow)
msm.score_
Explanation: Extract discrete trajectories
End of explanation
def two_fold_cv(dtrajs,msm):
train_scores = []
test_scores = []
    split = len(dtrajs) // 2  # integer index for a 50/50 split
A = dtrajs[:split]
B = dtrajs[split:]
msm.fit(A)
train_scores.append(msm.score_)
try:
test_scores.append(msm.score(B))
except:
test_scores.append(np.nan)
msm.fit(B)
train_scores.append(msm.score_)
try:
test_scores.append(msm.score(A))
except:
test_scores.append(np.nan)
return train_scores,test_scores
len(dtrajs_fast),len(dtrajs_slow)
train_scores_fast, test_scores_fast = two_fold_cv(dtrajs_fast,msm)
train_scores_slow, test_scores_slow = two_fold_cv(dtrajs_slow,msm)
train_scores_fast, test_scores_fast
train_scores_slow, test_scores_slow
np.mean(train_scores_fast), np.mean(test_scores_fast)
np.mean(train_scores_slow), np.mean(test_scores_slow)
def leave_one_out_gmrq(dtrajs,msm):
train_scores = []
test_scores = []
for i,test in enumerate(dtrajs):
train = dtrajs[:i]+dtrajs[i+1:]
msm.fit(train)
train_scores.append(msm.score_)
try:
test_scores.append(msm.score(test))
except:
test_scores.append(np.nan)
return train_scores,test_scores
train_scores_fast, test_scores_fast = leave_one_out_gmrq(dtrajs_fast,msm)
train_scores_slow, test_scores_slow = leave_one_out_gmrq(dtrajs_slow,msm)
np.mean(train_scores_fast), np.mean(test_scores_fast)
np.mean(train_scores_slow), np.mean(test_scores_slow)
Explanation: Cross-validation
End of explanation |
6,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Snorkel
Step1: We repeat our definition of the Spouse Candidate subclass from Parts II and III.
Step2: Using a labeled development set
In our setting here, we will use the phrase "development set" to refer to a small set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the test set, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions.
In our case, we already loaded existing labels for a development set (split 1), so we can load them again now
Step3: Creating and Modeling a Noisy Training Set
Our biggest step in the data programming pipeline is the creation - and modeling - of a noisy training set. We'll approach this in three main steps
Step4: Pattern-based LFs
These LFs express some common sense text patterns which indicate that a person pair might be married. For example, LF_husband_wife looks for words in spouses between the person mentions, and LF_same_last_name checks to see if the two people have the same last name (but aren't the same whole name).
Step5: Distant Supervision LFs
In addition to writing labeling functions that describe text pattern-based heuristics for labeling training examples, we can also write labeling functions that distantly supervise examples. Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these.
Step6: For later convenience we group the labeling functions into a list.
Step8: Developing Labeling Functions
Above, we've written a bunch of labeling functions already, which should give you some sense about how to go about it. While writing them, we probably want to check to make sure that they at least work as intended before adding to our set. Suppose we're thinking about writing a simple LF
Step9: One simple thing we can do is quickly test it on our development set (or any other set), without saving it to the database. This is simple to do. For example, we can easily get every candidate that this LF labels as true
Step10: We can then easily put this into the Viewer as usual (try it out!)
Step11: 2. Applying the Labeling Functions
Next, we need to actually run the LFs over all of our training candidates, producing a set of Labels and LabelKeys (just the names of the LFs) in the database. We'll do this using the LabelAnnotator class, a UDF which we will again run with UDFRunner. Note that this will delete any existing Labels and LabelKeys for this candidate set. We start by setting up the class
Step12: Finally, we run the labeler. Note that we set a random seed for reproducibility, since some of the LFs involve random number generators. Again, this can be run in parallel, given an appropriate database like Postgres is being used
Step13: If we've already created the labels (saved in the database), we can load them in as a sparse matrix here too
Step14: Note that the returned matrix is a special subclass of the scipy.sparse.csr_matrix class, with some special features which we demonstrate below
Step15: We can also view statistics about the resulting label matrix.
Coverage is the fraction of candidates that the labeling function emits a non-zero label for.
Overlap is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a non-zero label for.
Conflict is the fraction candidates that the labeling function emits a non-zero label for and that another labeling function emits a conflicting non-zero label for.
Step16: 3. Fitting the Generative Model
Now, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.
Step17: We now apply the generative model to the training candidates to get the noise-aware training label set. We'll refer to these as the training marginals
Step18: We'll look at the distribution of the training marginals
Step19: We can view the learned accuracy parameters, and other statistics about the LFs learned by the generative model
Step20: Using the Model to Iterate on Labeling Functions
Now that we have learned the generative model, we can stop here and use this to potentially debug and/or improve our labeling function set. First, we apply the LFs to our development set
Step21: And finally, we get the score of the generative model
Step22: Interpreting Generative Model Performance
At this point, we should be getting an F1 score of around 0.4 to 0.5 on the development set, which is pretty good! However, we should be very careful in interpreting this. Since we developed our labeling functions using this development set as a guide, and our generative model is composed of these labeling functions, we expect it to score very well here!
In fact, it is probably somewhat overfit to this set. However this is fine, since in the next tutorial, we'll train a more powerful end extraction model which will generalize beyond the development set, and which we will evaluate on a blind test set (i.e. one we never looked at during development).
Doing Some Error Analysis
At this point, we might want to look at some examples in one of the error buckets. For example, one of the false negatives that we did not correctly label as true mentions. To do this, we can again just use the Viewer
Step23: We can easily see the labels that the LFs gave to this candidate using simple ORM-enabled syntax
Step24: We can also now explore some of the additional functionalities of the lf_stats method for our dev set LF labels, L_dev
Step25: Note that for labeling functions with low coverage, our learned accuracies are closer to our prior of 70% accuracy.
Saving our training labels
Finally, we'll save the training_marginals, which are our probabilistic training labels, so that we can use them in the next tutorial to train our end extraction model | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
import numpy as np
from snorkel import SnorkelSession
session = SnorkelSession()
Explanation: Intro to Snorkel: Extracting Spouse Relations from the News
Part II: Generating and modeling noisy training labels
In this part of the tutorial, we will write labeling functions which express various heuristics, patterns, and weak supervision strategies to label our data.
In most real-world settings, hand-labeled training data is prohibitively expensive and slow to collect. A common scenario, though, is to have access to tons of unlabeled training data, and have some idea of how to label it programmatically. For example:
We may be able to think of text patterns that would indicate two people mentioned in a sentence are married, such as seeing the word "spouse" between the mentions.
We may have access to an external knowledge base (KB) that lists some known pairs of married people, and can use these to heuristically label some subset of our data.
Our labeling functions will capture these types of strategies. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will model their accuracies with a generative model, which Snorkel will help us easily apply.
This will ultimately produce a single set of noise-aware training labels, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our NIPS 2016 paper.
End of explanation
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
Explanation: We repeat our definition of the Spouse Candidate subclass from Parts II and III.
End of explanation
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
Explanation: Using a labeled development set
In our setting here, we will use the phrase "development set" to refer to a small set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the test set, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions.
In our case, we already loaded existing labels for a development set (split 1), so we can load them again now:
End of explanation
import re
from snorkel.lf_helpers import (
get_left_tokens, get_right_tokens, get_between_tokens,
get_text_between, get_tagged_text,
)
Explanation: Creating and Modeling a Noisy Training Set
Our biggest step in the data programming pipeline is the creation - and modeling - of a noisy training set. We'll approach this in three main steps:
Creating labeling functions (LFs): This is where most of our development time would actually go into if this were a real application. Labeling functions encode our heuristics and weak supervision signals to generate (noisy) labels for our training candidates.
Applying the LFs: Here, we actually use them to label our candidates!
Training a generative model of our training set: Here we learn a model over our LFs, learning their respective accuracies automatically. This will allow us to combine them into a single, higher-quality label set.
We'll also add some detail on how to go about developing labeling functions and then debugging our model of them to improve performance.
1. Creating Labeling Functions
In Snorkel, our primary interface through which we provide training signal to the end extraction model we are training is by writing labeling functions (LFs) (as opposed to hand-labeling massive training sets). We'll go through some examples for our spouse extraction task below.
A labeling function is just a Python function that accepts a Candidate and returns 1 to mark the Candidate as true, -1 to mark the Candidate as false, and 0 to abstain from labeling the Candidate (note that the non-binary classification setting is covered in the advanced tutorials!).
In the next stages of the Snorkel pipeline, we'll train a model to learn the accuracies of the labeling functions and reweight them accordingly, and then use them to train a downstream model. It turns out by doing this, we can get high-quality models even with lower-quality labeling functions. So they don't need to be perfect! Now on to writing some:
End of explanation
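Before the real labeling functions below, here is a minimal toy sketch of that contract (it is not one of the tutorial's LFs): a function that takes a candidate and returns 1, -1 or 0.

```python
def LF_toy_marriage_keyword(c):
    # vote "true" if the sentence mentions 'married', otherwise abstain
    return 1 if 'married' in c.get_parent().words else 0
```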
spouses = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'}
family = {'father', 'mother', 'sister', 'brother', 'son', 'daughter',
'grandfather', 'grandmother', 'uncle', 'aunt', 'cousin'}
family = family | {f + '-in-law' for f in family}
other = {'boyfriend', 'girlfriend', 'boss', 'employee', 'secretary', 'co-worker'}
# Helper function to get last name
def last_name(s):
name_parts = s.split(' ')
return name_parts[-1] if len(name_parts) > 1 else None
def LF_husband_wife(c):
return 1 if len(spouses.intersection(get_between_tokens(c))) > 0 else 0
def LF_husband_wife_left_window(c):
if len(spouses.intersection(get_left_tokens(c[0], window=2))) > 0:
return 1
elif len(spouses.intersection(get_left_tokens(c[1], window=2))) > 0:
return 1
else:
return 0
def LF_same_last_name(c):
p1_last_name = last_name(c.person1.get_span())
p2_last_name = last_name(c.person2.get_span())
if p1_last_name and p2_last_name and p1_last_name == p2_last_name:
if c.person1.get_span() != c.person2.get_span():
return 1
return 0
def LF_no_spouse_in_sentence(c):
return -1 if np.random.rand() < 0.75 and len(spouses.intersection(c.get_parent().words)) == 0 else 0
def LF_and_married(c):
return 1 if 'and' in get_between_tokens(c) and 'married' in get_right_tokens(c) else 0
def LF_familial_relationship(c):
return -1 if len(family.intersection(get_between_tokens(c))) > 0 else 0
def LF_family_left_window(c):
if len(family.intersection(get_left_tokens(c[0], window=2))) > 0:
return -1
elif len(family.intersection(get_left_tokens(c[1], window=2))) > 0:
return -1
else:
return 0
def LF_other_relationship(c):
return -1 if len(other.intersection(get_between_tokens(c))) > 0 else 0
Explanation: Pattern-based LFs
These LFs express some common sense text patterns which indicate that a person pair might be married. For example, LF_husband_wife looks for words in spouses between the person mentions, and LF_same_last_name checks to see if the two people have the same last name (but aren't the same whole name).
End of explanation
import bz2
# Function to remove special characters from text
def strip_special(s):
return ''.join(c for c in s if ord(c) < 128)
# Read in known spouse pairs and save as set of tuples
with bz2.BZ2File('data/spouses_dbpedia.csv.bz2', 'rb') as f:
known_spouses = set(
tuple(strip_special(x.decode('utf-8')).strip().split(',')) for x in f.readlines()
)
# Last name pairs for known spouses
last_names = set([(last_name(x), last_name(y)) for x, y in known_spouses if last_name(x) and last_name(y)])
def LF_distant_supervision(c):
p1, p2 = c.person1.get_span(), c.person2.get_span()
return 1 if (p1, p2) in known_spouses or (p2, p1) in known_spouses else 0
def LF_distant_supervision_last_names(c):
p1, p2 = c.person1.get_span(), c.person2.get_span()
p1n, p2n = last_name(p1), last_name(p2)
return 1 if (p1 != p2) and ((p1n, p2n) in last_names or (p2n, p1n) in last_names) else 0
Explanation: Distant Supervision LFs
In addition to writing labeling functions that describe text pattern-based heuristics for labeling training examples, we can also write labeling functions that distantly supervise examples. Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these.
End of explanation
LFs = [
LF_distant_supervision, LF_distant_supervision_last_names,
LF_husband_wife, LF_husband_wife_left_window, LF_same_last_name,
LF_no_spouse_in_sentence, LF_and_married, LF_familial_relationship,
LF_family_left_window, LF_other_relationship
]
Explanation: For later convenience we group the labeling functions into a list.
End of explanation
def LF_wife_in_sentence(c):
A simple example of a labeling function
return 1 if 'wife' in c.get_parent().words else 0
Explanation: Developing Labeling Functions
Above, we've written a bunch of labeling functions already, which should give you some sense about how to go about it. While writing them, we probably want to check to make sure that they at least work as intended before adding to our set. Suppose we're thinking about writing a simple LF:
End of explanation
labeled = []
for c in session.query(Spouse).filter(Spouse.split == 1).all():
if LF_wife_in_sentence(c) != 0:
labeled.append(c)
print("Number labeled:", len(labeled))
Explanation: One simple thing we can do is quickly test it on our development set (or any other set), without saving it to the database. This is simple to do. For example, we can easily get every candidate that this LF labels as true:
End of explanation
from snorkel.lf_helpers import test_LF
tp, fp, tn, fn = test_LF(session, LF_wife_in_sentence, split=1, annotator_name='gold')
Explanation: We can then easily put this into the Viewer as usual (try it out!):
SentenceNgramViewer(labeled, session)
We also have a simple helper function for getting the empirical accuracy of a single LF with respect to the development set labels for example. This function also returns the evaluation buckets of the candidates (true positive, false positive, true negative, false negative):
End of explanation
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator(lfs=LFs)
Explanation: 2. Applying the Labeling Functions
Next, we need to actually run the LFs over all of our training candidates, producing a set of Labels and LabelKeys (just the names of the LFs) in the database. We'll do this using the LabelAnnotator class, a UDF which we will again run with UDFRunner. Note that this will delete any existing Labels and LabelKeys for this candidate set. We start by setting up the class:
End of explanation
np.random.seed(1701)
%time L_train = labeler.apply(split=0)
L_train
Explanation: Finally, we run the labeler. Note that we set a random seed for reproducibility, since some of the LFs involve random number generators. Again, this can be run in parallel, given an appropriate database like Postgres is being used:
End of explanation
%time L_train = labeler.load_matrix(session, split=0)
L_train
Explanation: If we've already created the labels (saved in the database), we can load them in as a sparse matrix here too:
End of explanation
L_train.get_candidate(session, 0)
L_train.get_key(session, 0)
Explanation: Note that the returned matrix is a special subclass of the scipy.sparse.csr_matrix class, with some special features which we demonstrate below:
End of explanation
L_train.lf_stats(session)
Explanation: We can also view statistics about the resulting label matrix.
Coverage is the fraction of candidates that the labeling function emits a non-zero label for.
Overlap is the fraction of candidates that the labeling function emits a non-zero label for and that another labeling function also emits a non-zero label for.
Conflict is the fraction of candidates that the labeling function emits a non-zero label for and that another labeling function emits a conflicting non-zero label for.
End of explanation
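To make those three definitions concrete, here is a hedged sketch that computes them directly from the candidates-by-LFs label matrix; it illustrates the definitions and is not Snorkel's own lf_stats implementation:

```python
import numpy as np
L = np.asarray(L_train.todense())          # candidates x labeling functions, entries in {-1, 0, 1}
nz = (L != 0)
n_cand, n_lf = L.shape
coverage = nz.mean(axis=0)
overlap, conflict = np.zeros(n_lf), np.zeros(n_lf)
for j in range(n_lf):
    others = np.delete(np.arange(n_lf), j)
    has_other = nz[:, others].any(axis=1)                       # some other LF labeled it too
    disagrees = ((L[:, others] * L[:, [j]]) < 0).any(axis=1)    # opposite non-zero signs
    overlap[j] = (nz[:, j] & has_other).mean()
    conflict[j] = (nz[:, j] & disagrees).mean()
print(coverage, overlap, conflict)
```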
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel()
gen_model.train(L_train, epochs=100, decay=0.95, step_size=0.1 / L_train.shape[0], reg_param=1e-6)
gen_model.weights.lf_accuracy
Explanation: 3. Fitting the Generative Model
Now, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.
End of explanation
train_marginals = gen_model.marginals(L_train)
Explanation: We now apply the generative model to the training candidates to get the noise-aware training label set. We'll refer to these as the training marginals:
End of explanation
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
Explanation: We'll look at the distribution of the training marginals:
End of explanation
gen_model.learned_lf_stats()
Explanation: We can view the learned accuracy parameters, and other statistics about the LFs learned by the generative model:
End of explanation
L_dev = labeler.apply_existing(split=1)
Explanation: Using the Model to Iterate on Labeling Functions
Now that we have learned the generative model, we can stop here and use this to potentially debug and/or improve our labeling function set. First, we apply the LFs to our development set:
End of explanation
tp, fp, tn, fn = gen_model.error_analysis(session, L_dev, L_gold_dev)
Explanation: And finally, we get the score of the generative model:
End of explanation
from snorkel.viewer import SentenceNgramViewer
# NOTE: This if-then statement is only to avoid opening the viewer during automated testing of this notebook
# You should ignore this!
import os
if 'CI' not in os.environ:
sv = SentenceNgramViewer(fn, session)
else:
sv = None
sv
c = sv.get_selected() if sv else list(fp.union(fn))[0]
c
Explanation: Interpreting Generative Model Performance
At this point, we should be getting an F1 score of around 0.4 to 0.5 on the development set, which is pretty good! However, we should be very careful in interpreting this. Since we developed our labeling functions using this development set as a guide, and our generative model is composed of these labeling functions, we expect it to score very well here!
In fact, it is probably somewhat overfit to this set. However this is fine, since in the next tutorial, we'll train a more powerful end extraction model which will generalize beyond the development set, and which we will evaluate on a blind test set (i.e. one we never looked at during development).
Doing Some Error Analysis
At this point, we might want to look at some examples in one of the error buckets. For example, one of the false negatives that we did not correctly label as true mentions. To do this, we can again just use the Viewer:
End of explanation
c.labels
Explanation: We can easily see the labels that the LFs gave to this candidate using simple ORM-enabled syntax:
End of explanation
L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])
Explanation: We can also now explore some of the additional functionalities of the lf_stats method for our dev set LF labels, L_dev: we can plug in the gold labels that we have, and the accuracies that our generative model has learned:
End of explanation
from snorkel.annotations import save_marginals
%time save_marginals(session, L_train, train_marginals)
Explanation: Note that for labeling functions with low coverage, our learned accuracies are closer to our prior of 70% accuracy.
Saving our training labels
Finally, we'll save the training_marginals, which are our probabilistic training labels, so that we can use them in the next tutorial to train our end extraction model:
End of explanation |
6,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to make a 3D web visualisation without a single line of code
In this notebook we use QGIS to create a shareable terrain model with a data overlay, which can be shared on a web server, without typing a single line of code. Let's go!
To inetract with the maps we make below, check this notebook out in the ipython notebook viewer
Step1: and there - your first 3D interactive map, made with no coding and using web data services!
...but it isn't very useful or informative. How can we fix that?
5. Add some more data
We might be interested in land cover attributes for our region, so let's get them! How about photosynthetic vegetation for 2015?
http
Step2: Now we have an elevation map coloured by green vegetation! But that's only a pretty picture.
6. Now lets do some analysis!
We will try to show
Step3: Now, let's add some complexity
We have some elevation data, we have some cadastral data, we have some data about photosynthetic vegetation cover. We can do our quick visualisation of whether block hilliness (as determined by SRTM height standard deviation for each block) is related to photosynthetic vegetation cover (as determined by the median of vegetation cover inside each block). We can set this up as follows | Python Code:
###ignore this block of code - it is required only to show the map in iPython - you won't need it!
from IPython.core.display import display, HTML
display(HTML('<iframe width="800" height="600" frameborder="1" scrolling ="no" src="./qgis2threejs/ACT_elevs_test_1.html"></iframe>'))
Explanation: How to make a 3D web visualisation without a single line of code
In this notebook we use QGIS to create a shareable terrain model with a data overlay, which can be shared on a web server, without typing a single line of code. Let's go!
To inetract with the maps we make below, check this notebook out in the ipython notebook viewer:
http://nbviewer.jupyter.org/github/adamsteer/nci-notebooks/blob/master/How%20to%20make%203D%20visualisations%20from%20NCI%20data%20without%20any%20coding%20%28nearly%29.ipynb
1. Decide on some datasets
Let's use Canberra. We need:
* a terrain model
* some data - what about... false colour LandSAT imagery? or NVDI? or how about vegetation types?
* some more data - how about OpenStreetMap buildings?
2. Get the topography data and import into QGIS
In this approach we download all our data from NCI to our machine (or could use it in the VDI), because we need to modify bands - which QGIS is not happy to do for web services (WMS or WCS).
For topography we need a terrain model in a raster format, e.g. GeoTIFF, which covers Canberra. We can head to the NCI Elevation collection here:
http://dapds00.nci.org.au/thredds/catalog/rr1/Elevation/catalog.html
...and look for SRTM 1 second elevation - good enough for this job. If you are happy with ESRI grids, navigate to the tile collection here:
http://dapds00.nci.org.au/thredds/catalog/rr1/Elevation/1secSRTM_DEMs_v1.0/DEM/Tiles/catalog.html
Data are organised in folders named by longitide and latitude of the south west (bottom left) corner. For Canberra we need 149, -36.
...and for now, this is probably the best method. We could make an OpenDAP or WCS request for a subset, but that would be coding! The pull of the dark side is strong - so here is a link that gets a GeoTIFF from the SRTM tile, using WCS:
http://dapds00.nci.org.au/thredds/wcs/rr1/Elevation/NetCDF/1secSRTM_DEMs_v1.0/DEM/Elevation_1secSRTM_DEMs_v1.0_DEM_Tiles_e149s36dem1_0.nc?service=WCS&version=1.0.0&request=GetCoverage&Coverage=elevation&bbox=149.0,-36,149.9,-35&format=GeoTIFF
Now, import the resulting GeoTIFF into QGIS as a raster data source:
3. Install a QGIS plugin and use it.
This is seriously not coding - just head to the "plugins" menu, click "manage and install plugins", and find the Qgis2threejs plugin. Install it:
4. Set up your first interactive map
Now zoom into the DEM so that it covers the entire display window, then head to the 'web' menu. Choose 'qgis2threejs'. In the resulting dialog box, click 'world' in the left pane, and find 'vertical exaggeration'. Set it to 10.
In 'DEM', the one active layer (your GeoTIFF) should be preselected. Click 'run', and you should get a web browser opening with a 3D model inside!
The expected output is shown below - try viewing the notebook here:
http://nbviewer.jupyter.org/github/adamsteer/nci-notebooks/blob/master/How%20to%20make%203D%20visualisations%20from%20NCI%20data%20without%20any%20coding%20%28nearly%29.ipynb
...you should be able to move the map, zoom in and out, and generally inspect a terrain model.
End of explanation
display(HTML('<iframe width="800" height="600" frameborder="1" scrolling ="no" src="./qgis2threejs/act_elevs_plus_greenveg.html"></iframe>'))
Explanation: and there - your first 3D interactive map, made with no coding and using web data services!
...but it isn't very useful or informative. How can we fix that?
5. Add some more data
We might be interested in land cover attributes for our region, so let's get them! How about photosynthetic vegetation for 2015?
http://dapds00.nci.org.au/thredds/catalog/ub8/au/FractCov/PV/catalog.html?dataset=ub8-au/FractCov/PV/FractCover.V3_0_1.2015.aust.005.PV.nc
Using WCS again - click on the WCS link, and look for a <name> tag - it says 'PV'. This is the coverage we need to get. So we form a WCS request like this:
the dataset: http://dapds00.nci.org.au/thredds/wcs/ub8/au/FractCov/PV/FractCover.V3_0_1.2015.aust.005.PV.nc
the service: service=WCS
the service version: version=1.0.0
the thing we want to do (get a coverage): request=GetCoverage
the coverage (or layer) we want to get: Coverage=PV
the boundary of the layer we want: bbox=149.0,-36,149.9,-35
the format we want to get our coverage as: format=GeoTIFF
...so we put a question mark after the dataset name, then add the rest of the labels describing the thing we want afterward, in any order, separated by ampersands:
http://dapds00.nci.org.au/thredds/wcs/ub8/au/FractCov/PV/FractCover.V3_0_1.2015.aust.005.PV.nc?service=WCS&version=1.0.0&request=GetCoverage&Coverage=PV&bbox=149.0,-36,149.9,-35&format=GeoTIFF
(woah. There was a glitch in the matrix - if we didn't write out that URL, you would have had to code just now).
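If you do eventually give in to the dark side, that same request is easy to script - here is a minimal, entirely optional sketch using the requests library (the output file name is just an example, not something the no-code workflow depends on):
python
import requests
wcs_url = ('http://dapds00.nci.org.au/thredds/wcs/ub8/au/FractCov/PV/FractCover.V3_0_1.2015.aust.005.PV.nc'
           '?service=WCS&version=1.0.0&request=GetCoverage&Coverage=PV'
           '&bbox=149.0,-36,149.9,-35&format=GeoTIFF')
with open('PV_2015_canberra.tif', 'wb') as f:  # hypothetical output name
    f.write(requests.get(wcs_url).content)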
Add the resulting GeoTIFF to QGIS as a raster data source, just like the DEM! Once you have a style you're happy with, zoom to the desired extent and use the qgis2threejs plugin to make a new map.
ACT DEM coloured by green vegetation content
Click on the map to inspect features, click and drag to move, scroll to zoom.
End of explanation
display(HTML('<iframe width="800" height="600" frameborder="1" scrolling ="no" src="./qgis2threejs/act_block_hilliness_proxy.html"></iframe>'))
Explanation: Now we have an elevation map coloured by green vegetation! But that's only a pretty picture.
6. Now let's do some analysis!
We will try to show:
Standard deviation of elevation of blocks as a proxy for hilliness, plotted as a volume on the elevation map
Sum of fractional PV cover (maybe bare ground, or tree cover?) for each block, also as a volume on the map
Interactive layer selection using three.js maps
...and no code, only button clicking.
Back to QGIS. We need another plugin - zonal statistics! We know how to install plugins, so get to it. When ready, we'll make a hilliness proxy first.
Next we will make a vegetation cover proxy
...and finally, can we come up with a metric of vegetation cover as a function of block hilliness? A question here might be 'do land owners tend to clear flatter blocks more than hilly blocks?'. Can we answer it using web services data, QGIS, some clicks - and then make a very pretty, interactive map?
On to the first question - how do we get ACT block data? Head to ACTMAPi and find ACT blocks. Here's a shortcut:
http://actmapi.actgov.opendata.arcgis.com/datasets/afa1d909a0ae427cb9c1963e0d2e80ca_4
Find the 'API' menu box, click the down arrow and then copy the URL inside the JSON option:
http://actmapi.actgov.opendata.arcgis.com/datasets/afa1d909a0ae427cb9c1963e0d2e80ca_4.geojson
In QGIS, add a new vector layer. Choose 'service' from the options, and paste the URL into the appropriate box:
Wait a while! Now, because of the way QGIS handles remote GeoJSON layers, you'll need to save the result as a shapefile before we can do anything with it. Save the layer, close the GeoJSON layer and reopen your new ACT blocks shapefile.
...it's probably simpler to just download and open the shapefile - but now we know how to add vector layers from a web service. Either way, the result is something like this:
We don't need to make the blocks pretty yet - we'll do something with them first!
Block hilliness
Using the SRTM DEM, let's make a proxy of block hilliness. Check that you have the QGIS zonal statistics plugin (Raster -> zonal statistics). If not, install it the same way you installed qgis2three.js. Once you have it, open the plugin. Choose the DEM as the raster layer, and use band 1. Then choose your ACT blocks vector layer. In the statistics to calculate, pick an appropriate set - but include standard deviation - this is our roughness proxy.
Run the plugin, then open the properties box of the ACT blocks layer and colour your blocks by standard deviation. The Zonal statistics plugin has looped through all the polygons in the blocks layer and computed descriptive statistics of the underlying DEM. And there you have it - ACT blocks coloured by the standard deviation of the elevation they contain, as a proxy for hilliness.
Now let's make a cool map! Zoom in so that you have a region you're happy with occupying your map view, and open the qgis2threejs plugin.
In the left pane, under polygon option choose your ACT blocks layer. Options for styling it appear on the right - which I'll recreate again because I crashed QGIS again at a bad time (Save, fella, save!)
Result - 3D block hilliness visualised as height of a block polygon
What we see here are cadastral blocks, with our 'hilliness proxy' displayed by colour and extruded column height. Darker blue, and taller columns are hillier! Click on the map to inspect features, click and drag to move, scroll to zoom.
End of explanation
display(HTML('<iframe width="800" height="600" frameborder="1" scrolling ="no" src="./qgis2threejs/veg_mean_colours.html"></iframe>'))
Explanation: Now, let's add some complexity
We have some elevation data, we have some cadastral data, we have some data about photosynthetic vegetation cover. We can do our quick visualisation of whether block hilliness (as determined by SRTM height standard deviation for each block) is related to photosynthetic vegetation cover (as determined by the median of vegetation cover inside each block). We can set this up as follows:
classify and colour blocks by vegetation cover
visualise block hilliness as the height of an extruded column
In this scheme, if our hypothesis is that hillier blocks are less cleared, dark green blocks will visualise as taller columns. Let's test it out!
End of explanation |
6,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Environment and RL Agent Controller for a Thermostat
Author
Step1: Goal and Reward
The goal here is to make an agent that will take actions that will keep the temperature between 0.4 and 0.6.
We make a reward function to reflect our goal. When the temperature is between 0.4 and 0.6, we set the reward as 0.0. When the temperature is outside of this band, we set the reward to be the negative distance the temperature is from its closest band. So if the temperature is 0.1, then the reward is -(0.4 - 0.1) = -0.3, and if it is 0.8, then the reward is -(0.8 - 0.6) = -0.2.
Let's chart the reward vs. temperature to show what is meant
Step7: Environment Setup
The environment responds to actions. It is what keeps track of the temperature state of the room, returns the reward for being in that temperature state, and tells you if the episode is over or not (in this case, we just set a max episode length that can happen).
Here is the gist of the flow
Step8: Agent setup
Here we configure a type of agent to learn against this environment. There are many agent configurations to choose from, which we will not cover here. We will not discuss what type of agent to choose here -- we will just take a basic agent to train.
Step9: Check
Step10: Train the agent
Here we train the agent against episodes of interacting with the environment.
Step11: Check | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
## Compute the response for a given action and current temperature
def respond(action, current_temp, tau):
return action + (current_temp - action) * math.exp(-1.0/tau)
## Actions of a series of on, then off
sAction = pd.Series(np.array([1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0]))
sResponse = np.zeros(sAction.size)
## Update the response with the response to the action
for i in range(sAction.size):
## Get last response
if i == 0:
last_response = 0
else:
last_response = sResponse[i - 1]
sResponse[i] = respond(sAction[i], last_response, 3.0)
## Assemble and plot
df = pd.DataFrame(list(zip(sAction, sResponse)), columns=['action', 'response'])
df.plot()
Explanation: Environment and RL Agent Controller for a Thermostat
Author: Matt Pettis
Github: mpettis
Twitter: @mtpettis
Date: 2020-04-27
This is a toy example of a room with a heater. When the heater is off, the temperature will decay to 0.0, and when it is on, it will rise to 1.0. The decay and rise is not instantaneous, but has exponential decay behavior in time given by the following formula:
temperature[i + 1] = heater[i] + (temperature[i] - heater[i]) * exp(-1/tau)
Where:
temperature[i] is the temperature at timestep i (between 0 and 1).
heater[i] is the applied heater, 0 when not applied, 1 when applied.
tau is the characteristic heat decay constant.
So, when the heater is off, the temperature will decay towards 0, and when the heater is on, it will rise towards 1. When the heater is toggled on/off, it will drift towards 1/0.
Here is a sample plot of what the temperature response looks like when the heater is on for a while, then off for a while. You will see the characteristic rise and decay of the temperature to the response.
End of explanation
def reward(temp):
delta = abs(temp - 0.5)
if delta < 0.1:
return 0.0
else:
return -delta + 0.1
temps = [x * 0.01 for x in range(100)]
rewards = [reward(x) for x in temps]
fig=plt.figure(figsize=(12, 4))
plt.scatter(temps, rewards)
plt.xlabel('Temperature')
plt.ylabel('Reward')
plt.title('Reward vs. Temperature')
Explanation: Goal and Reward
The goal here is to make an agent that will take actions that will keep the temperature between 0.4 and 0.6.
We make a reward function to reflect our goal. When the temperature is between 0.4 and 0.6, we set the reward as 0.0. When the temperature is outside of this band, we set the reward to be the negative distance the temperature is from its closest band. So if the temperature is 0.1, then the reward is -(0.4 - 0.1) = -0.3, and if it is 0.8, then the reward is -(0.8 - 0.6) = -0.2.
Let's chart the reward vs. temperature to show what is meant:
End of explanation
###-----------------------------------------------------------------------------
## Imports
from tensorforce.environments import Environment
from tensorforce.agents import Agent
###-----------------------------------------------------------------------------
### Environment definition
class ThermostatEnvironment(Environment):
    """This class defines a simple thermostat environment. It is a room with
    a heater, and when the heater is on, the room temperature will approach
    the max heater temperature (usually 1.0), and when off, the room will
    decay to a temperature of 0.0. The exponential constant that determines
    how fast it approaches these temperatures over timesteps is tau.
    """
def __init__(self):
## Some initializations. Will eventually parameterize this in the constructor.
self.tau = 3.0
self.current_temp = np.random.random(size=(1,))
super().__init__()
def states(self):
return dict(type='float', shape=(1,), min_value=0.0, max_value=1.0)
def actions(self):
        """Action 0 means no heater, temperature approaches 0.0. Action 1 means
        the heater is on and the room temperature approaches 1.0.
        """
return dict(type='int', num_values=2)
# Optional, should only be defined if environment has a natural maximum
# episode length
def max_episode_timesteps(self):
return super().max_episode_timesteps()
# Optional
def close(self):
super().close()
def reset(self):
        """Reset state."""
# state = np.random.random(size=(1,))
self.timestep = 0
self.current_temp = np.random.random(size=(1,))
return self.current_temp
def response(self, action):
        """Respond to an action. When the action is 1, the temperature
        exponentially approaches 1.0. When the action is 0,
        the current temperature decays towards 0.0.
        """
return action + (self.current_temp - action) * math.exp(-1.0 / self.tau)
def reward_compute(self):
        """The reward here is 0 if the current temp is between 0.4 and 0.6,
        else it is the negative distance the temp is away from the 0.4 or 0.6 boundary.
        Return the value within the numpy array, not the numpy array.
        """
delta = abs(self.current_temp - 0.5)
if delta < 0.1:
return 0.0
else:
return -delta[0] + 0.1
def execute(self, actions):
## Check the action is either 0 or 1 -- heater on or off.
assert actions == 0 or actions == 1
## Increment timestamp
self.timestep += 1
## Update the current_temp
self.current_temp = self.response(actions)
## Compute the reward
reward = self.reward_compute()
## The only way to go terminal is to exceed max_episode_timestamp.
## terminal == False means episode is not done
## terminal == True means it is done.
terminal = False
return self.current_temp, terminal, reward
###-----------------------------------------------------------------------------
### Create the environment
### - Tell it the environment class
### - Set the max timestamps that can happen per episode
environment = Environment.create(
environment=ThermostatEnvironment,
max_episode_timesteps=100)
Explanation: Environment Setup
The environment responds to actions. It is what keeps track of the temperature state of the room, returns the reward for being in that temperature state, and tells you if the episode is over or not (in this case, we just set a max episode length that can happen).
Here is the gist of the flow:
Create an environment by calling Environment.create(), see below, telling it to use the class you created for this (here, the ThermostatEnvironment) and the max timesteps per episode. The environment is assigned to the name environment.
Initialize the environment environment by calling environment.reset(). This will do stuff, most importantly, it will initialize the timestep attribute to 0.
When you want to take an action on the current state of the environment, you will call environment.execute(<action-value>). If you want to have the heater off, you call environment.execute(0), and if you want to have the heater on, you call environment.execute(1).
What the execute() call returns is a tuple with 3 entries:
state. In this case, the state is the current temperature that results from taking the action. If you turn on the heater, the temperature will rise from the previous state, and if the heater was turned off, the temperature will fall from the previous state. This should be kept as a numpy array, even though it seems like overkill with a single value for the state coming back. For more complex examples beyond this thermostat, there will be more than 1 component to the state.
terminal. This is a True/False value. It is True if the episode terminated. In this case, that will happen once you exceed the max number of steps you have set. Otherwise, it will be False, which lets the agent know that it can take further steps.
reward. This is the reward for taking the action you took.
Below, to train the agent, you will have the agent take actions on the environment, and the environment will return these signals so that the agent can self-train to optimize its reward.
End of explanation
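## Optional sanity check (not in the original notebook): poke the environment by hand
## before wiring up an agent -- reset it, then push one "heater on" action through execute().
states = environment.reset()
next_temp, terminal, reward = environment.execute(actions=1)
print(next_temp, terminal, reward)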
agent = Agent.create(
agent='tensorforce', environment=environment, update=64,
optimizer=dict(optimizer='adam', learning_rate=1e-3),
objective='policy_gradient', reward_estimation=dict(horizon=1)
)
Explanation: Agent setup
Here we configure a type of agent to learn against this environment. There are many agent configurations to choose from; we won't discuss how to choose among them here -- we will just take a basic agent to train.
End of explanation
### Initialize
environment.reset()
## Creation of the environment via Environment.create() creates
## a wrapper class around the original Environment defined here.
## That wrapper mainly keeps track of the number of timesteps.
## In order to alter the attributes of your instance of the original
## class, like to set the initial temp to a custom value, like here,
## you need to access the `environment` member of this wrapped class.
## That is why you see the way to set the current_temp like below.
environment.current_temp = np.array([0.5])
states = environment.current_temp
internals = agent.initial_internals()
terminal = False
### Run an episode
temp = [environment.current_temp[0]]
while not terminal:
actions, internals = agent.act(states=states, internals=internals, independent=True)
states, terminal, reward = environment.execute(actions=actions)
temp += [states[0]]
### Plot the run
plt.figure(figsize=(12, 4))
ax=plt.subplot()
ax.set_ylim([0.0, 1.0])
plt.plot(range(len(temp)), temp)
plt.hlines(y=0.4, xmin=0, xmax=99, color='r')
plt.hlines(y=0.6, xmin=0, xmax=99, color='r')
plt.xlabel('Timestep')
plt.ylabel('Temperature')
plt.title('Temperature vs. Timestep')
plt.show()
Explanation: Check: Untrained Agent Performance
Let's see how the untrained agent performs on the environment. The red horizontal lines are the target bands for the temperature.
The agent doesn't yet take actions that try to get the temperature within the bands. Its randomly initialized policy typically just leaves the heater always off or always on.
End of explanation
# Train for 200 episodes
for _ in range(200):
states = environment.reset()
terminal = False
while not terminal:
actions = agent.act(states=states)
states, terminal, reward = environment.execute(actions=actions)
agent.observe(terminal=terminal, reward=reward)
Explanation: Train the agent
Here we train the agent against episodes of interacting with the environment.
End of explanation
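## Optional sketch (not in the original): to watch learning progress you can accumulate
## the rewards of an episode while training, e.g. for one extra training episode:
episode_return = 0.0
states = environment.reset()
terminal = False
while not terminal:
    actions = agent.act(states=states)
    states, terminal, reward = environment.execute(actions=actions)
    agent.observe(terminal=terminal, reward=reward)
    episode_return += reward
print('return for this episode:', episode_return)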
### Initialize
environment.reset()
## Creation of the environment via Environment.create() creates
## a wrapper class around the original Environment defined here.
## That wrapper mainly keeps track of the number of timesteps.
## In order to alter the attributes of your instance of the original
## class, like to set the initial temp to a custom value, like here,
## you need to access the `environment` member of this wrapped class.
## That is why you see the way to set the current_temp like below.
environment.current_temp = np.array([1.0])
states = environment.current_temp
internals = agent.initial_internals()
terminal = False
### Run an episode
temp = [environment.current_temp[0]]
while not terminal:
actions, internals = agent.act(states=states, internals=internals, independent=True)
states, terminal, reward = environment.execute(actions=actions)
temp += [states[0]]
### Plot the run
plt.figure(figsize=(12, 4))
ax=plt.subplot()
ax.set_ylim([0.0, 1.0])
plt.plot(range(len(temp)), temp)
plt.hlines(y=0.4, xmin=0, xmax=99, color='r')
plt.hlines(y=0.6, xmin=0, xmax=99, color='r')
plt.xlabel('Timestep')
plt.ylabel('Temperature')
plt.title('Temperature vs. Timestep')
plt.show()
Explanation: Check: Trained Agent Performance
You can plainly see that this is toggling the heater on/off to keep the temperature within the target band!
End of explanation |
6,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo - Poisson equation 2D
Solve Poisson's equation in 2D with homogeneous Dirichlet bcs in one direction and periodicity in the other.
$$
\begin{align}
\nabla^2 u(x, y) &= f(x, y), \quad \forall \, (x, y) \in [-1, 1] \times [0, 2\pi]\
u(\pm 1, y) &= 0 \
u(x, 2\pi) &= u(x, 0)
\end{align}
$$
where $u(x, y)$ is the solution and $f(x, y)$ is some right hand side function.
Use either Chebyshev basis $P={T_k(x)}{k=0}^{N_0-1}$ or Legendre $P={L_k(x)}{k=0}^{N_0-1}$ and define Shen's composite Dirichlet basis as
$$
V^{N_0}(x) = {P_k(x) - P_{k+2}(x)\, | \, k=0, 1, \ldots, N_0-3}.
$$
For the periodic direction use Fourier exponentials
$$
V^{N_1}(y) = {\exp(i l y)\, | \, l=-N_1/2, -N_1/2+1, \ldots, N_1/2-1}.
$$
And then define tensor product space as an outer product of these spaces
$$
V^N(x, y) = V^{N_0}(x) \times V^{N_1}(y).
$$
We get the test function
$$
\phi_{kl}(x, y) = (P_k(x) - P_{k+2}(x))\exp(i l y),
$$
and define for simplicity
$$
\begin{align}
v(x, y) &= \phi_{kl}(x, y), \
u(x, y) &= \sum_{k=0}^{N_0-3}\sum_{l=-N_1/2}^{N_1/2-1} \hat{u}{kl} \phi{kl}(x, y),
\end{align}
$$
where $u(x, y)$ is the trial function.
The weighted inner product is defined almost exactly like in 1D, however, we now have to take into account that the solution is complex valued. The inner product is now
$$
(u, v)w = \int{-1}^{1}\int_{0}^{2\pi} u v^* w dxdy,
$$
where $v^*$ is the complex conjugate of $v$. Furthermore, we use the constant weight $w(x, y)=1/(2\pi)$ for Legendre/Fourier and get
Find $u \in V^N$ such that
$$ (\nabla u, \nabla v)_w = -(f, v)_w, \quad \forall \, v \in V^N.$$
For Chebyshev the weight is $1/\sqrt{1-x^2}/(2\pi)$ and we do not perform integration by parts
Step1: TPMatrix is a tensor product matrix. It is the outer product of two smaller matrices. Consider the inner product
Step2: The first item of the A[0].mats list is the $a_{km}$ matrix and the second is the identity matrix.
Now create a manufactured solution to test the implementation.
Step3: Assemble right hand side
Step4: Solve system of equations by fetching an efficient Helmholtz solver | Python Code:
from shenfun import *
import matplotlib.pyplot as plt
N = (16, 12)
BX = FunctionSpace(N[0], 'L', bc=(0, 0))
BY = FunctionSpace(N[1], 'F')
V = TensorProductSpace(comm, (BX, BY))
v = TestFunction(V)
u = TrialFunction(V)
A = inner(grad(u), grad(v))
print(A)
Explanation: Demo - Poisson equation 2D
Solve Poisson's equation in 2D with homogeneous Dirichlet bcs in one direction and periodicity in the other.
$$
\begin{align}
\nabla^2 u(x, y) &= f(x, y), \quad \forall \, (x, y) \in [-1, 1] \times [0, 2\pi]\\
u(\pm 1, y) &= 0 \\
u(x, 2\pi) &= u(x, 0)
\end{align}
$$
where $u(x, y)$ is the solution and $f(x, y)$ is some right hand side function.
Use either Chebyshev basis $P=\{T_k(x)\}_{k=0}^{N_0-1}$ or Legendre $P=\{L_k(x)\}_{k=0}^{N_0-1}$ and define Shen's composite Dirichlet basis as
$$
V^{N_0}(x) = \{P_k(x) - P_{k+2}(x)\, | \, k=0, 1, \ldots, N_0-3\}.
$$
For the periodic direction use Fourier exponentials
$$
V^{N_1}(y) = \{\exp(i l y)\, | \, l=-N_1/2, -N_1/2+1, \ldots, N_1/2-1\}.
$$
And then define tensor product space as an outer product of these spaces
$$
V^N(x, y) = V^{N_0}(x) \times V^{N_1}(y).
$$
We get the test function
$$
\phi_{kl}(x, y) = (P_k(x) - P_{k+2}(x))\exp(i l y),
$$
and define for simplicity
$$
\begin{align}
v(x, y) &= \phi_{kl}(x, y), \\
u(x, y) &= \sum_{k=0}^{N_0-3}\sum_{l=-N_1/2}^{N_1/2-1} \hat{u}_{kl} \phi_{kl}(x, y),
\end{align}
$$
where $u(x, y)$ is the trial function.
The weighted inner product is defined almost exactly like in 1D, however, we now have to take into account that the solution is complex valued. The inner product is now
$$
(u, v)_w = \int_{-1}^{1}\int_{0}^{2\pi} u \, v^* \, w \, dxdy,
$$
where $v^*$ is the complex conjugate of $v$. Furthermore, we use the constant weight $w(x, y)=1/(2\pi)$ for Legendre/Fourier and get
Find $u \in V^N$ such that
$$ (\nabla u, \nabla v)_w = -(f, v)_w, \quad \forall \, v \in V^N.$$
For Chebyshev the weight is $1/\sqrt{1-x^2}/(2\pi)$ and we do not perform integration by parts:
Find $u \in V^N$ such that
$$ (\nabla^2 u, v)_w = (f, v)_w, \quad \forall \, v \in V^N.$$
Implementation using shenfun
End of explanation
print(A[0].mats)
Explanation: TPMatrix is a tensor product matrix. It is the outer product of two smaller matrices. Consider the inner product:
$$
\begin{align}
(\nabla u, \nabla v) &= \frac{1}{2\pi}\int_{-1}^{1}\int_{0}^{2\pi} \left(\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right) \cdot \left(\frac{\partial v^*}{\partial x}, \frac{\partial v^*}{\partial y}\right) {dxdy} \\
(\nabla u, \nabla v) &= \frac{1}{2\pi} \int_{-1}^1 \int_{0}^{2\pi} \left( \frac{\partial u}{\partial x}\frac{\partial v^*}{\partial x} + \frac{\partial u}{\partial y}\frac{\partial v^*}{\partial y} \right) {dxdy} \\
(\nabla u, \nabla v) &= \frac{1}{2\pi}\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial x}\frac{\partial v^*}{\partial x} {dxdy} + \frac{1}{2\pi}\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial y}\frac{\partial v^*}{\partial y} {dxdy}
\end{align}
$$
which is also a sum of two terms. These two terms are the two TPMatrixes returned by inner above.
Now each one of these two terms can be written as the outer product of two smaller matrices. Consider the first:
$$
\begin{align}
\frac{1}{2\pi}\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial x}\frac{\partial v^*}{\partial x} {dxdy} &= \frac{1}{2\pi}\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial \sum_{m}\sum_{n} \hat{u}_{mn} \phi_{mn}}{\partial x}\frac{\partial \phi_{kl}^*}{\partial x}{dxdy} \\
&= \sum_{m}\sum_{n} \hat{u}_{mn} \frac{1}{2\pi} \int_{-1}^1 \int_{0}^{2\pi} \frac{\partial (P_m(x)-P_{m+2}(x))\exp(iny)}{\partial x}\frac{\partial (P_k(x)-P_{k+2}(x))\exp(-ily)}{\partial x} {dxdy} \\
&= \sum_{m}\sum_{n} \hat{u}_{mn} \frac{1}{2\pi} \int_{-1}^1 \int_{0}^{2\pi} \frac{\partial (P_m(x)-P_{m+2}(x))}{\partial x}\frac{\partial (P_k(x)-P_{k+2}(x))}{\partial x} \exp(iny) \exp(-ily) {dxdy} \\
&= \sum_{m}\sum_{n} \hat{u}_{mn} \underbrace{\int_{-1}^1 \frac{\partial (P_m(x)-P_{m+2}(x))}{\partial x}\frac{\partial (P_k(x)-P_{k+2}(x))}{\partial x} {dx}}_{a_{km}} \underbrace{\frac{1}{2\pi}\int_{0}^{2\pi} \exp(iny) \exp(-ily) {dy}}_{\delta_{ln}} \\
&= a_{km} \delta_{ln} \hat{u}_{mn} \\
&= a_{km} \hat{u}_{ml}
\end{align}
$$
End of explanation
import sympy as sp
x, y = sp.symbols('x,y')
ue = (sp.cos(4*x) + sp.sin(2*y))*(1 - x**2)
fe = ue.diff(x, 2) + ue.diff(y, 2)
fl = sp.lambdify((x, y), fe, 'numpy')
fj = Array(V, buffer=fl(*V.mesh()))
Explanation: The first item of the A[0].mats list is the $a_{km}$ matrix and the second is the identity matrix.
Now create a manufactured solution to test the implementation.
End of explanation
f_tilde = Function(V)
f_tilde = inner(v, -fj, output_array=f_tilde)
Explanation: Assemble right hand side
End of explanation
u_hat = Function(V)
solver = legendre.la.Helmholtz(*A)
u_hat = solver(u_hat, f_tilde)
X = V.local_mesh(True)
plt.contourf(X[0], X[1], u_hat.backward(), 100)
plt.colorbar()
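# Optional check (not part of the original demo): compare the computed solution with the
# manufactured exact solution ue on the collocation mesh.
uj = u_hat.backward()
ua = Array(V, buffer=sp.lambdify((x, y), ue, 'numpy')(*V.mesh()))
print('Max pointwise error:', abs(uj - ua).max())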
Explanation: Solve system of equations by fetching an efficient Helmholtz solver
End of explanation |
6,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing patient data
Words are useful, but what’s more useful are the sentences and stories we build with them.
A lot of powerful tools are built into languages like Python, even more live in the libraries they are used to build
We need to import a library called NumPy
Use this library to do fancy things with numbers (e.g. if you have matrices or arrays).
Step1: Importing a library akin to getting lab equipment out of a locker and setting up on bench
Libraries provide additional functionality
With NumPy loaded we can read the CSV into python.
Step2: numpy.loadtex() is a function call, runs loadtxt in numpy
uses dot notation to access thing.component
two parameters
Step3: print above shows several things at once by separating with commas
variable as putting sticky note on value
means assigning a value to one variable does not change the value of other variables.
Step4: whos #ipython command to see what variables & mods you have
Step5: What does the following program print out?
python
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
Step6: data refers to N-dimensional array
data corres. to patients' inflammation
let's look at the shape of the data
Step7: data has 60 rows and 40 columns
when we created data with numpy it also creates members or attributes
extra info describes data like adjective does a noun
dot notation to access members
Step8: programming languages like MATLAB and R start counting at 1
languages in C family (C++, Java, Perl & python)
we have MxN array in python, indices go from 0 to M-1 on the first axis and 0 to N-1 on second
indices are (row, column)
Step9: slice 0
Step10: dont' have to include uper and lower bound
python uses 0 by default if we don't include lower
no upper slice runs to the axis
Step11: A section of an array is called a slice. We can take slices of character strings as well
Step12: operation on arrays is done on each individual element of the array
Step13: we can also do arithmetic operation with another array of same shape (same dims)
Step14: we can do more than simple arithmetic
let's take average inflammation for patients
Step15: mean is a method of the array (function)
variables are nouns, methods are verbs - they are what the thing knows how to do
for mean we need empty () parense even if we aren't passing in parameters to tell python to go do something
data.shape doesn't need () because it's just a description
NumPy arrays have lots of useful methods
Step16: however, we are usually more interested in partial stats, e.g. max value per patient or the avg value per day
we can create a new subset array of the data we want
Step17: let's visualize this data with matplotlib library
first we import the plyplot module from matplotlib
Step18: nice, but ipython/jupyter proved us with 'magic' functions and one lets us display our plot inline
% indicates an ipython magic function
what if we need max inflammation for all patients, or the average for each day?
most array methods let us specify the axis we want to work on
Step19: now let's look at avg inflammation over days (columns)
Step20: avg per day across all patients in the var day_avg_plot
matplotlib create and display a line graph of those values | Python Code:
import numpy
Explanation: Analyzing patient data
Words are useful, but what’s more useful are the sentences and stories we build with them.
A lot of powerful tools are built into languages like Python, even more live in the libraries they are used to build
We need to import a library called NumPy
Use this library to do fancy things with numbers (e.g. if you have matrices or arrays).
End of explanation
#assuming the data file is in the data/ folder
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
Explanation: Importing a library is akin to getting lab equipment out of a locker and setting it up on the bench
Libraries provide additional functionality
With NumPy loaded we can read the CSV into python.
End of explanation
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(data)
weight_kg = 55 #assigns value 55 to weight_kg
print(weight_kg) #we can print to the screen
print("weight in kg", weight_kg)
weight_kg = 70
print("weight in kg", weight_kg)
Explanation: numpy.loadtxt() is a function call; it runs loadtxt from numpy
uses dot notation to access thing.component
two parameters: filename and delimiter - both character strings
we didn't save in memory using a variable
variables in python must start with letter & are case sensitive
assignment operator is =
let's look at assigning this inflammation data to a variable
End of explanation
weight_kg * 2
weight_lb = weight_kg * 2.2
print('weight in lb:', weight_lb)
print("weight in lb:", weight_kg*2.2)
print(data)
Explanation: print above shows several things at once by separating with commas
variable as putting sticky note on value
means assigning a value to one variable does not change the value of other variables.
End of explanation
whos
Explanation: whos #ipython command to see what variables & mods you have
End of explanation
print(data)
print(type(data)) #we can get type of object
Explanation: What does the following program print out?
python
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
End of explanation
print(data.shape)
Explanation: data refers to N-dimensional array
data corresponds to patients' inflammation
let's look at the shape of the data
End of explanation
print('first value in data', data[0,0]) #use index in square brackets
print('4th value in data', data[0,3]) #use index in square brackets
print('first value in 3rd row data', data[3,0]) #use index in square brackets
!head -3 data/inflammation-01.csv
print('middle value in data', data[30,20]) # get the middle value - notice here i didn't use print
Explanation: data has 60 rows and 40 columns
when we created data with numpy it also creates members or attributes
extra info describes data like adjective does a noun
dot notation to access members
End of explanation
data[0:4, 0:10] #select whole sections of matrix, 1st 10 days & 4 patients
Explanation: programming languages like MATLAB and R start counting at 1
languages in C family (C++, Java, Perl & python)
we have MxN array in python, indices go from 0 to M-1 on the first axis and 0 to N-1 on second
indices are (row, column)
End of explanation
data[5:10,0:10]
Explanation: slice 0:4 means start at 0 and go up to but not include 4
up-to-but-not-including takes a bit of getting used to
End of explanation
data[:3, 36:]
Explanation: don't have to include upper and lower bound
python uses 0 by default if we don't include lower
if we don't include the upper bound, the slice runs to the end of the axis
: will include everything
End of explanation
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print(element[:4])
print(element[4:])
print(element[:])
#oxygen
print(element[-1])
print(element[-2])
print(element[2:-1])
doubledata = data * 2.0 #we can perform math on array
Explanation: A section of an array is called a slice. We can take slices of character strings as well:
python
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
first three characters: oxy
last three characters: gen
What is the value of element[:4]? What about element[4:]? Or element[:]?
What is element[-1]? What is element[-2]? Given those answers, explain what element[1:-1] does.
End of explanation
doubledata
data[:3, 36:]
doubledata[:3, 36:]
Explanation: operation on arrays is done on each individual element of the array
End of explanation
tripledata = doubledata + data
print('tripledata:')
print(tripledata[:3, 36:])
Explanation: we can also do arithmetic operation with another array of same shape (same dims)
End of explanation
print(data.mean())
Explanation: we can do more than simple arithmetic
let's take average inflammation for patients
End of explanation
print('maximum inflammation: ', data.max())
print('minimum inflammation: ', data.min())
print('standard deviation:', data.std())
Explanation: mean is a method of the array (function)
variables are nouns, methods are verbs - they are what the thing knows how to do
for mean we need empty () parentheses even if we aren't passing in parameters, to tell python to go do something
data.shape doesn't need () because it's just a description
NumPy arrays have lots of useful methods:
End of explanation
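# One more of those useful methods, just to show the pattern (optional extra):
print('sum of all inflammation readings:', data.sum())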
%matplotlib inline
import matplotlib.pyplot as plt
data
Explanation: however, we are usually more interested in partial stats, e.g. max value per patient or the avg value per day
we can create a new subset array of the data we want
End of explanation
plt.imshow(data)
image = plt.imshow(data)
plt.savefig('timsheatmap.png')
Explanation: let's visualize this data with matplotlib library
first we import the pyplot module from matplotlib
End of explanation
avg_inflam = data.mean(axis=0) #axis zero is by each day
print(data.mean(axis=0))
print(data.mean(axis=0).shape) #Nx1 vector of averages
print(data.mean(axis=1)) #avg inflam per patient across all days
print(data.mean(axis=1).shape)
Explanation: nice, but ipython/jupyter provides us with 'magic' functions and one lets us display our plot inline
% indicates an ipython magic function
what if we need max inflammation for all patients, or the average for each day?
most array methods let us specify the axis we want to work on
End of explanation
print(avg_inflam)
day_avg_plot = plt.plot(avg_inflam)
Explanation: now let's look at avg inflammation over days (columns)
End of explanation
data.mean(axis=0).shape
data.shape
data.mean(axis=1).shape
max_plot = plt.plot(data.max(axis=0))
Explanation: we plotted the average per day across all patients, storing the plot in the variable day_avg_plot
matplotlib creates and displays a line graph of those values
End of explanation |
6,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Service Creation
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Look for new services being created in your environment and stack the values of it
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Service Creation
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/13 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be creating new services to execute code on a compromised endpoint in my environment
Technical Context
None
Offensive Tradecraft
Adversaries may execute a binary, command, or script via a method that interacts with Windows services, such as the Service Control Manager.
This can be done by adversaries creating a new service.
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/lateral_movement/SDWIN-190518210652.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, ServiceName, ServiceType, ServiceStartType, ServiceAccount
FROM sdTable
WHERE LOWER(Channel) = "security" AND EventID = 4697
'''
)
df.show(10,False)
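# Optional extension of the query above: "stack" (count) the most common service names
# so that rare, unusual services stand out.
df.groupBy('ServiceName').count().sort('count', ascending=False).show(10, False)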
Explanation: Analytic I
Look for new services being created in your environment and stack the values of it
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Service | Microsoft-Windows-Security-Auditing | User created Service | 4697 |
End of explanation |
6,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting censored data
Experimental measurements are sometimes censored such that we only know partial information about a particular data point. For example, in measuring the lifespan of mice, a portion of them might live through the duration of the study, in which case we only know the lower bound.
One of the ways we can deal with this is to use Maximum Likelihood Estimation (MLE). However, censoring often make analytical solutions difficult even for well known distributions.
We can overcome this challenge by converting the MLE into a convex optimization problem and solving it using CVXPY.
This example is adapted from a homework problem from Boyd's CVX 101
Step1: Regular OLS
Let's see what the OLS result looks like. We'll use the np.linalg.lstsq function to solve for our coefficients.
Step2: We can see that we are systematically overestimating low values of $y$ and vice versa (red vs. cyan). This is caused by our use of censored (blue) observations, which are exerting a lot of leverage and pulling down the trendline to reduce the error between the red and blue points.
OLS using uncensored data
A simple way to deal with this while maintaining analytical tractability is to simply ignore all censored observations.
$$
\begin{array}{ll}
\underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2
\end{array}
$$
Give that our $M$ is much smaller than $K$, we are throwing away the majority of the dataset in order to accomplish this, let's see how this new regression does.
Step3: We can see that the fit for the uncensored portion is now vastly improved. Even the fit for the censored data is now relatively unbiased i.e. the fitted values (red points) are now centered around the uncensored obsevations (cyan points).
The one glaring issue with this arrangement is that we are now predicting many observations to be below $D$ (orange) even though we are well aware that this is not the case. Let's try to fix this.
Using constraints to take into account of censored data
Instead of throwing away all censored observations, lets leverage these observations to enforce the additional information that we know, namely that $y$ is bounded from below. We can do this by setting additional constraints
Step4: Qualitatively, this already looks better than before as it no longer predicts inconsistent values with respect to the censored portion of the data. But does it do a good job of actually finding coefficients $c$ that are close to our original data?
We'll use a simple Euclidean distance $\|c_\mbox{true} - c\|_2$ to compare | Python Code:
import numpy as np
n = 30 # number of variables
M = 50 # number of censored observations
K = 200 # total number of observations
np.random.seed(n*M*K)
X = np.random.randn(K*n).reshape(K, n)
c_true = np.random.rand(n)
# generating the y variable
y = X.dot(c_true) + .3*np.sqrt(n)*np.random.randn(K)
# ordering them based on y
order = np.argsort(y)
y_ordered = y[order]
X_ordered = X[order,:]
#finding boundary
D = (y_ordered[M-1] + y_ordered[M])/2.
# applying censoring
y_censored = np.concatenate((y_ordered[:M], np.ones(K-M)*D))
import matplotlib.pyplot as plt
# Show plot inline in ipython.
%matplotlib inline
def plot_fit(fit, fit_label):
plt.figure(figsize=(10,6))
plt.grid()
plt.plot(y_censored, 'bo', label = 'censored data')
plt.plot(y_ordered, 'co', label = 'uncensored data')
plt.plot(fit, 'ro', label=fit_label)
plt.ylabel('y')
plt.legend(loc=0)
plt.xlabel('observations');
Explanation: Fitting censored data
Experimental measurements are sometimes censored such that we only know partial information about a particular data point. For example, in measuring the lifespan of mice, a portion of them might live through the duration of the study, in which case we only know the lower bound.
One of the ways we can deal with this is to use Maximum Likelihood Estimation (MLE). However, censoring often makes analytical solutions difficult even for well known distributions.
We can overcome this challenge by converting the MLE into a convex optimization problem and solving it using CVXPY.
This example is adapted from a homework problem from Boyd's CVX 101: Convex Optimization Course.
Setup
We will use similar notation here. Suppose we have a linear model:
$$ y^{(i)} = c^Tx^{(i)} +\epsilon^{(i)} $$
where $y^{(i)} \in \mathbf{R}$, $c \in \mathbf{R}^n$, $x^{(i)} \in \mathbf{R}^n$, and $\epsilon^{(i)}$ is the error and has a normal distribution $N(0, \sigma^2)$ for $ i = 1,\ldots,K$.
Then the MLE estimator $c$ is the vector that minimizes the sum of squares of the errors $\epsilon^{(i)}$, namely:
$$
\begin{array}{ll}
\underset{c}{\mbox{minimize}} & \sum_{i=1}^K (y^{(i)} - c^T x^{(i)})^2
\end{array}
$$
In the case of right censored data, only $M$ observations are fully observed and all that is known for the remaining observations is that $y^{(i)} \geq D$ for $i=\mbox{M+1},\ldots,K$ and some constant $D$.
Now let's see how this would work in practice.
Data Generation
End of explanation
c_ols = np.linalg.lstsq(X_ordered, y_censored, rcond=None)[0]
fit_ols = X_ordered.dot(c_ols)
plot_fit(fit_ols, 'OLS fit')
Explanation: Regular OLS
Let's see what the OLS result looks like. We'll use the np.linalg.lstsq function to solve for our coefficients.
End of explanation
c_ols_uncensored = np.linalg.lstsq(X_ordered[:M], y_censored[:M], rcond=None)[0]
fit_ols_uncensored = X_ordered.dot(c_ols_uncensored)
plot_fit(fit_ols_uncensored, 'OLS fit with uncensored data only')
bad_predictions = (fit_ols_uncensored<=D) & (np.arange(K)>=M)
plt.plot(np.arange(K)[bad_predictions], fit_ols_uncensored[bad_predictions], color='orange', marker='o', lw=0);
Explanation: We can see that we are systematically overestimating low values of $y$ and vice versa (red vs. cyan). This is caused by our use of censored (blue) observations, which are exerting a lot of leverage and pulling down the trendline to reduce the error between the red and blue points.
OLS using uncensored data
A simple way to deal with this while maintaining analytical tractability is to simply ignore all censored observations.
$$
\begin{array}{ll}
\underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2
\end{array}
$$
Given that our $M$ is much smaller than $K$, we are throwing away the majority of the dataset in order to accomplish this; let's see how this new regression does.
End of explanation
import cvxpy as cp
X_uncensored = X_ordered[:M, :]
c = cp.Variable(shape=n)
objective = cp.Minimize(cp.sum_squares(X_uncensored*c - y_ordered[:M]))
constraints = [ X_ordered[M:,:]*c >= D]
prob = cp.Problem(objective, constraints)
result = prob.solve()
c_cvx = np.array(c.value).flatten()
fit_cvx = X_ordered.dot(c_cvx)
plot_fit(fit_cvx, 'CVX fit')
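# Optional (not in the original example): confirm the solver actually reached an optimal
# solution before trusting c_cvx.
print(prob.status)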
Explanation: We can see that the fit for the uncensored portion is now vastly improved. Even the fit for the censored data is now relatively unbiased, i.e. the fitted values (red points) are now centered around the uncensored observations (cyan points).
The one glaring issue with this arrangement is that we are now predicting many observations to be below $D$ (orange) even though we are well aware that this is not the case. Let's try to fix this.
Using constraints to take censored data into account
Instead of throwing away all censored observations, let's leverage these observations to enforce the additional information that we know, namely that $y$ is bounded from below. We can do this by setting additional constraints:
$$
\begin{array}{ll}
\underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2 \\
\mbox{subject to} & c^T x^{(i)} \geq D \\
& \mbox{for } i=\mbox{M+1},\ldots,K
\end{array}
$$
End of explanation
print("norm(c_true - c_cvx): {:.2f}".format(np.linalg.norm((c_true - c_cvx))))
print("norm(c_true - c_ols_uncensored): {:.2f}".format(np.linalg.norm((c_true - c_ols_uncensored))))
Explanation: Qualitatively, this already looks better than before as it no longer predicts inconsistent values with respect to the censored portion of the data. But does it do a good job of actually finding coefficients $c$ that are close to our original data?
We'll use a simple Euclidean distance $\|c_\mbox{true} - c\|_2$ to compare:
End of explanation |
6,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
_x = np.array(x)
return _x / 256
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
import pandas as pd
category_list = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
category_indicies = list(range(len(category_list)))
category_encodings = pd.Series(category_indicies)
category_encodings = pd.get_dummies(category_encodings)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
return np.array([np.array(category_encodings[label]) for label in x])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
import numpy as np
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name = 'x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, [None, n_classes], name = 'y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name = 'keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): # , dropout = None):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
weights = tf.Variable(tf.truncated_normal(
[conv_ksize[0], conv_ksize[1], int(x_tensor.shape[3]), conv_num_outputs],
stddev = 0.05,
seed = 1234.56))
biass = tf.Variable(tf.zeros(conv_num_outputs))
flow = tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], padding = 'SAME')
flow = tf.nn.bias_add(flow, biass)
flow = tf.nn.relu(flow)
flow = tf.nn.max_pool(flow, [1, pool_ksize[0], pool_ksize[1], 1], [1, pool_strides[0], pool_strides[1], 1], padding = 'SAME')
# flow = tf.layers.dropout(flow, rate = dropout) if dropout != None else flow
return flow
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
batch = -1 # How to extract this???
flattened_image = int(np.product([x_tensor.shape[1], x_tensor.shape[2], x_tensor.shape[3]]))
return tf.reshape(x_tensor, [batch, flattened_image])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs): # , dropout = None):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
flow = tf.layers.dense(x_tensor, units = num_outputs, activation = tf.nn.relu)
# flow = tf.layers.dropout(flow, rate = dropout) if dropout != None else flow
return flow
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return tf.layers.dense(x_tensor, units = num_outputs, activation = None)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
dropout_rate = 1 - keep_prob
flow = conv2d_maxpool(x, 32, [3, 3], [1, 1], [2, 2], [2, 2])
flow = tf.layers.dropout(flow, rate = dropout_rate)
flow = conv2d_maxpool(flow, 64, [3, 3], [1, 1], [2, 2], [2, 2]) # , dropout = 1 - keep_prob)
flow = tf.layers.dropout(flow, rate = dropout_rate)
flow = conv2d_maxpool(flow, 128, [3, 3], [1, 1], [1, 1], [1, 1]) # , dropout = 1 - keep_prob)
flow = tf.layers.dropout(flow, rate = dropout_rate)
# flow = conv2d_maxpool(flow, 256, [1, 1], [1, 1], [1, 1], [1, 1])
flow = flatten(flow)
flow = fully_conn(flow, 128) # , dropout = 1 - keep_prob)
flow = tf.layers.dropout(flow, rate = dropout_rate)
flow = fully_conn(flow, 64) # , dropout = 1 - keep_prob)
flow = tf.layers.dropout(flow, rate = dropout_rate)
# flow = fully_conn(flow, 32)
flow = output(flow, 10)
return flow
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(
optimizer,
feed_dict = \
{
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
return
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
_cost = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
_acc = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
_valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Cost: %s, Accuracy: %s, Validation Accuracy: %s' % (_cost, _acc, _valid_acc))
return
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 12
# epochs = 10 -- hadn't stopped getting more accurate on validation
# epochs = 32 -- generally stopped getting more accurate on validation set after 12-15
# epochs = 128 -- no higher than after 12-15 epochs
batch_size = 256
keep_probability = .75
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
6,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
练习 1:写程序,可由键盘读入用户姓名例如Mr. right,让用户输入出生的月份与日期,判断用户星座,假设用户是金牛座,则输出,Mr. right,你是非常有性格的金牛座!。
Step1: 练习 2:写程序,可由键盘读入两个整数m与n(n不等于0),询问用户意图,如果要求和则计算从m到n的和输出,如果要乘积则计算从m到n的积并输出,如果要求余数则计算m除以n的余数的值并输出,否则则计算m整除n的值并输出。
Step2: 练习 3:写程序,能够根据北京雾霾PM2.5数值给出对应的防护建议。如当PM2.5数值大于500,则应该打开空气净化器,戴防雾霾口罩等。
Step3: 练习 4:英文单词单数转复数,要求输入一个英文动词(单数形式),能够得到其复数形式,或给出单数转复数形式的建议(提示,some_string.endswith(some_letter)函数可以判断某字符串结尾字符,可尝试运行:'myname'.endswith('me'),'liupengyuan'.endswith('n')`)。
Step4: 尝试性练习:写程序,能够在屏幕上显示空行。
Step5: 挑战性练习:写程序,由用户输入一些整数,能够得到几个整数中的次大值(第二大的值)并输出。 | Python Code:
name=input('请输入你的名字,回车结束:')
birthday = float(input('请输入你的出生日期(如5月20日,则输入5.20),按下回车键结束:'))
if 3.21<= birthday <=3.31 or 4.1<= birthday <=4.19:
print(name,'你是热情自信的白羊座喔!',sep='!')
elif 4.20<= birthday <= 4.30 or 5.1<= birthday <=5.20:
print(name,'你是固执又有韧性的金牛座喔!',sep='!')
elif 5.21<= birthday <=5.31 or 6.1<= birthday <=6.21:
print(name,'你是充满好奇心又多变的双子座喔!',sep='!')
elif 6.22<= birthday <=6.30 or 7.1<= birthday <=7.22:
print(name,'你是敏锐又体贴的巨蟹座喔!',sep='!')
elif 7.23<= birthday <=7.31 or 8.1<= birthday <=8.22:
print(name,'你是霸道又慷慨的狮子座喔!',sep='!')
elif 8.23<= birthday <=8.31 or 9.1<= birthday <=9.22:
print(name,'你是具有完美主义的处女座喔!',sep='!')
elif 9.23<= birthday <=9.30 or 10.1<= birthday <=10.23:
print(name,'你是优雅又追求和平的天秤座喔!',sep='!')
elif 10.24<= birthday <=10.31 or 11.1<= birthday <=11.22:
print(name,'你是冷酷又神秘的天蝎座喔!',sep='!')
elif 11.23<= birthday <=11.30 or 12.1<= birthday <=12.21:
print(name,'你是热爱自由又粗心的射手座喔!',sep='!')
elif 12.22<= birthday <=12.31 or 1.1<= birthday <=1.19:
print(name,'你是稳重严肃的摩羯座喔!',sep='!')
elif 1.20<= birthday <=1.31 or 2.1<= birthday <=2.18:
print(name,'你是智慧又叛逆的水瓶座喔!',sep='!')
elif 2.19<= birthday <=2.29 or 3.1<= birthday <=3.20:
print(name,'你是喜欢幻想又多情的双鱼座喔!',sep='!')
else:
print(name,'你一定是在乱输数字逗我玩~',sep=',')
Explanation: 练习 1:写程序,可由键盘读入用户姓名例如Mr. right,让用户输入出生的月份与日期,判断用户星座,假设用户是金牛座,则输出,Mr. right,你是非常有性格的金牛座!。
End of explanation
m = int(input('请输入一个整数,按下回车键结束:'))
n = int(input('请输入一个不为0的整数,按下回车键结束:'))
ask = input('你想让这两个数做什么运算?(如:求和、乘积、余数),回车结束:')
if ask == '求和':
if m<n:
sum = n
while m<n:
sum = sum+m
m = m+1
elif n<m:
sum = m
while n<m:
sum = sum+n
n = n+1
elif m==n:
sum=m+n
print(sum)
elif ask == '乘积':
if m<n:
product = n
while m<n:
product *= m
m = m+1
elif n<m:
product = m
while n<m:
product *= n
n = n+1
elif m==n:
product =m*n
print(product)
elif ask == '余数':
remainder=m%n
print(remainder)
else:
print(m//n)
Explanation: 练习 2:写程序,可由键盘读入两个整数m与n(n不等于0),询问用户意图,如果要求和则计算从m到n的和输出,如果要乘积则计算从m到n的积并输出,如果要求余数则计算m除以n的余数的值并输出,否则则计算m整除n的值并输出。
End of explanation
AQI = int(input('请输入今天的北京雾霾PM2.5指数,回车结束:'))
if AQI>500:
print('今天空气污染十分严重,应避免出门,在室内应该打开空气净化器,如出门一定要戴上防雾霾口罩喔!')
elif 200<= AQI <=500:
print('今天空气污染严重,出门记得戴上防雾霾口罩喔~')
elif 100<= AQI <200:
print('今天空气轻度污染,也要把口罩带在身上以防喉咙不适喔~')
elif 50<= AQI <100:
print('今天空气质量良好,可以进行户外活动~')
elif AQI<50:
print('今天空气质量优良,抓紧时间到户外活动吧!')
Explanation: 练习 3:写程序,能够根据北京雾霾PM2.5数值给出对应的防护建议。如当PM2.5数值大于500,则应该打开空气净化器,戴防雾霾口罩等。
End of explanation
word = str(input('Please type in an English word(singular form):'))
if word.endswith('s') or word.endswith('sh') or word.endswith('ch') or word.endswith('x'):
print(word,'es',sep ='')
elif word.endswith('y'):
print('若该单词为 辅音字母加y结尾,去掉y加ies;否则直接加s')
elif word == 'hero'or word == 'negro' or word =='potato' or word =='tamato':
print(word,'es',sep='')
elif word=='half'or word =='knife' or word =='leaf' or word == 'wolf' or word == 'wife'or word =='life' or word=='thief':
print('去掉f或fe加ves')
else:
print(word,'s',sep ='')
Explanation: 练习 4:英文单词单数转复数,要求输入一个英文动词(单数形式),能够得到其复数形式,或给出单数转复数形式的建议(提示,some_string.endswith(some_letter)函数可以判断某字符串结尾字符,可尝试运行:'myname'.endswith('me'),'liupengyuan'.endswith('n')`)。
End of explanation
name = input('请输入你的名字,回车键结束:')
print()
print('你好',name,sep = '!')
print('空行在下面')
print('空行真的在下面')
print()
print('空行在上面')
print('没骗你吧空行真的在上面')
Explanation: 尝试性练习:写程序,能够在屏幕上显示空行。
End of explanation
n = int(input('请输入要输入的整数个数,按下回车键结束:'))
max_num = int(input('请输入一个整数,回车结束'))
submax = 0
i = 1
while i < n:
i += 1
x = int(input('请输入一个整数,回车结束'))
if x > max_num:
submax = max_num
max_num = x
elif x < max_num and x > submax:
submax = x
print('次大值是:', submax)
Explanation: 挑战性练习:写程序,由用户输入一些整数,能够得到几个整数中的次大值(第二大的值)并输出。
End of explanation |
6,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling Protein-Ligand Interactions with Atomic Convolutions
By Nathan C. Frey | Twitter and Bharath Ramsundar | Twitter
This DeepChem tutorial introduces the Atomic Convolutional Neural Network. We'll see the structure of the AtomicConvModel and write a simple program to run Atomic Convolutions.
ACNN Architecture
ACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.
The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. The following methods are used to build the ACNN architecture
Step1: Getting protein-ligand data
If you worked through Tutorial 13 on modeling protein-ligand interactions, you'll already be familiar with how to obtain a set of data from PDBbind for training our model. Since we explored molecular complexes in detail in the previous tutorial, this time we'll simply initialize an AtomicConvFeaturizer and load the PDBbind dataset directly using MolNet.
Step2: load_pdbbind allows us to specify if we want to use the entire protein or only the binding pocket (pocket=True) for featurization. Using only the pocket saves memory and speeds up the featurization. We can also use the "core" dataset of ~200 high-quality complexes for rapidly testing our model, or the larger "refined" set of nearly 5000 complexes for more datapoints and more robust training/validation. On Colab, it takes only a minute to featurize the core PDBbind set! This is pretty incredible, and it means you can quickly experiment with different featurizations and model architectures.
Step3: Training the model
Now that we've got our dataset, let's go ahead and initialize an AtomicConvModel to train. Keep the input parameters the same as those used in AtomicConvFeaturizer, or else we'll get errors. layer_sizes controls the number of layers and the size of each dense layer in the network. We choose these hyperparameters to be the same as those used in the original paper.
Step4: The loss curves are not exactly smooth, which is unsurprising because we are using 154 training and 19 validation datapoints. Increasing the dataset size may help with this, but will also require greater computational resources.
Step5: The ACNN paper showed a Pearson $R^2$ score of 0.912 and 0.448 for a random 80/20 split of the PDBbind core train/test sets. Here, we've used an 80/10/10 training/validation/test split and achieved similar performance for the training set (0.943). We can see from the performance on the training, validation, and test sets (and from the results in the paper) that the ACNN can learn chemical interactions from small training datasets, but struggles to generalize. Still, it is pretty amazing that we can train an AtomicConvModel with only a few lines of code and start predicting binding affinities!
From here, you can experiment with different hyperparameters, more challenging splits, and the "refined" set of PDBbind to see if you can reduce overfitting and come up with a more robust model. | Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!/root/miniconda/bin/conda install -c conda-forge mdtraj -y -q # needed for AtomicConvs
!pip install --pre deepchem
import deepchem
deepchem.__version__
import deepchem as dc
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from rdkit import Chem
from deepchem.molnet import load_pdbbind
from deepchem.models import AtomicConvModel
from deepchem.feat import AtomicConvFeaturizer
Explanation: Modeling Protein-Ligand Interactions with Atomic Convolutions
By Nathan C. Frey | Twitter and Bharath Ramsundar | Twitter
This DeepChem tutorial introduces the Atomic Convolutional Neural Network. We'll see the structure of the AtomicConvModel and write a simple program to run Atomic Convolutions.
ACNN Architecture
ACNN’s directly exploit the local three-dimensional structure of molecules to hierarchically learn more complex chemical features by optimizing both the model and featurization simultaneously in an end-to-end fashion.
The atom type convolution makes use of a neighbor-listed distance matrix to extract features encoding local chemical environments from an input representation (Cartesian atomic coordinates) that does not necessarily contain spatial locality. The following methods are used to build the ACNN architecture:
Distance Matrix
The distance matrix $R$ is constructed from the Cartesian atomic coordinates $X$. It calculates distances from the distance tensor $D$. The distance matrix construction accepts as input a $(N, 3)$ coordinate matrix $C$. This matrix is “neighbor listed” into a $(N, M)$ matrix $R$.
python
R = tf.reduce_sum(tf.multiply(D, D), 3) # D: Distance Tensor
R = tf.sqrt(R) # R: Distance Matrix
return R
Atom type convolution
The output of the atom type convolution is constructed from the distance matrix $R$ and atomic number matrix $Z$. The matrix $R$ is fed into a (1x1) filter with stride 1 and depth of $N_{at}$ , where $N_{at}$ is the number of unique atomic numbers (atom types) present in the molecular system. The atom type convolution kernel is a step function that operates on the neighbor distance matrix $R$.
Radial Pooling layer
Radial Pooling is basically a dimensionality reduction process that down-samples the output of the atom type convolutions. The reduction process prevents overfitting by providing an abstracted form of representation through feature binning, as well as reducing the number of parameters learned.
Mathematically, radial pooling layers pool over tensor slices (receptive fields) of size (1x$M$x1) with stride 1 and a depth of $N_r$, where $N_r$ is the number of desired radial filters and $M$ is the maximum number of neighbors.
Atomistic fully connected network
Atomic Convolution layers are stacked by feeding the flattened ($N$, $N_{at}$ $\cdot$ $N_r$) output of the radial pooling layer into the atom type convolution operation. Finally, we feed the tensor row-wise (per-atom) into a fully-connected network. The
same fully connected weights and biases are used for each atom in a given molecule.
Now that we have seen the structural overview of ACNNs, we'll try to get deeper into the model and see how we can train it and what we expect as the output.
For the training, we will use the publicly available PDBbind dataset. In this example, every row reflects a protein-ligand complex and the target is the binding affinity ($K_i$) of the ligand to the protein in the complex.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
End of explanation
f1_num_atoms = 100 # maximum number of atoms to consider in the ligand
f2_num_atoms = 1000 # maximum number of atoms to consider in the protein
max_num_neighbors = 12 # maximum number of spatial neighbors for an atom
acf = AtomicConvFeaturizer(frag1_num_atoms=f1_num_atoms,
frag2_num_atoms=f2_num_atoms,
complex_num_atoms=f1_num_atoms+f2_num_atoms,
max_num_neighbors=max_num_neighbors,
neighbor_cutoff=4)
Explanation: Getting protein-ligand data
If you worked through Tutorial 13 on modeling protein-ligand interactions, you'll already be familiar with how to obtain a set of data from PDBbind for training our model. Since we explored molecular complexes in detail in the previous tutorial, this time we'll simply initialize an AtomicConvFeaturizer and load the PDBbind dataset directly using MolNet.
End of explanation
%%time
tasks, datasets, transformers = load_pdbbind(featurizer=acf,
save_dir='.',
data_dir='.',
pocket=True,
reload=False,
set_name='core')
datasets
train, val, test = datasets
Explanation: load_pdbbind allows us to specify if we want to use the entire protein or only the binding pocket (pocket=True) for featurization. Using only the pocket saves memory and speeds up the featurization. We can also use the "core" dataset of ~200 high-quality complexes for rapidly testing our model, or the larger "refined" set of nearly 5000 complexes for more datapoints and more robust training/validation. On Colab, it takes only a minute to featurize the core PDBbind set! This is pretty incredible, and it means you can quickly experiment with different featurizations and model architectures.
End of explanation
acm = AtomicConvModel(n_tasks=1,
frag1_num_atoms=f1_num_atoms,
frag2_num_atoms=f2_num_atoms,
complex_num_atoms=f1_num_atoms+f2_num_atoms,
max_num_neighbors=max_num_neighbors,
batch_size=12,
layer_sizes=[32, 32, 16],
learning_rate=0.003,
)
losses, val_losses = [], []
%%time
max_epochs = 50
for epoch in range(max_epochs):
loss = acm.fit(train, nb_epoch=1, max_checkpoints_to_keep=1, all_losses=losses)
metric = dc.metrics.Metric(dc.metrics.score_function.rms_score)
val_losses.append(acm.evaluate(val, metrics=[metric])['rms_score']**2) # L2 Loss
Explanation: Training the model
Now that we've got our dataset, let's go ahead and initialize an AtomicConvModel to train. Keep the input parameters the same as those used in AtomicConvFeaturizer, or else we'll get errors. layer_sizes controls the number of layers and the size of each dense layer in the network. We choose these hyperparameters to be the same as those used in the original paper.
End of explanation
f, ax = plt.subplots()
ax.scatter(range(len(losses)), losses, label='train loss')
ax.scatter(range(len(val_losses)), val_losses, label='val loss')
plt.legend(loc='upper right');
Explanation: The loss curves are not exactly smooth, which is unsurprising because we are using 154 training and 19 validation datapoints. Increasing the dataset size may help with this, but will also require greater computational resources.
End of explanation
score = dc.metrics.Metric(dc.metrics.score_function.pearson_r2_score)
for tvt, ds in zip(['train', 'val', 'test'], datasets):
print(tvt, acm.evaluate(ds, metrics=[score]))
Explanation: The ACNN paper showed a Pearson $R^2$ score of 0.912 and 0.448 for a random 80/20 split of the PDBbind core train/test sets. Here, we've used an 80/10/10 training/validation/test split and achieved similar performance for the training set (0.943). We can see from the performance on the training, validation, and test sets (and from the results in the paper) that the ACNN can learn chemical interactions from small training datasets, but struggles to generalize. Still, it is pretty amazing that we can train an AtomicConvModel with only a few lines of code and start predicting binding affinities!
From here, you can experiment with different hyperparameters, more challenging splits, and the "refined" set of PDBbind to see if you can reduce overfitting and come up with a more robust model.
End of explanation |
6,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test script to find all locations with large swirl
Aim is to take a velocity field, find all locations with large swirl, and then identify distinct blobs of swirl.
This script makes use of the Source Extraction and Photometry (SEP) library
Step1: Estimate background
Step2: Now extract objects
Step3: np.ascontiguousarray(Swirl[ | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import h5py
from importlib import reload
import sep
f = h5py.File('/Users/Owen/Dropbox/Data/ABL/SBL PIV data/RNV45-RI2.mat')
#list(f.keys())
Swirl = np.asarray(f['Swirl'])
X = np.asarray(f['X'])
Y = np.asarray(f['Y'])
X = np.transpose(X,(1,0))
Y = np.transpose(Y,(1,0))
Swirl = np.transpose(Swirl,(2,1,0))
NanLocs = np.isnan(Swirl)
uSize = Swirl.shape
plt.figure(figsize = [8,3])
plt.pcolor(X,Y,Swirl[:,:,1], cmap='RdBu');
plt.clim([-50, 50])
plt.axis('scaled')
plt.xlim([X.min(), X.max()])
plt.ylim([Y.min(), Y.max()])
plt.colorbar()
#Find profile of swirl std
SwirlStd = np.std(np.nanmean(Swirl,axis=2),axis = 1)
plt.plot(SwirlStd,Y[:,1])
plt.ylabel('y(m)')
plt.xlabel('Swirl rms')
Y[1].shape
SwirlStd.shape
#Normalize field by the std of Swirl
Swirl = Swirl/SwirlStd.reshape(uSize[0],1,1) #match the SwirlStd length (123) with the correct index in Swirl (also 123)
plt.figure(figsize = [8,3])
plt.pcolor(X,Y,Swirl[:,:,1], cmap='RdBu');
plt.clim([-200, 200])
plt.axis('scaled')
plt.xlim([X.min(), X.max()])
plt.ylim([Y.min(), Y.max()])
plt.colorbar()
Swirl[NanLocs] = 0 #Get rid of nans for now
Explanation: Test script to find all locations with large swirl
Aim is to take a velocity field, find all locations with large swirl, and then identify distinct blobs of swirl.
This script makes use of the Source Extraction and Photometry (SEP) library
End of explanation
bkg = sep.Background(np.ascontiguousarray(Swirl[:,:,1]))
bkg_image = bkg.back()
plt.imshow(bkg_image, interpolation='nearest', cmap='gray', origin='lower')
plt.colorbar();
bkg_rms = bkg.rms()
plt.imshow(bkg_rms, interpolation='nearest', cmap='gray', origin='lower')
plt.colorbar();
Explanation: Estimate background
End of explanation
#creat filter kernal
kern = np.array([[1,2,1], [2,4,2], [1,2,1]]) #Basic default kernal
kern = np.array([[1,2,4,2,1],[2,3,5,3,2],[3,6,8,6,3],[2,3,5,3,2],[1,2,4,2,1]]) #Basic default kernal
from scipy.stats import multivariate_normal as mvnorm
x = np.linspace(-5, 5, 100)
y = mvnorm.pdf(x, mean=0, cov=1)
#plt.plot(x,y)
#mvnorm.pdf(
x = np.mgrid[-1:1:.01]
y = x;
r = (x**2+y**2)**0.5
kern = np.empty(x.shape)
#for i in kern.shape[0]
# kern[i,:] = mvnorm.pdf(r[i,:], mean=0, cov=1)
#plt.imshow(kern)
#y = mvnorm.pdf(x, mean=0, cov=1)
#pos = np.empty(x.shape + (2,))l
#pos[:, :, 0] = x; pos[:, :, 1] = y
x = np.mgrid[-10:10:1]
x.shape
objects = sep.extract(np.ascontiguousarray(Swirl[:,:,1]), 1.5, err=bkg.globalrms,filter_kernel=kern)
Explanation: Now extract objects
End of explanation
len(objects)
from matplotlib.patches import Ellipse
#fig, ax = plt.subplots()
plt.figure(figsize = [8,3])
plt.pcolor(X,Y,Swirl[:,:,1], cmap='RdBu_r');
ax = plt.gca()
plt.clim([-50, 50])
plt.axis('scaled')
plt.xlim([X.min(), X.max()])
plt.ylim([Y.min(), Y.max()])
plt.colorbar()
scale = (X[1,-1]-X[1,1])/uSize[1]
#plt.plot(objects['x']*scale,objects['y']*scale,'go')
for i in range(len(objects)):
e = Ellipse(xy=(objects['x'][i]*scale, objects['y'][i]*scale),
width=6*objects['a'][i]*scale,
height=6*objects['b'][i]*scale,
angle=objects['theta'][i] * 180. / np.pi)
e.set_facecolor('none')
e.set_edgecolor('red')
ax.add_artist(e)
#objects['x']
scale = (X[1,-1]-X[1,1])/uSize[1]
objects['x']*scale
X[objects['x'],objects['y']]
objects['x']
Explanation: np.ascontiguousarray(Swirl[:,:,1]).flags how to make array C contiguous
End of explanation |
6,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tables illustration of working with computational models of probability
David Culler
This notebook seeks to illustrate simple datascience.Table operations as part of a basic lesson on probability.
Documentation on the datascience module is at http
Step1: Create a table as a model of a stochastic phenomenom
Here we create a single column table as a computational model of a die
with each element of the table containing the number of dots on the side.
This illustrates the simplest way of constructing a table, Table.with_column.
Then we define a function that models rolling a die. This illustrates the
use of Table.sample
to take random sample of a table.
Step2: Composition
Build a computational model of rolling a die many times using our roll_die function as a building block. It happens to utilize tables internally, but we have abstracted away from that. Here it is a black box that yields a random roll of a die. Again, we create a table to model the result.
Step3: Visualization
Above we see just the tip of the table. And, of course, it would be tedious to look at all those rolls. Instead, we want to look at some descriptive statistics of the process. We can do that with Table.hist, which
can be used to produce a histogram or a discrete distribution (the default, i.e., normed = True).
The histogram of the rolls shows what we mean by 'uniform at random'. All sides are equally likely to come up on each roll. Thus the number of times each comes up in a large number of rolls is nearly constant. But not quite.
The rolls table it self won't change on its own, but every time you run the cell above, you will get a slightly different picture.
Step4: Computing on distributions
While visualization is useful for humans in the data exploration process, everything you see you should be able to compute upon. The analog of Table.hist that yields a table, rather than a chart is table.bin. It returns a new table with a row for each bin.
Here we also illustrate doing some computing on the distribution table
Step6: Statistical thinking
They say "life is about rolling dice". The statistical perspective on the rolls table above would be captured by sampling many times from the die table. We can capture than naturally in a computational abstraction that rolls a die n times.
Step8: Interactive visualization
The central concept of computational thinking - abstraction. Here it is illustrated again by wrapping up the process of rolling many die and visualizing the resulting distribution into a function.
Once we have it as a function, we can illustrate the central concept of inferential thinking - the law of large numbers - through interactive visualization. When a dies is rolled only a few times, the resulting distribution may be very uneven. But when it is rolled many many times, it is extremely rare for the result to be uneven.
Step9: Likelihood
If we really roll the dice several times in life, what might we expect the overall outcome to be like?
We can extend our computational approach further by simulating the rolling of several die many many times. | Python Code:
# HIDDEN - generic nonsense for setting up environment
from datascience import *
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
from ipywidgets import interact
# datascience version number of last run of this notebook
version.__version__
Explanation: Tables illustration of working with computational models of probability
David Culler
This notebook seeks to illustrate simple datascience.Table operations as part of a basic lesson on probability.
Documentation on the datascience module is at http://data8.org/datascience/index.html and of Tables as http://data8.org/datascience/tables.html.
End of explanation
die = Table().with_column('side', [1,2,3,4,5,6])
die
# Simulate the roll of a die by sampling from the die table
def roll_die():
return die.sample(1)['side'][0]
# roll it. Try this over and over and see what you get
roll_die()
Explanation: Create a table as a model of a stochastic phenomenom
Here we create a single column table as a computational model of a die
with each element of the table containing the number of dots on the side.
This illustrates the simplest way of constructing a table, Table.with_column.
Then we define a function that models rolling a die. This illustrates the
use of Table.sample
to take random sample of a table.
End of explanation
# Simulate rolling it many times, creating a table that records the rolls
num_rolls = 600
rolls = Table().with_column('roll', [roll_die() for i in range(num_rolls)])
rolls
Explanation: Composition
Build a computational model of rolling a die many times using our roll_die function as a building block. It happens to utilize tables internally, but we have abstracted away from that. Here it is a black box that yields a random roll of a die. Again, we create a table to model the result.
End of explanation
bins = np.arange(1,8)
rolls.hist(bins=bins, normed=False)
# Normalize this gives a distribution. The probability of each side appearing. 1/6.
rolls.hist(normed=True,bins=bins)
Explanation: Visualization
Above we see just the tip of the table. And, of course, it would be tedious to look at all those rolls. Instead, we want to look at some descriptive statistics of the process. We can do that with Table.hist, which
can be used to produce a histogram or a discrete distribution (the default, i.e., normed = True).
The histogram of the rolls shows what we mean by 'uniform at random'. All sides are equally likely to come up on each roll. Thus the number of times each comes up in a large number of rolls is nearly constant. But not quite.
The rolls table it self won't change on its own, but every time you run the cell above, you will get a slightly different picture.
End of explanation
roll_dist = rolls.bin(normed=True,bins=bins).take(range(6))
roll_dist
roll_dist['roll density']
roll_dist['Variation'] = (roll_dist['roll density'] - 1/6)/(1/6)
roll_dist
# What is the average value of a roll?
sum(roll_dist['bin']*roll_dist['roll density'])
np.mean(rolls['roll'])
Explanation: Computing on distributions
While visualization is useful for humans in the data exploration process, everything you see you should be able to compute upon. The analog of Table.hist that yields a table, rather than a chart is table.bin. It returns a new table with a row for each bin.
Here we also illustrate doing some computing on the distribution table:
* A column of a table is accessed using the standard python get syntax: <object> [ <key> ]. This actually yields an object that is a numpy array, but part of the beauty of tables is you don't have to worry about what that is. The beauty of numpy arrays is that you can work with them pretty much like values, i.e., you can scale them by a constant, add them together and things like that.
* A column is inserted in the table using the standard python set syntax for objects <object> [ <key> ] = <value>. Note that this modifies the table, adding a column if it does not exist ro updating it if it does. The transformations on tables are functional, they produce new tables. Set treats a table like an object and modifies it.
End of explanation
# Life is about rolling lots of dice.
# Simulate rolling n dice.
def roll(n):
Roll n die. Return a table of the rolls
return die.sample(n, with_replacement=True)
# try it out. many times
roll(10)
Explanation: Statistical thinking
They say "life is about rolling dice". The statistical perspective on the rolls table above would be captured by sampling many times from the die table. We can capture than naturally in a computational abstraction that rolls a die n times.
End of explanation
def show_die_dist(n):
Roll a die n times and show the distribution of sides that appear.
roll(n).hist(bins=np.arange(1,8))
# We can now use the ipywidget we had included at the beginning.
interact(show_die_dist, n=(10, 1000, 10))
Explanation: Interactive visualization
The central concept of computational thinking - abstraction. Here it is illustrated again by wrapping up the process of rolling many die and visualizing the resulting distribution into a function.
Once we have it as a function, we can illustrate the central concept of inferential thinking - the law of large numbers - through interactive visualization. When a dies is rolled only a few times, the resulting distribution may be very uneven. But when it is rolled many many times, it is extremely rare for the result to be uneven.
End of explanation
num_die = 10
num_rolls = 100
# Remember - referencing a column gives an array
roll(num_die)['side']
# Simulate rolling num_die dice num_rolls times and build a table of the result
rolls = Table(["die_"+str(i) for i in range(num_die)]).with_rows([roll(num_die)['side'] for i in range(num_rolls)])
rolls
# If we think of each row as a life experience, what is the life like?
label = "{}_dice".format(num_die)
sum_rolls = Table().with_column(label, [np.sum(roll(num_die)['side']) for i in range(num_rolls)])
sum_rolls.hist(range=[10,6*num_die], normed=False)
sum_rolls.stats()
# Or as a distribution
sum_rolls.hist(range=[10,6*num_die],normed=True)
# Or normalize by the number of die ...
#
Table().with_column(label, [np.sum(roll(num_die)['side'])/num_die for i in range(num_rolls)]).hist(normed=False)
Explanation: Likelihood
If we really roll the dice several times in life, what might we expect the overall outcome to be like?
We can extend our computational approach further by simulating the rolling of several die many many times.
End of explanation |
6,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SymPyとチャート式で復習する高校数学I - PyLadies Tokyo Meetup #6 LT
お前だれよ?
@iktakahiro
blog
Step1: 式の計算
次の計算をせよ。(13頁, 基礎例題5)
(1) $(5x^3+3x-2x^2-4)+(3x^3-3x^2+5)$
Step2: 答え
Step3: 答え
Step4: 答え
Step5: 答え
Step6: 答え
Step7: (SymPyは出てきませんでした..)
2次方程式
次の2次方程式を解け。(141頁, 基礎例題85)
(1) $x^2-x-20$
(2) $x^2-12x-36$
Step8: 答え | Python Code:
import sympy
# 記号の定義
x, y = sympy.symbols('x y')
# 式の定義
expr = 2 * x + y
print('定義された式:\n', expr)
# x, y に数値を代入
a1 = expr.subs([(x, 4), (y, 3)])
print('\nx=4, Y=3の場合:\n', a1)
a2 = expr - y
print('\nexpr から y をマイナス:\n', a2)
Explanation: SymPyとチャート式で復習する高校数学I - PyLadies Tokyo Meetup #6 LT
お前だれよ?
@iktakahiro
blog: https://librabuch.jp
PyData.Tokyo オーガナイザー
Python 2014 チュートリアル PyData 担当
Pythonエンジニア養成読本 第四章PyData入門 執筆
<a href="http://www.amazon.co.jp/gp/product/4774173207/ref=as_li_ss_il?ie=UTF8&camp=247&creative=7399&creativeASIN=4774173207&linkCode=as2&tag=librabuch-22"><img border="0" src="http://ws-fe.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=4774173207&Format=_SL250_&ID=AsinImage&MarketPlace=JP&ServiceVersion=20070822&WS=1&tag=librabuch-22" ></a><img src="http://ir-jp.amazon-adsystem.com/e/ir?t=librabuch-22&l=as2&o=9&a=4774173207" width="1" height="1" border="0" alt="" style="border:none !important; margin:0px !important;" />
本日のテキスト
<a href="http://www.amazon.co.jp/gp/product/4410102044/ref=as_li_ss_il?ie=UTF8&camp=247&creative=7399&creativeASIN=4410102044&linkCode=as2&tag=librabuch-22"><img border="0" src="http://ws-fe.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=4410102044&Format=_SL250_&ID=AsinImage&MarketPlace=JP&ServiceVersion=20070822&WS=1&tag=librabuch-22" ></a><img src="http://ir-jp.amazon-adsystem.com/e/ir?t=librabuch-22&l=as2&o=9&a=4410102044" width="1" height="1" border="0" alt="" style="border:none !important; margin:0px !important;" />
数学1の範囲
式の計算
実数, 一次不等式
集合と命題
2次関数
2次方程式と2次不等式
三角比
三角形への応用
データの分析
本日の復習範囲
式の計算
一次不等式
2次関数
2次方程式と2次不等式
SymPyとは
SymPyとは、数式処理(symbolic mathematis)を行うためのパッケージです。
pipインストール可能です。
sh
pip install sympy
次のような計算が数式処理の最もベーシックなものです。
End of explanation
l = 5 * x ** 3 + 3 * x - 2 * x ** 2 - 4
r = 3 * x ** 3 - 3 * x ** 2 + 5
l + r
Explanation: 式の計算
次の計算をせよ。(13頁, 基礎例題5)
(1) $(5x^3+3x-2x^2-4)+(3x^3-3x^2+5)$
End of explanation
sympy.expand((x-2*y+1)*(x-2*y-2))
Explanation: 答え: $8x^3-5x^2+3x+1$
式の展開
次の式を展開せよ。(20頁, 基礎例題10)
(1) $(x-2y+1)(x-2y-2)$
(2) $(a+b+c)^2$
End of explanation
a, b, c = sympy.symbols('a b c')
sympy.expand((a+b+c)**2)
Explanation: 答え: $x^2-4xy-x+4y^2+2y-2$
End of explanation
sympy.factor(x**2+8*x+15)
Explanation: 答え: $a^2+b^2+c^2+2ab+2bc+2ca$
因数分解
次の式を因数分解せよ(26頁, 基礎例題14)
(1) $x^2+8x+15$
End of explanation
from sympy.solvers.inequalities import reduce_rational_inequalities
reduce_rational_inequalities([[4 * x + 5 > 2 * x -3]], x)
Explanation: 答え: $(x+3)(x+5)$
不等式
次の不等式を解け。(58頁, 基礎例題35)
(1) $4x+5>2x-3$
End of explanation
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
x = np.arange(-1.8, 4 , 0.2)
y = 2 * x ** 2 - 4 * x - 1
plt.style.use('ggplot')
plt.plot(x,y)
plt.axhline(y=0, color='gray')
plt.axvline(x=0, color='gray')
plt.xlabel("x")
plt.ylabel("y")
plt.show()
Explanation: 答え: $x > -4$
二次関数
次の二次関数のグラフをかけ。(109頁, 基礎例題66)
(1) $y = 2x^2-4x-1$
End of explanation
import sympy
from sympy.solvers import solve
x = sympy.symbols('x')
expr = x**2 - x - 20
solve(expr, x)
Explanation: (SymPyは出てきませんでした..)
2次方程式
次の2次方程式を解け。(141頁, 基礎例題85)
(1) $x^2-x-20$
(2) $x^2-12x-36$
End of explanation
expr = x**2 - 12 * x + 36
solve(expr, x)
Explanation: 答え: $x = -4, 5$
End of explanation |
6,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Seminar 14
Newton method
Reminder
Descent methods
Descent directions
Gradient descent
Step size selection rules
Convergence theorem
Experiments
Drawbacks of gradient descent
Linear convergence
Dependence on the condition number
Can we fix both of them?
Idea of Newton method
Consider the problem
$$
\min\limits_{x\ \in \mathbb{R}^n} f(x).
$$
Gradient descent $\equiv$ linear approximation of $f$
Newton method $\equiv$ quadratic approximation of $f$
Step1: Exact solution with CVXPy
Step2: Auxilliary functions
Step3: Implementation of Newton method
Step4: Comparison with gradient descent
Step5: Comparison of running time | Python Code:
import numpy as np
USE_COLAB = False
if USE_COLAB:
!pip install git+https://github.com/amkatrutsa/liboptpy
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
n = 1000
m = 200
x0 = np.zeros((n,))
A = np.random.rand(n, m) * 10
Explanation: Seminar 14
Newton method
Reminder
Descent methods
Descent directions
Gradient descent
Step size selection rules
Convergence theorem
Experiments
Drawbacks of gradient descent
Linear convergence
Dependence on the condition number
Can we fix both of them?
Idea of Newton method
Consider the problem
$$
\min\limits_{x\ \in \mathbb{R}^n} f(x).
$$
Gradient descent $\equiv$ linear approximation of $f$
Newton method $\equiv$ quadratic approximation of $f$:
$$
f(x + h) \approx f(x) + \langle f'(x), h \rangle + \frac{1}{2}h^{\top}f''(x)h \to \min_{h}
$$
Setting the gradient of this quadratic model to zero gives:
$$
f'(x) + f''(x) h = 0, \qquad h^* = -(f''(x))^{-1} f'(x)
$$
Is the resulting direction a descent direction?
Check the sign of the scalar product $\langle f'(x), h^* \rangle$.
$$
\langle f'(x), h^* \rangle = -(f')^{\top}(x) (f''(x))^{-1} f'(x) < 0 \Leftarrow f''(x) \succ 0
$$
Q: what if the Hessian becomes indefinite at some iteration $k^*$?
Newton method
Classical Newton method: $\alpha_k \equiv 1$
Damped Newton method: $\alpha_k$ is selected at every iteration according to a given rule
```python
def NewtonMethod(f, x0, epsilon, **kwargs):
x = x0
while True:
h = ComputeNewtonStep(x, f, **kwargs)
if StopCriterion(x, f, h, **kwargs) < epsilon:
break
alpha = SelectStepSize(x, h, f, **kwargs)
x = x + alpha * h
return x
```
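For concreteness, here is one common choice of SelectStepSize — a backtracking (Armijo) line search. This is an illustrative sketch only; the experiments below use the ready-made Backtracking("Armijo", ...) rule from liboptpy instead.
```python
def backtracking_armijo(x, h, f, grad_f, alpha0=1.0, rho=0.5, beta=0.1):
    # Shrink alpha until the Armijo sufficient-decrease condition holds:
    # f(x + alpha*h) <= f(x) + beta * alpha * <grad f(x), h>
    alpha = alpha0
    while f(x + alpha * h) > f(x) + beta * alpha * grad_f(x).dot(h):
        alpha *= rho
    return alpha
```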
Convergence theorem (Y. E. Nesterov Introduction to convex optimization, $\S$ 1.2)
Theorem. Assume that $f(x)$ is
- twice differentiable and its hessian is Lipschitz with constant $M$
- there exists local minimizer where the hessian is positive definite
$$
f''(x^*) \succeq l\mathbf{I}, \; l > 0
$$
- the starting point $x_0$ is sufficiently close to the minimizer
$$
\|x_0 - x^*\|_2 \leq \frac{2l}{3M}
$$
Then the Newton method converges quadratically:
$$
\|x_{k+1} - x^*\|_2 \leq \dfrac{M\|x_k - x^*\|^2_2}{2 (l - M\|x_k - x^*\|_2)}
$$
Example
Use Newton method to find root of the following function
$$
\varphi(t) = \dfrac{t}{\sqrt{1+t^2}}
$$
and find from what interval of $t_0$ it converges
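A quick numerical illustration of this example (added here as a sketch, not part of the original seminar): for this particular $\varphi$ the Newton update $t_{k+1} = t_k - \varphi(t_k)/\varphi'(t_k)$ simplifies to $t_{k+1} = -t_k^3$, so the iterates converge to the root $t^* = 0$ exactly when $|t_0| < 1$ (for $|t_0| \geq 1$ they oscillate or blow up).
```python
import math

def phi(t):
    return t / math.sqrt(1 + t ** 2)

def dphi(t):
    return (1 + t ** 2) ** (-1.5)

def newton_root(t0, num_iter=8):
    t = t0
    for _ in range(num_iter):
        t = t - phi(t) / dphi(t)  # equivalent to t = -t**3 for this phi
    return t

for t0 in [0.5, 0.99, 1.0, 1.01]:
    print(t0, "->", newton_root(t0))
```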
Affine invariance
Consider a function $f(x)$ and a non-singular transformation with matrix $A$.
Check how Newton method direction will be changed after transformation $A$.
Let $x = Ay$ and $g(y) = f(Ay)$. Then
$$
g(y + u) \approx g(y) + \langle g'(y), u \rangle + \frac{1}{2} u^{\top} g''(y) u \to \min_{u}
$$
and
$$
u^* = -(g''(y))^{-1} g'(y) \qquad y_{k+1} = y_k - (g''(y_k))^{-1} g'(y_k)
$$
or
\begin{align}
y_{k+1} & = y_k - (A^{\top}f''(Ay_k)A)^{-1} A^{\top}f'(Ay_k)\\
& = y_k - A^{-1}(f''(Ay_k))^{-1}f'(Ay_k)
\end{align}
Thus,
$$
Ay_{k+1} = Ay_k - (f''(Ay_k))^{-1}f'(Ay_k) \quad x_{k+1} = x_k - (f''(x_k))^{-1}f'(x_k)
$$
Therefore, the direction produced by the Newton method transforms in the same way as the coordinates!
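A small numerical check of this affine-invariance property (an illustration added here, not part of the original seminar). It uses $f(x) = \sum_i e^{x_i}$, whose gradient and Hessian have simple closed forms, and a fresh matrix A_aff, named so that it does not clash with the matrix A defined elsewhere in this notebook:
```python
import numpy as np

rng = np.random.RandomState(0)
d = 4
A_aff = rng.rand(d, d) + d * np.eye(d)   # non-singular, reasonably well-conditioned

f_grad = lambda x: np.exp(x)             # gradient of f(x) = sum(exp(x))
f_hess = lambda x: np.diag(np.exp(x))    # Hessian of f

g_grad = lambda y: A_aff.T.dot(f_grad(A_aff.dot(y)))
g_hess = lambda y: A_aff.T.dot(f_hess(A_aff.dot(y))).dot(A_aff)

y = rng.rand(d)
x = A_aff.dot(y)
x_next = x - np.linalg.solve(f_hess(x), f_grad(x))
y_next = y - np.linalg.solve(g_hess(y), g_grad(y))
print(np.allclose(A_aff.dot(y_next), x_next))   # True: A y_{k+1} equals x_{k+1}
```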
Newton method with hessian modification
How do we deal with the Hessian possibly failing to be positive definite at some iteration?
If $f''(x)$ is not positive definite, use positive definite matrix $f''(x) + \Delta E$
Matrix $\Delta E$ can be chosen in different ways using the following problem
$$
\Delta E = \arg\min \|\Delta E\|, \quad \text{s.t. } f''(x) + \Delta E \succ 0
$$
$\|\cdot\|_2$: $\Delta E = \tau I$, where $\tau = \max(0, \delta - \lambda_{\min}(f''(x)))$ and $\delta > 0$ is a given estimate of the minimal eigenvalue of the matrix $f''(x) + \Delta E$
What is $\Delta E$ if one uses $\|\cdot\|_F$?
Since $\lambda_{\min}(f''(x))$ is usually not available at every iteration, one can instead modify the Cholesky factorization algorithm so that it produces a factor of $f''(x) + \Delta E$ rather than of the original matrix $f''(x)$
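A minimal sketch of the $\Delta E = \tau I$ rule described above (added for illustration; it computes the smallest eigenvalue explicitly, which is precisely the work the modified-Cholesky variant avoids):
```python
import numpy as np

def modified_newton_direction(grad, hess, delta=1e-4):
    # Shift the Hessian by tau*I, tau = max(0, delta - lambda_min),
    # so that the shifted matrix is positive definite.
    lam_min = np.linalg.eigvalsh(hess).min()
    tau = max(0.0, delta - lam_min)
    return np.linalg.solve(hess + tau * np.eye(hess.shape[0]), -grad)
```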
Computational complexity and experiments
Bottlenecks in Newton method:
- composing and storing the Hessian
- solving the linear system
$$
f''(x_k)h = -f'(x_k)
$$
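Near the solution $f''(x_k)$ is symmetric positive definite, so this system is usually solved with a Cholesky factorization rather than a general-purpose solver. A small sketch (illustrative only, using SciPy, which is not otherwise imported in this notebook):
```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton_step_cholesky(grad, hess):
    # Solve f''(x) h = -f'(x) using a Cholesky factorization of the Hessian
    c, low = cho_factor(hess)
    return cho_solve((c, low), -grad)
```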
Test problem
Recall the problem of finding the analytic center of the inequality system $Ax \leq 1$ subject to $|x_i| \leq 1$
$$
f(x) = - \sum_{i=1}^m \log(1 - a_i^{\top}x) - \sum\limits_{i = 1}^n \log (1 - x^2_i) \to \min_x
$$
$$
f'(x) - ? \quad f''(x) - ?
$$
End of explanation
import cvxpy as cvx
x = cvx.Variable((n, 1))
obj = cvx.Minimize(cvx.sum(-cvx.log(1 - A.T * x)) -
cvx.sum(cvx.log(1 - cvx.square(x))))
prob = cvx.Problem(obj)
prob.solve(solver="SCS", verbose=True, max_iters=1000)
print("Optimal value =", prob.value)
Explanation: Exact solution with CVXPy
End of explanation
f = lambda x: -np.sum(np.log(1 - A.T.dot(x))) - np.sum(np.log(1 - x*x))
grad_f = lambda x: np.sum(A.dot(np.diagflat(1 / (1 - A.T.dot(x)))), axis=1) + 2 * x / (1 - np.power(x, 2))
hess_f = lambda x: (A.dot(np.diagflat(1 / (1 - A.T.dot(x))**2))).dot(A.T) + np.diagflat(2 * (1 + x**2) / (1 - x**2)**2)
Explanation: Auxiliary functions
End of explanation
def Newton(f, gradf, hessf, x0, epsilon, num_iter, line_search,
disp=False, callback=None, **kwargs):
x = x0.copy()
iteration = 0
opt_arg = {"f": f, "grad_f": gradf}
for key in kwargs:
opt_arg[key] = kwargs[key]
while True:
gradient = gradf(x)
hess = hessf(x)
h = np.linalg.solve(hess, -gradient)
alpha = line_search(x, h, **opt_arg)
x = x + alpha * h
if callback is not None:
callback(x)
iteration += 1
if disp:
print("Current function val =", f(x))
print("Current gradient norm = ", np.linalg.norm(gradf(x)))
if np.linalg.norm(gradf(x)) < epsilon:
break
if iteration >= num_iter:
break
res = {"x": x, "num_iter": iteration, "tol": np.linalg.norm(gradf(x))}
return res
Explanation: Implementation of Newton method
End of explanation
newton = methods.so.NewtonMethod(f, grad_f, hess_f, ss.Backtracking("Armijo", rho=0.9, beta=0.1, init_alpha=1.))
x_newton = newton.solve(x0, tol=1e-6, max_iter=50, disp=True)
gd = methods.fo.GradientDescent(f, grad_f, ss.Backtracking("Armijo", rho=0.9, beta=0.1, init_alpha=1.))
x_gd = gd.solve(x0, tol=1e-6, max_iter=50, disp=True)
%matplotlib inline
import matplotlib.pyplot as plt
if not USE_COLAB:
plt.rc("text", usetex=True)
plt.figure(figsize=(12, 8))
# Newton
plt.semilogy([np.linalg.norm(grad_f(x)) for x in newton.get_convergence()], label="$\| f'(x_k) \|^{N}_2$")
# Gradient
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label="$\| f'(x_k) \|^{G}_2$")
plt.xlabel(r"Number of iterations, $k$", fontsize=26)
plt.ylabel(r"Convergence rate", fontsize=26)
plt.xticks(fontsize = 24)
plt.yticks(fontsize = 24)
plt.legend(loc="best", fontsize=24)
Explanation: Comparison with gradient descent
End of explanation
%timeit newton.solve(x0, tol=1e-6, max_iter=50)
%timeit gd.solve(x0, tol=1e-6, max_iter=50)
Explanation: Comparison of running time
End of explanation |
6,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Similarity Authors.
Step1: Tensorflow Similarity Sampler I/O Cookbook
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: <hr>
MultiShotMemorySampler
Step3: Generating Batches
The Tensorflow Similarity memory samplers are a subclass of tf.keras.utils.Sequence, overriding the __getitem__ and __len__ methods.
Additionally, Tensorflow Similarity provides a generate_batch() method that takes a batch ID and yields a single batch.
We verify that the batch batch only conatins the classes defined in CLASS_LIST and that each class has ms_classes_per_batch * ms_examples_per_class_per_batch examples.
Step4: Sampler Sizes
MultiShotMemorySampler() provides various attributes for accessing info about the data
Step5: Accessing the Examples
Additionaly, the MultiShotMemorySampler() provides get_slice() for manually accessing examples within the Sampler.
NOTE
Step7: <hr>
SingleShotMemorySampler
Step8: Sampler Sizes
SingleShotMemorySampler() provides various attributes for accessing info about the data
Step9: Accessing the Examples
Additionaly, the SingleShotMemorySampler() provides get_slice() for manually accessing examples within the Sampler.
The method returns slice size plus the augmented examples returned by the augmenter function.
Step10: <hr>
TFDatasetMultiShotMemorySampler
Step11: Generating Batches
The Tensorflow Similarity memory samplers are a subclass of tf.keras.utils.Sequence, overriding the __getitem__ and __len__ methods.
Additionally, Tensorflow Similarity provides a generate_batch() method that takes a batch ID and yields a single batch.
We verify that the batch batch only conatins the classes defined in CLASS_LIST and that each class has tfds_classes_per_batch * tfds_examples_per_class_per_batch examples.
Step12: Sampler Sizes
TFDatasetMultiShotMemorySampler() provides various attributes for accessing info about the data
Step13: Accessing the Examples
Additionaly, the SingleShotMemorySampler() provides get_slice() for manually accessing examples within the Sampler.
The method returns slice size plus the augmented examples returned by the augmenter function. | Python Code:
# @title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Similarity Authors.
End of explanation
import os
import random
from typing import Tuple
import numpy as np
from matplotlib import pyplot as plt
# INFO messages are not printed.
# This must be run before loading other modules.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"
import tensorflow as tf
# install TF similarity if needed
try:
import tensorflow_similarity as tfsim # main package
except ModuleNotFoundError:
!pip install tensorflow_similarity
import tensorflow_similarity as tfsim
tfsim.utils.tf_cap_memory() # Avoid GPU memory blow up
Explanation: Tensorflow Similarity Sampler I/O Cookbook
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/similarity/blob/master/examples/sampler_io_cookbook.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/similarity/blob/master/examples/sampler_io_cookbook.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Tensorflow Similarity's Samplers ensure that each batch contains a target number of examples per class per batch. This ensures that the loss functions are able to construct tuples of anchor, positive, and negatives within each batch of examples.
In this notebook you will learn how to use the:
MultiShotMemorySampler() for fitting to a sequence of data, such as a dataset.
SingleShotMemorySampler() to treat each example as a seperate class and generate augmented versions within each batch.
TFDatasetMultiShotMemorySampler() to directly integrate with the Tensorflow dataset catalog.
Imports
End of explanation
num_ms_examples = 100000 # @param {type:"slider", min:1000, max:1000000}
num_ms_features = 784 # @param {type:"slider", min:10, max:1000}
num_ms_classes = 10 # @param {type:"slider", min:2, max:1000}
# We use random floats here to represent a dense feature vector
X_ms = np.random.rand(num_ms_examples, num_ms_features)
# We use random ints to represent N different classes
y_ms = np.random.randint(low=0, high=10, size=num_ms_examples)
num_known_ms_classes = 5 # @param {type:"slider", min:2, max:1000}
ms_classes_per_batch = num_known_ms_classes
ms_examples_per_class_per_batch = 2 # @param {type:"integer"}
ms_class_list = random.sample(range(num_ms_classes), k=num_known_ms_classes)
ms_sampler = tfsim.samplers.MultiShotMemorySampler(
X_ms,
y_ms,
classes_per_batch=ms_classes_per_batch,
examples_per_class_per_batch=ms_examples_per_class_per_batch,
class_list=ms_class_list,
)
Explanation: <hr>
MultiShotMemorySampler: Load Random Numpy Arrays
The following cell loads random numpy data using TensorFlow similarityMultiShotMemorySampler().
Using a sampler is required to ensure that each batch contains at least N samples for each class included in a batch.
This batch strucutre is required for the contrastive loss to properly compute positive pairwise distances.
End of explanation
X_ms_batch, y_ms_batch = ms_sampler.generate_batch(100)
print("#" * 10 + " X " + "#" * 10)
print(X_ms_batch)
print("\n" + "#" * 10 + " y " + "#" * 10)
print(y_ms_batch)
# Check that the batch size is equal to the target number of classes * target number of examples per class.
assert tf.shape(X_ms_batch)[0] == (ms_classes_per_batch * ms_examples_per_class_per_batch)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_ms_batch)[1] == (num_ms_features)
# Check that classes in the batch are from the allowed set in CLASS_LIST
assert set(tf.unique(y_ms_batch)[0].numpy()) - set(ms_class_list) == set()
# Check that we only have NUM_CLASSES_PER_BATCH
assert len(tf.unique(y_ms_batch)[0]) == ms_classes_per_batch
Explanation: Generating Batches
The Tensorflow Similarity memory samplers are a subclass of tf.keras.utils.Sequence, overriding the __getitem__ and __len__ methods.
Additionally, Tensorflow Similarity provides a generate_batch() method that takes a batch ID and yields a single batch.
We verify that the batch batch only conatins the classes defined in CLASS_LIST and that each class has ms_classes_per_batch * ms_examples_per_class_per_batch examples.
End of explanation
print(f"The sampler contains {len(ms_sampler)} steps per epoch.")
print(f"The sampler is using {ms_sampler.num_examples} examples out of the original {num_ms_examples}.")
print(f"Each examples has the following shape: {ms_sampler.example_shape}.")
Explanation: Sampler Sizes
MultiShotMemorySampler() provides various attributes for accessing info about the data:
* __len__ provides the number of steps per epoch.
* num_examples provides the total number of examples within the sampler.
* example_shape provides the shape of the examples.
The num_examples attribute represents the subset of X and y where y is in the class_list with each class limited to num_examples_per_class.
End of explanation
# Get 10 examples starting at example 200.
X_ms_slice, y_ms_slice = ms_sampler.get_slice(begin=200, size=10)
print("#" * 10 + " X " + "#" * 10)
print(X_ms_slice)
print("\n" + "#" * 10 + " y " + "#" * 10)
print(y_ms_slice)
# Check that the batch size is equal to our get_slice size.
assert tf.shape(X_ms_slice)[0] == 10
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_ms_slice)[1] == (num_ms_features)
# Check that classes in the batch are from the allowed set in CLASS_LIST
assert set(tf.unique(y_ms_slice)[0].numpy()) - set(ms_class_list) == set()
Explanation: Accessing the Examples
Additionaly, the MultiShotMemorySampler() provides get_slice() for manually accessing examples within the Sampler.
NOTE: the examples are shuffled when creating the Sampler but will yield the same examples for each call to get_slice(begin, size).
End of explanation
(aug_x, _), _ = tf.keras.datasets.mnist.load_data()
# Normalize the image data.
aug_x = tf.cast(aug_x / 255.0, dtype="float32")
aug_num_examples_per_batch = 18 # @param {type:"slider", min:18, max:512}
aug_num_augmentations_per_example = 1 # @param {type:"slider", min:1, max:3}
data_augmentation = tf.keras.Sequential(
[
tf.keras.layers.experimental.preprocessing.RandomRotation(0.12),
tf.keras.layers.experimental.preprocessing.RandomZoom(0.25),
]
)
def augmenter(
x: tfsim.types.FloatTensor, y: tfsim.types.IntTensor, examples_per_class: int, is_warmup: bool, stddev=0.025
) -> Tuple[tfsim.types.FloatTensor, tfsim.types.IntTensor]:
Image augmentation function.
Args:
X: FloatTensor representing the example features.
y: IntTensor representing the class id. In this case
the example index will be used as the class id.
examples_per_class: The number of examples per class.
Not used here.
is_warmup: If True, the training is still in a warm
up state. Not used here.
stddev: Sets the amount of gaussian noise added to
the image.
_ = examples_per_class
_ = is_warmup
aug = tf.squeeze(data_augmentation(tf.expand_dims(x, -1)))
aug = aug + tf.random.normal(tf.shape(aug), stddev=stddev)
x = tf.concat((x, aug), axis=0)
y = tf.concat((y, y), axis=0)
idxs = tf.range(start=0, limit=tf.shape(x)[0])
idxs = tf.random.shuffle(idxs)
x = tf.gather(x, idxs)
y = tf.gather(y, idxs)
return x, y
aug_sampler = tfsim.samplers.SingleShotMemorySampler(
aug_x,
augmenter=augmenter,
examples_per_batch=aug_num_examples_per_batch,
num_augmentations_per_example=aug_num_augmentations_per_example,
)
# Plot the first 36 examples
num_imgs = 36
num_row = num_col = 6
aug_batch_x, aug_batch_y = aug_sampler[0]
# Sort the class ids so we can see the original
# and augmented versions as pairs.
sorted_idx = np.argsort(aug_batch_y)
plt.figure(figsize=(10, 10))
for i in range(num_imgs):
idx = sorted_idx[i]
ax = plt.subplot(num_row, num_col, i + 1)
plt.imshow(aug_batch_x[idx])
plt.title(int(aug_batch_y[idx]))
plt.axis("off")
plt.tight_layout()
Explanation: <hr>
SingleShotMemorySampler: Augmented MNIST Examples
The following cell loads and augments MNIST examples using the SingleShotMemorySampler().
The Sampler treats each example as it's own class and adds augmented versions of each image to the batch.
This means the final batch size is the number of examples per batch * (1 + the number of augmentations).
End of explanation
print(f"The sampler contains {len(aug_sampler)} steps per epoch.")
print(f"The sampler is using {aug_sampler.num_examples} examples out of the original {len(aug_x)}.")
print(f"Each examples has the following shape: {aug_sampler.example_shape}.")
Explanation: Sampler Sizes
SingleShotMemorySampler() provides various attributes for accessing info about the data:
* __len__ provides the number of steps per epoch.
* num_examples provides the number of examples within the sampler.
* example_shape provides the shape of the examples.
The num_examples attribute represents the unaugmented examples within the sampler.
End of explanation
# Get 10 examples starting at example 200.
X_aug_slice, y_aug_slice = aug_sampler.get_slice(begin=200, size=10)
print("#" * 10 + " X " + "#" * 10)
print(tf.reshape(X_aug_slice, (10, -1)))
print("\n" + "#" * 10 + " y " + "#" * 10)
print(y_aug_slice)
# Check that the batch size is double our get_slice size (original examples + augmented examples).
assert tf.shape(X_aug_slice)[0] == 10 + 10
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_aug_slice)[1] == (28)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_aug_slice)[2] == (28)
Explanation: Accessing the Examples
Additionaly, the SingleShotMemorySampler() provides get_slice() for manually accessing examples within the Sampler.
The method returns slice size plus the augmented examples returned by the augmenter function.
End of explanation
IMG_SIZE = 300 # @param {type:"integer"}
# preprocessing function that resizes images to ensure all images are the same shape
def resize(img, label):
with tf.device("/cpu:0"):
img = tf.cast(img, dtype="int32")
img = tf.image.resize_with_pad(img, IMG_SIZE, IMG_SIZE)
return img, label
training_classes = 16 # @param {type:"slider", min:1, max:37}
tfds_examples_per_class_per_batch = 4 # @param {type:"integer"}
tfds_class_list = random.sample(range(37), k=training_classes)
tfds_classes_per_batch = max(16, training_classes)
print(f"Class IDs seen during training {tfds_class_list}\n")
# use the train split for training
print("#" * 10 + " Train Sampler " + "#" * 10)
train_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler(
"oxford_iiit_pet",
splits="train",
examples_per_class_per_batch=tfds_examples_per_class_per_batch,
classes_per_batch=tfds_classes_per_batch,
preprocess_fn=resize,
class_list=tfds_class_list,
) # We filter train data to only keep the train classes.
# use the test split for indexing and querying
print("\n" + "#" * 10 + " Test Sampler " + "#" * 10)
test_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler(
"oxford_iiit_pet", splits="test", total_examples_per_class=20, classes_per_batch=tfds_classes_per_batch, preprocess_fn=resize
)
Explanation: <hr>
TFDatasetMultiShotMemorySampler: Load data from TF Dataset
The following cell loads data directly from the TensorFlow catalog using TensorFlow similarity
TFDatasetMultiShotMemorySampler().
Using a sampler is required to ensure that each batch contains at least N samples of each class incuded in a batch. Otherwise the contrastive loss does not work properly as it can't compute positive distances.
End of explanation
X_tfds_batch, y_tfds_batch = train_ds.generate_batch(100)
print("#" * 10 + " X " + "#" * 10)
print(f"Actual Tensor Shape {X_tfds_batch.shape}")
print(tf.reshape(X_tfds_batch, (len(X_tfds_batch), -1)))
print("\n" + "#" * 10 + " y " + "#" * 10)
print(y_tfds_batch)
# Check that the batch size is equal to the target number of classes * target number of examples per class.
assert tf.shape(X_tfds_batch)[0] == (tfds_classes_per_batch * tfds_examples_per_class_per_batch)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_batch)[1] == (300)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_batch)[2] == (300)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_batch)[3] == (3)
# Check that classes in the batch are from the allowed set in CLASS_LIST
assert set(tf.unique(y_tfds_batch)[0].numpy()) - set(tfds_class_list) == set()
# Check that we only have NUM_CLASSES_PER_BATCH
assert len(tf.unique(y_tfds_batch)[0]) == tfds_classes_per_batch
Explanation: Generating Batches
The Tensorflow Similarity memory samplers are a subclass of tf.keras.utils.Sequence, overriding the __getitem__ and __len__ methods.
Additionally, Tensorflow Similarity provides a generate_batch() method that takes a batch ID and yields a single batch.
We verify that the batch batch only conatins the classes defined in CLASS_LIST and that each class has tfds_classes_per_batch * tfds_examples_per_class_per_batch examples.
End of explanation
print(f"The Train sampler contains {len(train_ds)} steps per epoch.")
print(f"The Train sampler is using {train_ds.num_examples} examples.")
print(f"Each examples has the following shape: {train_ds.example_shape}.")
print(f"The Test sampler contains {len(test_ds)} steps per epoch.")
print(f"The Test sampler is using {test_ds.num_examples} examples.")
print(f"Each examples has the following shape: {test_ds.example_shape}.")
Explanation: Sampler Sizes
TFDatasetMultiShotMemorySampler() provides various attributes for accessing info about the data:
* __len__ provides the number of steps per epoch.
* num_examples provides the number of examples within the sampler.
* example_shape provides the shape of the examples.
End of explanation
# Get 10 examples starting at example 200.
X_tfds_slice, y_tfds_slice = train_ds.get_slice(begin=200, size=10)
print("#" * 10 + " X " + "#" * 10)
print(f"Actual Tensor Shape {X_tfds_slice.shape}")
print(tf.reshape(X_tfds_slice, (len(X_tfds_slice), -1)))
print("\n" + "#" * 10 + " y " + "#" * 10)
print(y_tfds_slice)
# Check that the batch size.
assert tf.shape(X_tfds_slice)[0] == 10
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_slice)[1] == (300)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_slice)[2] == (300)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_slice)[3] == (3)
# Get 10 examples starting at example 200.
X_tfds_slice, y_tfds_slice = test_ds.get_slice(begin=200, size=10)
print("#" * 10 + " X " + "#" * 10)
print(f"Actual Tensor Shape {X_tfds_slice.shape}")
print(tf.reshape(X_tfds_slice, (len(X_tfds_slice), -1)))
print("\n" + "#" * 10 + " y " + "#" * 10)
print(y_tfds_slice)
# Check that the batch size.
assert tf.shape(X_tfds_slice)[0] == 10
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_slice)[1] == (300)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_slice)[2] == (300)
# Check that the number of columns matches the number of expected features.
assert tf.shape(X_tfds_slice)[3] == (3)
Explanation: Accessing the Examples
Additionaly, the SingleShotMemorySampler() provides get_slice() for manually accessing examples within the Sampler.
The method returns slice size plus the augmented examples returned by the augmenter function.
End of explanation |
6,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Pandas
Step1: <a id=wants></a>
Example
Step2: Reminders
What kind of object does each of the following produce?
Step3: Wants
We might imagine doing several different things with this data
Step4: Comments. The problem here is that the columns include both the numbers (which we want to plot) and some descriptive information (which we don't).
<a id='index'></a>
Setting and resetting the index
We start by setting and resetting the index. That may sound like a step backwards -- haven't we done this already? -- but it reminds us of some things that will be handy later.
Take the dataframe dd. What would we like in the index? Evenutally we'd like the dates llke [2011, 2012, 2013], but right now the row labels are more naturally the variable or country. Here are some varriants.
Setting the index
Step5: Exercise. Set Variable as the index.
Comment. Note that the new index brought its name along
Step6: Let's take a closer look at the index
Step7: That's a lot to process, so we break it into pieces.
ddi.index.names contains a list of level names. (Remind yourself that lists are ordered, so this tracks levels.)
ddi.index.levels contains the values in each level.
Here's what they like like here
Step8: Knowing the order of the index components and being able to inspect their values and names is fundamental to working with a multi-index.
Exercise
Step9: Comment. By default, reset_index pushes one or more index levels into columns. If we want to discard that level of the index altogether, we use the parameter drop=True.
Step10: Exercise. For the dataframe ddi do the following in separate code cells
Step11: Comment. We see here that the multi-index for the rows has been turned into a multi-index for the columns. Works the same way.
The only problem here is that the column labels are more complicated than we might want. Here, for example, is what we get with the plot method. As usual, .plot() plots all the columns of the dataframe, but here that means we're mixing variables. And the legend contains all the levels of the column labels.
Step12: Referring to variables with a multi-index
Can we refer to variables in the same way? Sort of, as long as we refer to the top level of the column index. It gives us a dataframe that's a subset of the original one.
Let's try each of these
Step13: What's going on? The theme is that we can reference the top level, which in ddi is the Variable. If we try to access a lower level, it bombs.
Exercise. With the dataframe ddt
Step14: Swapping levels
Since variables refer to the first level of the column index, it's not clear how we would group data by country. Suppose, for example, we wanted to plot Debt and Surplus for a specific country. What would we do?
One way to do that is to make the country the top level with the swaplevel method. Note the axis parameter. With axis=1 we swap column levels, with axis=0 (the default) we swap row levels.
Step15: Exercise. Use the dataframe ddts to plot Debt and Surplus across time for Argentina. Hint
Step16: Exercise. Use a combination of xs and standard slicing with [...] to extract the variable Debt for Greece.
SOL
<!--
ddt.xs("Greece", axis=1, level="Country")["Debt"]
-->
Exercise. Use the dataframe ddt -- and the xs method -- to plot Debt and Surplus across time for Argentina.
SOL
<!--
fig, ax = plt.subplots()
ddt.xs('Argentina', axis=1, level='Country').plot(ax=ax)
ax.legend(['Surplus', 'Debt'])
-->
<a id='stack'></a>
Stacking and unstacking
The set_index and reset_index methods work on the row labels -- the index. They move columns to the index and the reverse. The stack and unstack methods move index levels to and from column levels
Step17: Single level index
Step18: Multi-index
Step19: Let's get a smaller subset of this data to work with so we can see things a bit more clearly
Step20: Let's remind ourselves what we want. We want to
move the column index (Year) into the row index
move the Variable and ISO levels the other way, into the column labels.
The first one uses stack, the second one unstack.
Stacking
We stack our data, one variable on top of another, with a multi-index to keep track of what's what. In simple terms, we change the data from a wide format to a long format. The stack method takes the inner most column level and makes it the lowest row level.
Step21: Unstacking
Stacking moves columns into the index, "stacking" the data up into longer columns. Unstacking does the reverse, taking levels of the row index and turning them into column labels. Roughly speaking we're rotating or pivoting the data.
Step22: Exercise. Run the code below and explain what each line of code does.
Step23: Exercise (challenging). Take the unstacked dataframe dds. Use some combination of stack, unstack, and plot to plot the variable Surplus against Year for all three countries. Challenging mostly because you need to work out the steps by yourself.
SOL
<!--
ddse = dds.stack().unstack(level=['Variable', 'ISO'])
ddse['Surplus'].plot()
-->
<a id='pivot'></a>
Pivoting
The pivot method
Step24: Pivoting the data
Let's think specifically about what we want. We want to graph Emp against fsize for (say) 2013. This calls for
Step25: Comment. Note that all the parameters here are columns. That's not a choice, it's the way the the pivot method is written.
We do a plot for fun
Step26: <a id='review'></a>
Review
We return to the OECD's healthcare data, specifically a subset of their table on the number of doctors per one thousand population. This loads and cleans the data
Step27: Use this data to | Python Code:
%matplotlib inline
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
Explanation: Advanced Pandas: Shaping data
The second in a series of notebooks that describe Pandas' powerful data management tools. This one covers shaping methods: switching rows and columns, pivoting, and stacking. We'll see that this is all about the indexes: the row and column labels.
Outline:
Example: WEO debt and deficits. Something to work with.
Indexing. Setting and resetting the index. Multi-indexes.
Switching rows and columns. Transpose. Referring to variables with multi-indexes.
Stack and unstack. Managing column structure and labels.
Pivot. Unstack shortcut if we start with wide data.
Review. Apply what we've learned.
More data management topics coming.
Note: requires internet access to run.
<!--
internal links http://sebastianraschka.com/Articles/2014_ipython_internal_links.html
-->
This Jupyter notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp.
tl;dr
Let df be a DataFrame
We use df.set_index to move columns into the index of df
We use df.reset_index to move one or more levels of the index back to columns. If we set drop=True, the requested index levels are simply thrown away instead of made into columns
We use df.stack to move column index levels into the row index
We use df.unstack to move row index levels into the colunm index (Helpful mnemonic: unstack moves index levels up)
<a id=prelims></a>
Preliminaries
Import packages, etc
End of explanation
url = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls'
# (1) define the column indices
col_indices = [1, 2, 3, 4, 6] + list(range(9, 46))
# (2) download the dataset
weo = pd.read_csv(url,
sep = '\t',
#index_col='ISO',
usecols=col_indices,
skipfooter=1, engine='python',
na_values=['n/a', '--'],
thousands =',',encoding='windows-1252')
# (3) turn the types of year variables into float
years = [str(year) for year in range(1980, 2017)]
weo[years] = weo[years].astype(float)
print('Variable dtypes:\n', weo.dtypes, sep='')
# create debt and deficits dataframe: two variables and three countries
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['ARG', 'DEU', 'GRC']
dd = weo[weo['WEO Subject Code'].isin(variables) & weo['ISO'].isin(countries)]
# change column labels to something more intuitive
dd = dd.rename(columns={'WEO Subject Code': 'Variable',
'Subject Descriptor': 'Description'})
# rename variables (i.e. values of observables)
dd['Variable'] = dd['Variable'].replace(to_replace=['GGXWDG_NGDP', 'GGXCNL_NGDP'], value=['Debt', 'Surplus'])
dd
Explanation: <a id=wants></a>
Example: WEO debt and deficits
We spend most of our time on one of the examples from the previous notebook. The problem in this example is that variables run across rows, rather than down columns. Our want is to flip some of the rows and columns so that we can plot the data against time. The question is how.
We use a small subset of the IMF's World Economic Outlook database that contains two variables and three countries.
End of explanation
dd.index
dd.columns
dd['ISO']
dd[['ISO', 'Variable']]
dd[dd['ISO'] == 'ARG']
Explanation: Reminders
What kind of object does each of the following produce?
End of explanation
dd.T
Explanation: Wants
We might imagine doing several different things with this data:
Plot a specific variable (debt or surplus) for a given date.
Time series plots for a specific country.
Time series plots for a specific variable.
Depending on which we want, we might organize the data differently. We'll focus on the last two.
Here's a brute force approach to the problem: simply transpose the data. This is where that leads:
End of explanation
dd.set_index('Country')
# we can do the same thing with a list, which will be meaningful soon...
dd.set_index(['Country'])
Explanation: Comments. The problem here is that the columns include both the numbers (which we want to plot) and some descriptive information (which we don't).
<a id='index'></a>
Setting and resetting the index
We start by setting and resetting the index. That may sound like a step backwards -- haven't we done this already? -- but it reminds us of some things that will be handy later.
Take the dataframe dd. What would we like in the index? Evenutally we'd like the dates llke [2011, 2012, 2013], but right now the row labels are more naturally the variable or country. Here are some varriants.
Setting the index
End of explanation
ddi = dd.set_index(['Variable', 'Country', 'ISO', 'Description', 'Units'])
ddi
Explanation: Exercise. Set Variable as the index.
Comment. Note that the new index brought its name along: Country in the two examples, Variable in the exercise. That's incredibly useful because we can refer to index levels by name. If we happen to have an index without a name, we can set it with
python
df.index.name = 'Whatever name we like'
Multi-indexes
We can put more than one variable in an index, which gives us a multi-index. This is sometimes called a hierarchical index because the levels of the index (as they're called) are ordered.
Multi-indexes are more common than you might think. One reason is that data itself is often multi-dimensional. A typical spreadsheet has two dimensions: the variable and the observation. The WEO data is naturally three dimensional: the variable, the year, and the country. (Think about that for a minute, it's deeper than it sounds.)
The problem we're having is fitting this nicely into two dimensions. A multi-index allows us to manage that. A two-dimensional index would work here -- the country and the variable code -- but right now we have some redundancy.
Example. We push all the descriptive, non-numerical columns into the index, leaving the dataframe itself with only numbers, which seems like a step in thee right direction.
End of explanation
ddi.index
Explanation: Let's take a closer look at the index
End of explanation
# Chase and Spencer like double quotes
print("The level names are:\n", ddi.index.names, "\n", sep="")
print("The levels (aka level values) are:\n", ddi.index.levels, sep="")
Explanation: That's a lot to process, so we break it into pieces.
ddi.index.names contains a list of level names. (Remind yourself that lists are ordered, so this tracks levels.)
ddi.index.levels contains the values in each level.
Here's what they like like here:
End of explanation
ddi.head(2)
ddi.reset_index()
# or we can reset the index by level
ddi.reset_index(level=1).head(2)
# or by name
ddi.reset_index(level='Units').head(2)
# or do more than one at a time
ddi.reset_index(level=[1, 3]).head(2)
Explanation: Knowing the order of the index components and being able to inspect their values and names is fundamental to working with a multi-index.
Exercise: What would happen if we had switched the order of the strings in the list when we called dd.set_index? Try it with this list to find out: ['ISO', 'Country', 'Variable', 'Description', 'Units']
Resetting the index
We've seen that set_index pushes columns into the index. Here we see that reset_index does the reverse: it pushes components of the index back to the columns.
Example.
End of explanation
ddi.reset_index(level=[1, 3], drop=True).head(2)
Explanation: Comment. By default, reset_index pushes one or more index levels into columns. If we want to discard that level of the index altogether, we use the parameter drop=True.
End of explanation
ddt = ddi.T
ddt
Explanation: Exercise. For the dataframe ddi do the following in separate code cells:
Use the reset_index method to move the Units level of the index to a column of the dataframe.
Use the drop parameter of reset_index to delete Units from the dataframe.
Switching rows and columns
If we take the dataframe ddi, we see that the everything's been put into the index but the data itself. Perhaps we can get what we want if we just flip the rows and columns. Roughly speaking, we refer to this as pivoting.
First look at switching rows and columns
The simplest way to flip rows and columns is to use the T or transpose property. When we do that, we end up with a lot of stuff in the column labels, as the multi-index for the rows gets rotated into the columns. Other than that, we're good. We can even do a plot. The only problem is all the stuff we've pushed into the column labels -- it's kind of a mess.
End of explanation
ddt.plot()
Explanation: Comment. We see here that the multi-index for the rows has been turned into a multi-index for the columns. Works the same way.
The only problem here is that the column labels are more complicated than we might want. Here, for example, is what we get with the plot method. As usual, .plot() plots all the columns of the dataframe, but here that means we're mixing variables. And the legend contains all the levels of the column labels.
End of explanation
# indexing by variable
debt = ddt['Debt']
debt
ddt['Debt']['Argentina']
ddt['Debt', 'Argentina']
#ddt['ARG']
Explanation: Referring to variables with a multi-index
Can we refer to variables in the same way? Sort of, as long as we refer to the top level of the column index. It gives us a dataframe that's a subset of the original one.
Let's try each of these:
ddt['Debt']
ddt['Debt']['Argentina']
ddt['Debt', 'Argentina']
ddt['ARG']
What do you see?
End of explanation
fig, ax = plt.subplots()
ddt['Debt'].plot(ax=ax)
ax.legend(['ARG', 'DEU', 'GRE'], loc='best')
#ax.axhline(100, color='k', linestyle='--', alpha=.5)
Explanation: What's going on? The theme is that we can reference the top level, which in ddi is the Variable. If we try to access a lower level, it bombs.
Exercise. With the dataframe ddt:
What type of object is ddt["Debt"]?
Construct a line plot of Debt over time with one line for each country.
SOL
<!--
ddt['Debt'].dtypes
-->
SOL
<!--
ddt['Debt'].plot()
-->
Example. Let's do this together. How would we fix up the legend? What approaches cross your mind? (No code, just the general approach.)
End of explanation
ddts = ddt.swaplevel(0, 1, axis=1)
ddts
Explanation: Swapping levels
Since variables refer to the first level of the column index, it's not clear how we would group data by country. Suppose, for example, we wanted to plot Debt and Surplus for a specific country. What would we do?
One way to do that is to make the country the top level with the swaplevel method. Note the axis parameter. With axis=1 we swap column levels, with axis=0 (the default) we swap row levels.
End of explanation
# ddt.xs?
ddt.xs("Argentina", axis=1, level="Country")
ddt.xs("Argentina", axis=1, level="Country")["Debt"]
Explanation: Exercise. Use the dataframe ddts to plot Debt and Surplus across time for Argentina. Hint: In the plot method, set subplots=True so that each variable is in a separate subplot.
SOL
<!--
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
ddts['Argentina']['Surplus'].plot(ax=ax[0])
ax[0].legend(['Surplus'])
ddts['Argentina']['Debt'].plot(ax=ax[1])
ax[1].legend(['Debt'])
ax[0].axhline(0, color='k')
ax[0].set_ylim([-10, 10])
-->
The xs method
Another approach to extracting data that cuts across levels of the row or column index: the xs method. This is recent addition to Pandas and an extremely good method once you get the hang of it.
The basic syntax is
python
df.xs(item, axis=X, level=N)
where N is the name or number of an index level and X describes if we are extracting from the index or column names. Setting X=0 (so axis=0) will slice up the data along the index, X=1 extracts data for column labels.
Here's how we could use xs to get the Argentina data without swapping the level of the column labels
End of explanation
ddi.stack?
Explanation: Exercise. Use a combination of xs and standard slicing with [...] to extract the variable Debt for Greece.
SOL
<!--
ddt.xs("Greece", axis=1, level="Country")["Debt"]
-->
Exercise. Use the dataframe ddt -- and the xs method -- to plot Debt and Surplus across time for Argentina.
SOL
<!--
fig, ax = plt.subplots()
ddt.xs('Argentina', axis=1, level='Country').plot(ax=ax)
ax.legend(['Surplus', 'Debt'])
-->
<a id='stack'></a>
Stacking and unstacking
The set_index and reset_index methods work on the row labels -- the index. They move columns to the index and the reverse. The stack and unstack methods move index levels to and from column levels:
stack moves the "inner most" (closest to the data when printed) column label into a row label. This creates a long dataframe.
unstack does the reverse, it moves the inner most level of the index up to become the inner most column label. This creates a wide dataframe.
We use both to shape (or reshape) our data. We use set_index to push things into the index. And then use reset_index to push some of them back to the columns. That gives us pretty fine-grainded control over the shape of our data. Intuitively
stacking (vertically): wide table $\rightarrow$ long table
unstacking: long table $\rightarrow$ wide table
End of explanation
# example from docstring
dic = {'a': [1, 3], 'b': [2, 4]}
s = pd.DataFrame(data=dic, index=['one', 'two'])
print(s)
s.stack()
Explanation: Single level index
End of explanation
ddi.index
ddi.unstack() # Units variable has only one value, so this doesn't do much
ddi.unstack(level='ISO')
Explanation: Multi-index
End of explanation
# drop some of the index levels (think s for small)
dds = ddi.reset_index(level=[1, 3, 4], drop=True)
dds
# give a name to the column labels
dds.columns.name = 'Year'
dds
Explanation: Let's get a smaller subset of this data to work with so we can see things a bit more clearly
End of explanation
# convert to long format. Notice printing is different... what `type` is ds?
ds = dds.stack()
ds
# same thing with explicit reference to column name
dds.stack(level='Year').head(8)
# or with level number
dds.stack(level=0).head(8)
Explanation: Let's remind ourselves what we want. We want to
move the column index (Year) into the row index
move the Variable and ISO levels the other way, into the column labels.
The first one uses stack, the second one unstack.
Stacking
We stack our data, one variable on top of another, with a multi-index to keep track of what's what. In simple terms, we change the data from a wide format to a long format. The stack method takes the inner most column level and makes it the lowest row level.
End of explanation
# now go long to wide
ds.unstack() # default is lowest value wich is year now
# different level
ds.unstack(level='Variable')
# or two at once
ds.unstack(level=['Variable', 'ISO'])
Explanation: Unstacking
Stacking moves columns into the index, "stacking" the data up into longer columns. Unstacking does the reverse, taking levels of the row index and turning them into column labels. Roughly speaking we're rotating or pivoting the data.
End of explanation
# stacked dataframe
ds.head(8)
du1 = ds.unstack()
du2 = du1.unstack()
Explanation: Exercise. Run the code below and explain what each line of code does.
End of explanation
url = 'http://www2.census.gov/ces/bds/firm/bds_f_sz_release.csv'
raw = pd.read_csv(url)
raw.head()
# Four size categories
sizes = ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49']
# only defined size categories and only period since 2012
restricted_sample = (raw['year2']>=2012) & raw['fsize'].isin(sizes)
# don't need all variables
var_names = ['year2', 'fsize', 'Firms', 'Emp']
bds = raw[restricted_sample][var_names]
bds
Explanation: Exercise (challenging). Take the unstacked dataframe dds. Use some combination of stack, unstack, and plot to plot the variable Surplus against Year for all three countries. Challenging mostly because you need to work out the steps by yourself.
SOL
<!--
ddse = dds.stack().unstack(level=['Variable', 'ISO'])
ddse['Surplus'].plot()
-->
<a id='pivot'></a>
Pivoting
The pivot method: a short cut to some kinds of unstacking. In rough terms, it takes a wide dataframe and constructs a long one. The inputs are columns, not index levels.
Example: BDS data
The Census's Business Dynamnics Statistics collects annual information about the hiring decisions of firms by size and age. This table list the number of firms and total employment by employment size categories: 1 to 4 employees, 5 to 9, and so on.
Apply want operator. Our want is to plot total employment (the variable Emp) against size (variable fsize). Both are columns in the original data.
Here we construct a subset of the data, where we look at two years rather than the whole 1976-2013 period.
End of explanation
bdsp = bds.pivot(index='fsize', columns='year2', values='Emp')
# divide by a million so bars aren't too long
bdsp = bdsp/10**6
bdsp
Explanation: Pivoting the data
Let's think specifically about what we want. We want to graph Emp against fsize for (say) 2013. This calls for:
The index should be the size categories fsize.
The column labels should be the entries of year2, namely 2012, 2013 and `2014.
The data should come from the variable Emp.
These inputs translate directly into the following pivot method:
End of explanation
# plot 2013 as bar chart
fig, ax = plt.subplots()
bdsp[2013].plot(ax=ax, kind='barh')
ax.set_ylabel('')
ax.set_xlabel('Number of Employees (millions)')
Explanation: Comment. Note that all the parameters here are columns. That's not a choice, it's the way the the pivot method is written.
We do a plot for fun:
End of explanation
url1 = 'http://www.oecd.org/health/health-systems/'
url2 = 'OECD-Health-Statistics-2017-Frequently-Requested-Data.xls'
docs = pd.read_excel(url1+url2,
skiprows=3,
usecols=[0, 51, 52, 53, 54, 55, 57],
sheetname='Physicians',
na_values=['..'],
skip_footer=21)
# rename country variable
names = list(docs)
docs = docs.rename(columns={names[0]: 'Country'})
# strip footnote numbers from country names
docs['Country'] = docs['Country'].str.rsplit(n=1).str.get(0)
docs = docs.head()
docs
Explanation: <a id='review'></a>
Review
We return to the OECD's healthcare data, specifically a subset of their table on the number of doctors per one thousand population. This loads and cleans the data:
End of explanation
#
Explanation: Use this data to:
Set the index as Country.
Construct a horizontal bar chart of the number of doctors in each country in "2013 (or nearest year)".
Apply the drop method to docs to create a dataframe new that's missing the last column.
Challenging. Use stack and unstack to "pivot" the data so that columns are labeled by country names and rows are labeled by year. This is challenging because we have left out the intermediate steps.
Plot the number of doctors over time in each country as a line in the same plot.
Comment. In the last plot, the x axis labels are non-intuitive. Ignore that.
Resources
Far and away the best material on this subject is Brandon Rhodes' 2015 Pycon presentation. 2 hours and 25 minutes and worth every second.
Video: https://youtu.be/5JnMutdy6Fw
Materials: https://github.com/brandon-rhodes/pycon-pandas-tutorial
Outline: https://github.com/brandon-rhodes/pycon-pandas-tutorial/blob/master/script.txt
End of explanation |
6,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Goal
This tutorial aims to show how RTApp performance metrics are computed
and reported by the perf analysis module provided by LISA.
Step1: Collected results
Step2: Trace inspection
Step3: RTApp task performance plots | Python Code:
import logging
reload(logging)
logging.basicConfig(
format='%(asctime)-9s %(levelname)-8s: %(message)s',
datefmt='%I:%M:%S')
# Enable logging at INFO level
logging.getLogger().setLevel(logging.INFO)
# Execute this cell to report devlib debugging information
logging.getLogger('ssh').setLevel(logging.DEBUG)
# Generate plots inline
%pylab inline
import json
import os
Explanation: Tutorial Goal
This tutorial aims to show how RTApp performance metrics are computed
and reported by the perf analysis module provided by LISA.
End of explanation
# Let's use an example trace
res_dir = './example_rtapp'
trace_file = os.path.join(res_dir, 'trace.dat')
platform_file = os.path.join(res_dir, 'platform.json')
!tree {res_dir}
# Inspect the JSON file used to run the application
with open('{}/simple_00.json'.format(res_dir), 'r') as fh:
rtapp_json = json.load(fh, )
logging.info('Generated RTApp JSON file:')
print json.dumps(rtapp_json, indent=4, sort_keys=True)
Explanation: Collected results
End of explanation
# Suport for FTrace events parsing and visualization
import trappy
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(res_dir)
Explanation: Trace inspection
End of explanation
# Support for performance analysis of RTApp workloads
from perf_analysis import PerfAnalysis
# Parse the RT-App generate log files to compute performance metrics
pa = PerfAnalysis(res_dir)
# For each task which has generated a logfile, plot its performance metrics
for task in pa.tasks():
pa.plotPerf(task, "Performance plots for task [{}] ".format(task))
Explanation: RTApp task performance plots
End of explanation |
6,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
written by Gene Kogan
Now we will introduce the task of linear regression, the simplest type of machine learning problem. The goal of linear regression is to fit a line to a set of points. Consider the following dataset
Step1: We have a series of {x,y} pairs. We can plot them with a scatterplot.
Step2: The goal of linear regression is to find a line, $y = mx + b$ which fits the data points as well as possible.
What does it mean for it to fit the points "as well as possible"? Let's try three random lines and compare them. We will define the following three functions as candidates
Step3: Let's plot all three of these functions and see how well each of them fit the points in our dataset.
Step4: Intuitively, it looks like $f1$ and $f3$ come closest to a good fit, with $f3$ looking somewhat better than $f1$, and $f2$ showing the worst fit. But how do we formally express how good a fit is? We define an "error" or "cost" function, which expresses the total error between the points predicted by the line, and those of the actual dataset.
One very popular measure of this called the "sum squared error," which we will denote as $J$, which is the sum of the square differences between the data points and the line.
$$ J = \sum{(y_i - f(x_i))^2} $$
Intuitively, for each pair of points $(x_i, y_i)$, the quantity $y_i - f(x_i)$ is the difference between the actual point $y_i$ and the y-value predicted by the line at $x_i$. Then we square them (to penalize large distances more) and sum them together.
For $f1$, we plot the error bars, $y_i - f_1(x_i)$ with red dashed lines in the next cell.
Step5: Another name for sum squared error is the "quadratic cost." The square root of the sum squared error, $ \sqrt {\sum{(y_i - f(x_i))^2}} $ is the famous "distance formula" or "euclidean distance." Dividing by the number of elements $n$, we get the mean squared error, $\frac{1}{n} \sum{(y_i - f(x_i))^2}$, which is the average of the squared differences. All of these cost functions are closely related and are often used interchangeably.
For convenience, sum squared error is very often multiplied by $0.5$. The reason for this will be clear later in this notebook. We redefine the sum squared error as
Step6: As we expected, the third function has the lowest cost for this dataset, and the second has the highest.
What is a good method for finding the optimal $m$ and $b$ to get the lowest error rate? The simplest way would be the brute force method
Step7: Our loss surface looks a bit like an elongated bowl. Because our goal is to find the parameters $m$ and $b$ which give us the lowest possible error, this translates to finding the point at the bottom of that bowl. By eye, it looks like it's roughly around $m=0, b=2$.
Gradient descent
Let's now define a better method for actually finding this point. The method we are going to introduce is called "gradient descent." Note, there is a much better way to do linear regression than to use gradient descent, but we will use gradient descent anyway, because later on, when we introduce neural networks, we shall see that gradient descent is the best way by which to find the optimal parameters for them. So we introduce gradient descent in the context of a simpler problem first.
For a more thorough introduction, gradient descent is discussed in more detail in this chapter of the book.
The basic idea behind gradient descent is as follows. Start with a random guess about the parameters, and then calculate the gradient of the loss function with respect to the parameters. Recall from the previous guide that the gradient of a function is the vector containing each of the partial derivatives of its variables, i.e.
$$
\nabla f(X) = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, ..., \frac{\partial f}{\partial x_n} \right]
$$
Our loss function $J$ has two parameters
Step8: We can plot the line of best fit | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# x, y
data = np.array([
[2.4, 1.7],
[2.8, 1.85],
[3.2, 1.79],
[3.6, 1.95],
[4.0, 2.1],
[4.2, 2.0],
[5.0, 2.7]
])
Explanation: Linear Regression
written by Gene Kogan
Now we will introduce the task of linear regression, the simplest type of machine learning problem. The goal of linear regression is to fit a line to a set of points. Consider the following dataset:
End of explanation
x, y = data[:,0], data[:,1]
plt.figure(figsize=(4, 3))
plt.scatter(x, y)
Explanation: We have a series of {x,y} pairs. We can plot them with a scatterplot.
End of explanation
def f1(x):
return 0.92 * x - 1.0
def f2(x):
return -0.21 * x + 3.4
def f3(x):
return 0.52 * x + 0.1
# try some examples
print("f1(-1.0) = %0.2f" % f1(-1))
print("f2( 0.0) = %0.2f" % f2(0))
print("f3( 2.0) = %0.2f" % f3(2))
Explanation: The goal of linear regression is to find a line, $y = mx + b$ which fits the data points as well as possible.
What does it mean for it to fit the points "as well as possible"? Let's try three random lines and compare them. We will define the following three functions as candidates:
$$f1(x) = 0.92x-1.0$$
$$f2(x) = -0.21x+3.4$$
$$f3(x) = 0.52x+0.1$$
End of explanation
min_x, max_x = min(x), max(x)
fig = plt.figure(figsize=(10,3))
fig.add_subplot(131)
plt.scatter(x, y)
plt.plot([min_x, max_x], [f1(min_x), f1(max_x)], 'k-')
plt.title("f1")
fig.add_subplot(132)
plt.scatter(x, y)
plt.plot([min_x, max_x], [f2(min_x), f2(max_x)], 'k-')
plt.title("f2")
fig.add_subplot(133)
plt.scatter(x, y)
plt.plot([min_x, max_x], [f3(min_x), f3(max_x)], 'k-')
plt.title("f3")
Explanation: Let's plot all three of these functions and see how well each of them fit the points in our dataset.
End of explanation
min_x, max_x = min(x), max(x)
fig = plt.figure(figsize=(4,3))
plt.scatter(x, y) # original data points
plt.plot([min_x, max_x], [f1(min_x), f1(max_x)], 'k-') # line of f1
plt.scatter(x, f1(x), color='black') # points predicted by f1
for x_, y_ in zip(x, y):
plt.plot([x_, x_], [y_, f1(x_)], '--', c='red') # error bars
plt.title("error bars: $y_i-f_1(x_i)$")
Explanation: Intuitively, it looks like $f1$ and $f3$ come closest to a good fit, with $f3$ looking somewhat better than $f1$, and $f2$ showing the worst fit. But how do we formally express how good a fit is? We define an "error" or "cost" function, which expresses the total error between the points predicted by the line, and those of the actual dataset.
One very popular measure of this, called the "sum squared error" and denoted $J$, is the sum of the squared differences between the data points and the line.
$$ J = \sum{(y_i - f(x_i))^2} $$
Intuitively, for each pair of points $(x_i, y_i)$, the quantity $y_i - f(x_i)$ is the difference between the actual point $y_i$ and the y-value predicted by the line at $x_i$. Then we square them (to penalize large distances more) and sum them together.
For $f1$, we plot the error bars, $y_i - f_1(x_i)$ with red dashed lines in the next cell.
End of explanation
# sum squared error
def cost(y_pred, y_actual):
return 0.5 * np.sum((y_actual-y_pred)**2)
x, y = data[:,0], data[:,1]
J1 = cost(f1(x), y)
J2 = cost(f2(x), y)
J3 = cost(f3(x), y)
print("J1=%0.2f, J2=%0.2f, J3=%0.2f" % (J1, J2, J3))
Explanation: Another name for sum squared error is the "quadratic cost." The square root of the sum squared error, $ \sqrt {\sum{(y_i - f(x_i))^2}} $ is the famous "distance formula" or "euclidean distance." Dividing by the number of elements $n$, we get the mean squared error, $\frac{1}{n} \sum{(y_i - f(x_i))^2}$, which is the average of the squared differences. All of these cost functions are closely related and are often used interchangeably.
For convenience, sum squared error is very often multiplied by $0.5$. The reason for this will be clear later in this notebook. We redefine the sum squared error as:
$$ J = \frac{1}{2} \sum{(y_i - f(x_i))^2} $$
Since the function $f(x_i) = m x_i + b$, we can substitute that into the cost, and get:
$$ J = \frac{1}{2} \sum{(y_i - (mx_i + b))^2} $$
End of explanation
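As a small aside, the sketch below (assuming the x, y, f1 and numpy names already defined in earlier cells) computes the three closely related error measures mentioned here — sum squared error, Euclidean distance and mean squared error — for the same predictions, to make their relationship concrete.
# sketch: the three related error measures for f1 (x, y, f1, np assumed from earlier cells)
residuals = y - f1(x)             # y_i - f(x_i) for every point
sse = np.sum(residuals ** 2)      # sum squared error
euclid = np.sqrt(sse)             # square root of SSE: the Euclidean distance
mse = np.mean(residuals ** 2)     # mean squared error
print("SSE=%0.3f, Euclidean=%0.3f, MSE=%0.3f" % (sse, euclid, mse))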
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) is removed in newer matplotlib versions
# check all combinations of m between [-2, 4] and b between [-6, 8], to precision of 0.1
M = np.arange(-2, 4, 0.1)
B = np.arange(-6, 8, 0.1)
# get MSE at every combination
J = np.zeros((len(M), len(B)))
for i, m_ in enumerate(M):
for j, b_ in enumerate(B):
J[i][j] = cost(m_*x+b_, y)
# plot loss surface
B, M = np.meshgrid(B, M)
ax.plot_surface(B, M, J, rstride=1, cstride=1, cmap=plt.cm.coolwarm, linewidth=0, antialiased=False)
plt.title("cost for different m, b")
plt.xlabel("b")
plt.ylabel("m")
Explanation: As we expected, the third function has the lowest cost for this dataset, and the second has the highest.
What is a good method for finding the optimal $m$ and $b$ to get the lowest error rate? The simplest way would be the brute force method: have the computer make millions of guesses and keep the one that happens to have the lowest error. Computers are pretty fast, so why not? For our simple problem with two parameters, this would work fine. But in real-world problems, we often have dozens, hundreds, or even millions of parameters which have to be optimized at the same time. Making guesses does not scale to a large number of dimensions; no computer is fast enough to try enough guesses to get a good solution in a reasonable amount of time.
So we need a more formal method for this. We can get an idea of how we might do this by first observing the loss surface. We'll plot the MSE of every combination of $m$ and $b$ within some range and look at it first. Note: recall that for this toy problem, calculating the cost for all combinations of our two parameters is easy to do because there's a small number of combinations. In problems where we have thousands or more parameters, it will be infeasible to do this practically because there will be too many parameter combinations to try (which is the reason we need a better method than brute force guessing to begin with). For 2 parameters, we observe the loss surface just for demonstration purposes.
End of explanation
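As a quick sanity check on reading the surface by eye, the sketch below locates the lowest-cost cell of the brute-force grid (it assumes the J array and the meshgridded M and B arrays from the cell above).
# sketch: index of the lowest cost in the brute-force grid (J, M, B assumed from above)
i, j = np.unravel_index(np.argmin(J), J.shape)
print("lowest grid cost %0.3f at m=%0.1f, b=%0.1f" % (J[i, j], M[i, j], B[i, j]))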
import random
# get our data
x, y = data[:,0], data[:,1]
# it is a good idea to normalize the data
x = x / np.amax(x, axis=0)
y = y / np.amax(y, axis=0)
# choose a random initial m, b
m, b = random.random(), random.random()  # e.g. fixed starting values such as 0.8, -0.5 would also work
def F(x, m, b):
return m * x + b
# what is our error?
y_pred = F(x, m, b)
init_cost = cost(y_pred, y)
print("initial parameters: m=%0.3f, b=%0.3f"%(m, b))
print("initial cost = %0.3f" % init_cost)
# implement partial derivatives of our parameters
def dJdm(x, y, m, b):
return -np.dot(x, y - F(x, m, b))
def dJdb(x, y, m, b):
return -np.sum(y - F(x, m, b))
# choose the alpha parameter and number of iterations
alpha = 0.01
n_iters = 2000
# keep track of error
errors = []
for i in range(n_iters):
m = m - alpha * dJdm(x, y, m, b)
b = b - alpha * dJdb(x, y, m, b)
y_pred = F(x, m, b)
j = cost(y_pred, y)
errors.append(j)
# plot it
plt.figure(figsize=(16, 3))
plt.plot(range(n_iters), errors, linewidth=2)
plt.title("Cost by iteration")
plt.ylabel("Cost")
plt.xlabel("iterations")
# what is our final error rate
y_pred = F(x, m, b)
final_cost = cost(y_pred, y)
print("final parameters: m=%0.3f, b=%0.3f"%(m, b))
print("final cost = %0.3f" % final_cost)
Explanation: Our loss surface looks a bit like an elongated bowl. Because our goal is to find the parameters $m$ and $b$ which give us the lowest possible error, this translates to finding the point at the bottom of that bowl. By eye, it looks like it's roughly around $m=0, b=2$.
Gradient descent
Let's now define a better method for actually finding this point. The method we are going to introduce is called "gradient descent." Note, there is a much better way to do linear regression than to use gradient descent, but we will use gradient descent anyway, because later on, when we introduce neural networks, we shall see that gradient descent is the best way by which to find the optimal parameters for them. So we introduce gradient descent in the context of a simpler problem first.
For a more thorough introduction, gradient descent is discussed in more detail in this chapter of the book.
The basic idea behind gradient descent is as follows. Start with a random guess about the parameters, and then calculate the gradient of the loss function with respect to the parameters. Recall from the previous guide that the gradient of a function is the vector containing each of the partial derivatives of its variables, i.e.
$$
\nabla f(X) = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, ..., \frac{\partial f}{\partial x_n} \right]
$$
Our loss function $J$ has two parameters: the slope $m$ and y-intercept $b$. Thus its gradient is:
$$
\nabla J = \left[ \frac{\partial J}{\partial m}, \frac{\partial J}{\partial b} \right]
$$
The interpretation of the gradient is that it gives us the slope of the loss function in every dimension at any $m$ and $b$. Gradient descent evaluates the slope of the loss function at the current parameters and then takes a small step in the exact opposite direction (because the gradient points in the direction of steepest ascent, stepping against it moves downhill). This has the effect of moving $m$ and $b$ to a place where the error is a bit lower than it was before. Repeat this process many times until the loss stops descending (because we have reached the bottom) and you are finished. This is the basic idea.
How can we actually calculate the partial derivatives: $\frac{\partial J}{\partial m}$ and $\frac{\partial J}{\partial b}$? We must differentiate $J$ with respect to these two parameters.
Recall:
$$ J(m,b) = \frac{1}{2} \sum{(y_i - (mx_i + b))^2} $$
Let's start with $\frac{\partial J}{\partial m}$. We can derive its partial derivative with the following steps:
$$ \frac{\partial J}{\partial m} = \frac{\partial}{\partial m} \left[ \frac{1}{2} \sum{(y_i - (mx_i + b))^2} \right] $$
We can factor out the $\frac{1}{2}$ and apply the sum rule of derivatives.
$$ \frac{\partial J}{\partial m} = \frac{1}{2} \sum{ \frac{\partial}{\partial m} (y_i - (mx_i + b))^2} $$
Using chain rule, we bring the exponent down, and find that the inner derivative is just $-x_i$.
$$ \frac{\partial J}{\partial m} = \frac{1}{2} \sum{ -2 x_i \cdot (y_i - (mx_i + b))} $$
$$ \frac{\partial J}{\partial m} = -\sum{x_i \cdot (y_i - (mx_i + b))} $$
For $\frac{\partial J}{\partial b}$, the partial derivative is found:
$$ \frac{\partial J}{\partial b} = \frac{\partial}{\partial b} \left[ \frac{1}{2} \sum{(y_i - (mx_i + b))^2} \right] $$
Again factor out the $\frac{1}{2}$ and apply the sum rule of derivatives.
$$ \frac{\partial J}{\partial b} = \frac{1}{2} \sum{ \frac{\partial}{\partial b} (y_i - (mx_i + b))^2} $$
Using chain rule, we bring the exponent down, and find that the inner derivative is $-1$.
$$ \frac{\partial J}{\partial b} = \frac{1}{2} \sum{ -2 \cdot (y_i - (mx_i + b))} $$
$$ \frac{\partial J}{\partial b} = -\sum{(y_i - (mx_i + b))} $$
So to summarize, we have found:
$$ \frac{\partial J}{\partial m} = - \sum{x_i \cdot (y_i - (mx_i + b))} $$
$$ \frac{\partial J}{\partial b} = - \sum{(y_i - (mx_i + b))} $$
We can then define the following update rule, where we calculate the gradient and then adjust the parameters $m$ and $b$:
$$ m := m - \alpha \cdot \frac{\partial J}{\partial m} $$
$$ b := b - \alpha \cdot \frac{\partial J}{\partial b} $$
Where $\alpha$ is a hyperparameter called the "learning rate" that controls the size of the update step. In simple gradient descent, the learning rate must be chosen manually, but as we shall see later, there are more complex variants of gradient descent which automatically pick and adjust the learning rate during training. If we set $\alpha$ too high, we may overshoot the best trajectory, whereas if we set it too low, learning may take an unacceptably long time. Typical values of this are 0.01, 0.001, 0.0001 and so on.
Let's implement this in code:
End of explanation
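One way to gain confidence in the derivatives derived above is a finite-difference check: perturb each parameter slightly and compare the numerical slope of the cost with the analytic partial derivative. A minimal sketch, assuming the cost, F, dJdm, dJdb, x, y, m and b names from the cell above:
# sketch: numerical gradient check against the analytic derivatives
eps = 1e-6
num_dJdm = (cost(F(x, m + eps, b), y) - cost(F(x, m - eps, b), y)) / (2 * eps)
num_dJdb = (cost(F(x, m, b + eps), y) - cost(F(x, m, b - eps), y)) / (2 * eps)
print("dJ/dm analytic %0.6f vs numeric %0.6f" % (dJdm(x, y, m, b), num_dJdm))
print("dJ/db analytic %0.6f vs numeric %0.6f" % (dJdb(x, y, m, b), num_dJdb))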
min_x, max_x = min(x), max(x)
fig = plt.figure(figsize=(3,3))
plt.scatter(x, y)
plt.plot([min_x, max_x], [m * min_x + b, m * max_x + b], 'k-')
plt.title("line of best fit")
Explanation: We can plot the line of best fit:
End of explanation |
6,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution of 5.9.1, Bee Checklist
First of all, we import the two modules we'll need to read the csv file, and to use regular expressions
Step1: Then, we read the file, and store the columns Scientific Name and Taxon Author in two lists
Step2: How many species?
Step3: Pick one of the authors element to use for testing. Choose one that is quite complicated, such as the 38th element
Step4: Now we need to build a regular expression. After some twiddling, you should end up with something like this, which captures the authors in one group, and the year in another group
Step5: Test the expression
Step6: Now we write a function that uses the regular expression to extract an author list (useful when there are multiple authors), and the year
Step7: Let's see the output of this function
Step8: Finally, let's build two dictionaries
Step9: For example, these are all the authors
Step10: What is the name of the author with most entries in the database?
We use the following strategy
Step11: An the winner is
Step12: Which year of publication is most represented in the database?
We use the same strategy to find that the golden year of bee publication is | Python Code:
import csv
import re
Explanation: Solution of 5.9.1, Bee Checklist
First of all, we import the two modules we'll need to read the csv file, and to use regular expressions:
End of explanation
with open('../data/bee_list.txt') as f:
csvr = csv.DictReader(f, delimiter = '\t')
species = []
authors = []
for r in csvr:
species.append(r['Scientific Name'])
authors.append(r['Taxon Author'])
Explanation: Then, we read the file, and store the columns Scientific Name and Taxon Author in two lists:
End of explanation
len(species)
len(authors)
Explanation: How many species?
End of explanation
au = authors[37]
au
Explanation: Pick one of the authors element to use for testing. Choose one that is quite complicated, such as the 38th element:
End of explanation
my_reg = re.compile(r'\(?([\w\s,\.\-\&]*),\s(\d{4})\)?')
# Translation
# \(? -> open parenthesis (or not)
# ([\w\s,\.\-\&]*) -> the first group is the list of authors
# which can contain \w (word character)
# \s (space) \. (dot) \- (dash) \& (ampersand)
# ,\s -> followed by comma and space
# (\d{4}) -> the second group is the year, 4 digits
# \)? -> potentially, close parenthesis
Explanation: Now we need to build a regular expression. After some twiddling, you should end up with something like this, which captures the authors in one group, and the year in another group:
End of explanation
re.findall(my_reg,au)
Explanation: Test the expression
End of explanation
def extract_list_au_year(au):
tmp = re.match(my_reg, au)
authorlist = tmp.group(1)
year = tmp.group(2)
# split authors into a list using re.split
authorlist = re.split(', | \& ', authorlist)
# Translation: either separate using ', ' or ' & '
return [authorlist, year]
Explanation: Now we write a function that uses the regular expression to extract an author list (useful when there are multiple authors), and the year
End of explanation
extract_list_au_year(au)
Explanation: Let's see the output of this function:
End of explanation
dict_years = {}
dict_authors = {}
for au in authors:
tmp = extract_list_au_year(au)
for aunum in tmp[0]:
if aunum in dict_authors.keys():
dict_authors[aunum] = dict_authors[aunum] + 1
else:
dict_authors[aunum] = 1
if tmp[1] in dict_years.keys():
dict_years[tmp[1]] = dict_years[tmp[1]] + 1
else:
dict_years[tmp[1]] = 1
Explanation: Finally, let's build two dictionaries:
- one tracking the number of times each year is mentioned in the database;
- one traking the number of times each author is mentioned
End of explanation
dict_authors
Explanation: For example, these are all the authors:
End of explanation
max_value_author = max(dict_authors.values())
max_value_author
which_index = list(dict_authors.values()).index(max_value_author)
which_index
Explanation: What is the name of the author with most entries in the database?
We use the following strategy:
- we find the maximum value in the dictionary
- we use the function index to find to which entry is it associated
- we find the corresponding author
End of explanation
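An equivalent, more idiomatic way to get the same answer is to let max use the dictionary values as the comparison key — a one-line sketch assuming dict_authors from above:
# sketch: the key with the largest value, in one call
max(dict_authors, key=dict_authors.get)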
list(dict_authors.keys())[which_index]
Explanation: An the winner is:
End of explanation
max_value_year = max(dict_years.values())
which_index = list(dict_years.values()).index(max_value_year)
list(dict_years.keys())[which_index]
Explanation: Which year of publication is most represented in the database?
We use the same strategy to find that the golden year of bee publication is:
End of explanation |
6,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python 3
Python is a modern programming language that
* is open source
* is interpreted
* interpreters exist for most platforms
* is multi-paradigm (incl. object-oriented)
* comes with batteries included whenever possible
Versions
Version 2 of the Python language (2.7 is the current minor version) is what made Python popular. However it was far from perfect, and version 3 of Python fixes many of the most glaring design flaws in Python 2.
Because version 2 gained popularity rapidly, it has taken over 10 years for version 3 to gain a foothold. This is the first time CSC gives its introductory Python course using Python 3.
Three levels of Python
There are 3 levels of functionality you can use in Python
* the built-in parts
* the language itself that is used to write programs
* the standard library
* these cover many common tasks in programming in general, e.g.
* file system and operating system abstraction
* reading standardized file formats (zip, xml, csv, etc.)
* most common data communications protocols (HTTP and email protocols)
* more data types, programming libraries
* the Python ecosystem, mostly available via the Python Package Index PyPI
* tens of thousands of packages of varying quality
* libraries for
* numeric computation (e.g. NumPy)
* machine learning (e.g. scikit-learn)
* HTTP frameworks (e.g. Django)
* natural language analysis (e.g. nltk)
* data visualization
The core or built-in parts of Python is relatively small and we will cover that first.
The typical way to write python programs is to write it in script files that end in *.py and that can be run with the python command. We will get to that later but first we use this Jupyter Notebook to go over the basics of the language.
Syntax
First a few motivational words from The Zen of Python
Beautiful is better than ugly.
Simple is better than complex.
Readability counts.
The design of Python aims for simplicity.
First program
The first exercise in most programming tutorials is a Hello World -program.
You can edit the code in the cell below and run it by clicking on the run-button in the above toolbar or by pressing CTRL+Enter when you have the cell in focus (surrounded by a green box).
The text between the quotation marks "" is a string. print is a function and the parameters are inside regular brackets () in a C-kind of style.
Step1: These exercises are run in this notebook environment, but you could just as easily copy the text below to a file called hello.py and run it with the command
$ python hello.py
hello world!
Extra
Step2: Variables and data types
Variable is something that can change in the execution of a program. It is referenced by a name.
In Python variable names
* may contain letters, numbers or underscores
* start with a a letter or underscore (but not with a number!)
* are case sensitive
Underscores at the beginning or end of a variable name are part of an idiomatic coding style that gives hints to the reader of the code. We will get to that later.
Try them out below
Step3: Python is a dynamically typed, strongly typed language. It's OK not to understand the terms completely. They are simply mentioned because they carry very specific meaning to experienced programmers.
In practice this means that
Step4: Mutable and immutable data types
Some data types are mutable and some are immutable.
Mutable data types can be changed after they are created for example
Step5: Lists
Lists are created using [] brackets or the list() constructor, which accepts many types of other objects. There is no requirement for all the objects in the list to be of the same type. This is a consequence of the duck typing mentioned earlier.
Lists support multiple types of indexing.
Step6: Lists can be appended to using several types of syntax
Step7: Dictionaries
Dictionaries are also accessed by using the []-brackets. A dict is accessed by key.
The dict also contains a get() method that takes in a default value to return if the key is not present.
It is assigned to using the bracket notation. If a key exists, the value is overriden.
Step8: Tuples
A comma defines a tuple. for example
a,b
is a valid tuple. It's convention use parentheses to make the presence of a tuple more explicit,
(a, b)
but the parentheses are in no way required.
Python does automatic packing and unpacking of tuples, as is illustrated by the following example. | Python Code:
print("hello world!")
Explanation: Introduction to Python 3
Python is a modern programming language that
* is open source
* is interpreted
* interpreters exist for most platforms
* is multi-paradigm (incl. object-oriented)
* comes with batteries included whenever possible
Versions
Version 2 of the Python language (2.7 is the current minor version) is what made Python popular. However it was far from perfect, and version 3 of Python fixes many of the most glaring design flaws in Python 2.
Because version 2 gained popularity rapidly, it has taken over 10 years for version 3 to gain a foothold. This is the first time CSC gives its introductory Python course using Python 3.
Three levels of Python
There are 3 levels of functionality you can use in Python
* the built-in parts
* the language itself that is used to write programs
* the standard library
* these cover many common tasks in programming in general, e.g.
* file system and operating system abstraction
* reading standardized file formats (zip, xml, csv, etc.)
* most common data communications protocols (HTTP and email protocols)
* more data types, programming libraries
* the Python ecosystem, mostly available via the Python Package Index PyPI
* tens of thousands of packages of varying quality
* libraries for
* numeric computation (e.g. NumPy)
* machine learning (e.g. scikit-learn)
* HTTP frameworks (e.g. Django)
* natural language analysis (e.g. nltk)
* data visualization
The core or built-in parts of Python is relatively small and we will cover that first.
The typical way to write python programs is to write it in script files that end in *.py and that can be run with the python command. We will get to that later but first we use this Jupyter Notebook to go over the basics of the language.
Syntax
First a few motivational words from The Zen of Python
Beautiful is better than ugly.
Simple is better than complex.
Readability counts.
The design of Python aims for simplicity.
First program
The first exercise in most programming tutorials is a Hello World -program.
You can edit the code in the cell below and run it by clicking on the run-button in the above toolbar or by pressing CTRL+Enter when you have the cell in focus (surrounded by a green box).
The text between the quotation marks "" is a string. print is a function and the parameters are inside regular brackets () in a C-kind of style.
End of explanation
help(print)
Explanation: These exercises are run in this notebook environment, but you could just as easily copy the text below to a file called hello.py and run it with the command
$ python hello.py
hello world!
Extra: compare this with a hello world program in some other programming language that you know. Is it simpler or more complex? What kinds of design decisions have to have been made in order for the example to be this simple?
Getting help
The built-in function help() will show you interactive documentation about most Python objects when you're inside an interpreter.
If you want to know all the members of an object (more about objects and classes later) you can call the dir() function.
End of explanation
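For example, dir applied to the str type lists every method that string objects support; the sketch below filters out the double-underscore names for readability.
# sketch: public methods available on strings
[name for name in dir(str) if not name.startswith("_")]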
hello_example_1 = "hello world!" # comments are marked with the #-sign
hello_example_1 = 5
hello_example_1
# in a Jupyter notebook if the cell ends
# with a single variable, the system will print the
# value for you
Explanation: Variables and data types
Variable is something that can change in the execution of a program. It is referenced by a name.
In Python variable names
* may contain letters, numbers or underscores
* start with a letter or underscore (but not with a number!)
* are case sensitive
Underscores at the beginning or end of a variable name are part of an idiomatic coding style that gives hints to the reader. We will get to that later.
Try them out below:
End of explanation
value = 5
value2 = value + 1
my_string = "hello "
my_string = my_string + str(value2) # you can attempt the same without converting to string
print(my_string)
Explanation: Python is a dynamically typed, strongly typed language. It's OK not to understand the terms completely. They are simply mentioned because they carry very specific meaning to experienced programmers.
In practice this means that:
variables (and their types) don't need to be declared
trying to use a variable of an incorrect type will result in errors
Data types
Python has a small set of basic data types, which are grouped into the categories introduced below. All variables in Python have a type, and you can use the built-in function type() to check the type of a variable.
boolean: a data type that can be either True or False (note the capitalization of the first letter)
Numeric types, that represent numbers
int: integers, not limited in length
float: floating point numbers, like doubles in C, with similar caveats
complex: complex numbers, represented by j (not covered in this tutorial)
Sequences:
str: String, a sequence of Unicode characters in the range U+0000 - U+10FFFF
bytes: a sequence of integers in the range 0-255, i.e. raw data
byte array: like bytes, but mutable
list: a mutable ordered sequence of variables
tuple: an immutable ordered sequence of variables
Sets
set: an unordered collection of unique objects
frozen set: like set, but immutable
Mappings
dict: a dictionary, also called a hashmap
Python is dynamically typed, which means that the data type does not need to be declared; it is determined at run time.
Python is strongly typed, which means that it typically does not attempt to coerce a data type to another. For instance it is not possible to concatenate a string and a number, which is often valid in many languages. The number needs to be converted into a string explicitly.
The typing in Python is called duck typing. It is sufficient to implement the functions required and not necessary to explicitly implement an interface like in e.g. Java or C#.
Each of the abovementioned types is also a built-in function that returns objects of said type.
Sequences, sets and mappings are often iterated over. More on this later.
End of explanation
# mutable examples
dict_ = {"key": "value"}
dict_["key2"] = "value2"
print(dict_)
list_ = ["egg", "sausage", "bacon"]
list2 = list_
list_.append("spam")
print(list_)
# variables are just pointers to objects in memory
# for mutable types all references point to the same object that has changed
print(list2)
# immutable examples
str_ = "hello world!"
print(str_.replace("l", ""))
print(str_)
tuple_ = (4, 5, 6)
print(tuple_ + (6,7))
print(tuple_)
Explanation: Mutable and immutable data types
Some data types are mutable and some are immutable.
Mutable data types can be changed after they are created for example:
* a list can be appended to
* a byte in a byte array can be altered
* a set can be added to
* a dict can be added to
Immutable data types cannot be changed after they are created. Any operations on the data types will return a new instance of the same type, that is different. Typically this new value then needs to be assigned to a variable.
| Immutable | Mutable |
|----------------------------------|------------|
| numeric types (int, float, etc.) | |
| tuple | list |
| str | byte array |
| frozen set | set |
| | dict |
Only immutable data types can be the keys in a dict.
End of explanation
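A short sketch of that last point: an immutable tuple is accepted as a dictionary key, while a mutable list is rejected.
coordinates = {(60.17, 24.94): "Helsinki"}      # tuple key: fine
print(coordinates[(60.17, 24.94)])
try:
    coordinates[[60.17, 24.94]] = "Helsinki"    # list key: raises TypeError
except TypeError as err:
    print("TypeError:", err)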
my_list = [1, 2, 3, 4]
print(my_list[0]) # indexing starts from 0
print(my_list[1:3]) # so-called slice syntax selects a part of a list
print(my_list[-1]) # negative indices are also permitted, -1 is the last index
print(my_list[-3:-1]) # also in slicing
Explanation: Lists
Lists are created using [] brackets or the list() constructor, which accepts many types of other objects. There is no requirement for all the objects in the list to be of the same type. This is a consequence of the duck typing mentioned earlier.
Lists support multiple types of indexing.
End of explanation
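Slices also accept an optional step, which gives a compact way to take every other element or to reverse a list — a small sketch:
my_list = [1, 2, 3, 4, 5, 6]
print(my_list[::2])    # every second element
print(my_list[1::2])   # every second element, starting from index 1
print(my_list[::-1])   # a reversed copy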
my_list = [1, 2]
my_list.append(3) # modifies in place, takes a single item
print(my_list)
my_list.extend([4, 5]) # takes another list
print(my_list)
another_list = my_list + [6, 7] # makes a copy
print(another_list)
print(my_list)
Explanation: Lists can be appended to using several types of syntax
End of explanation
my_dict = {1: 2, "key": "value"}
print(my_dict[1])
print(my_dict["key"])
print(my_dict.get("im_not_there", "default"))
my_dict["key2"] = "i was just inserted"
print(my_dict["key2"])
Explanation: Dictionaries
Dictionaries are also accessed by using the []-brackets. A dict is accessed by key.
The dict also contains a get() method that takes in a default value to return if the key is not present.
It is assigned to using the bracket notation. If a key exists, the value is overriden.
End of explanation
a, b = 1, 2
a, b = b, a
# Check what the values of a and b are now
Explanation: Tuples
A comma defines a tuple. for example
a,b
is a valid tuple. It's convention use parentheses to make the presence of a tuple more explicit,
(a, b)
but the parentheses are in no way required.
Python does automatic packing and unpacking of tuples, as is illustrated by the following example.
End of explanation |
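The same packing and unpacking is what lets a function appear to return several values at once — it really returns a single tuple that the caller unpacks. A small sketch:
def min_max(values):
    return min(values), max(values)   # packs a 2-tuple

lo, hi = min_max([3, 1, 4, 1, 5, 9])  # unpacks it again
print(lo, hi)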
6,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Crash Course Exercises - Solutions
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
What is 7 to the power of 4?
Step1: Split this string
Step2: Given the variables
Step3: Given this nested list, use indexing to grab the word "hello"
Step4: Given this nest dictionary grab the word "hello". Be prepared, this will be annoying/tricky
Step5: What is the main difference between a tuple and a list?
Step6: Create a function that grabs the email website domain from a string in the form
Step7: Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
Step8: Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
Step9: Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example
Step10: Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results | Python Code:
7 ** 4
Explanation: Python Crash Course Exercises - Solutions
This is an optional exercise to test your understanding of Python Basics. If you find this extremely challenging, then you probably are not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
What is 7 to the power of 4?
End of explanation
s = 'Hi there Sam!'
s.split()
Explanation: Split this string:
s = "Hi there Sam!"
into a list.
End of explanation
planet = "Earth"
diameter = 12742
print("The diameter of {} is {} kilometers.".format(planet,diameter))
Explanation: Given the variables:
planet = "Earth"
diameter = 12742
Use .format() to print the following string:
The diameter of Earth is 12742 kilometers.
End of explanation
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2][0]
Explanation: Given this nested list, use indexing to grab the word "hello"
End of explanation
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
d['k1'][3]['tricky'][3]['target'][3]
Explanation: Given this nest dictionary grab the word "hello". Be prepared, this will be annoying/tricky
End of explanation
# A tuple is immutable (it cannot be changed after creation), whereas a list is mutable.
Explanation: What is the main difference between a tuple and a list?
End of explanation
def domainGet(email):
return email.split('@')[-1]
domainGet('[email protected]')
Explanation: Create a function that grabs the email website domain from a string in the form:
[email protected]
So for example, passing "[email protected]" would return: domain.com
End of explanation
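An equivalent approach (shown here only as a sketch, not the required solution) uses str.partition, which splits on the first '@' only:
def domain_get_alt(email):
    # everything after the first '@'
    return email.partition('@')[2]

domain_get_alt('[email protected]')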
def findDog(st):
return 'dog' in st.lower().split()
findDog('Is there a dog here?')
Explanation: Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
End of explanation
def countDog(st):
count = 0
for word in st.lower().split():
if word == 'dog':
count += 1
return count
countDog('This dog runs faster than the other dog dude!')
Explanation: Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
End of explanation
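Because the lower-cased, split words form a list, the same count can also be expressed with the built-in list.count method — a sketch with the same ignore-edge-cases caveat:
def count_dog_alt(st):
    return st.lower().split().count('dog')

count_dog_alt('This dog runs faster than the other dog dude!')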
seq = ['soup','dog','salad','cat','great']
list(filter(lambda word: word[0]=='s',seq))
Explanation: Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:
seq = ['soup','dog','salad','cat','great']
should be filtered down to:
['soup','salad']
End of explanation
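The same filtering can also be written as a list comprehension, which many find more readable than filter plus a lambda:
# sketch: list-comprehension equivalent of the filter/lambda version (seq assumed from above)
[word for word in seq if word.startswith('s')]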
def caught_speeding(speed, is_birthday):
if is_birthday:
speeding = speed - 5
else:
speeding = speed
if speeding > 80:
return 'Big Ticket'
elif speeding > 60:
return 'Small Ticket'
else:
return 'No Ticket'
caught_speeding(81,True)
caught_speeding(81,False)
Explanation: Final Problem
You are driving a little too fast, and a police officer stops you. Write a function
to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket".
If your speed is 60 or less, the result is "No Ticket". If speed is between 61
and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all
cases.
End of explanation |
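A quick sketch that exercises the boundary speeds from the problem statement, with and without the birthday allowance, to confirm where the ticket categories switch over:
for speed in (60, 61, 65, 66, 80, 81, 85, 86):
    print(speed, caught_speeding(speed, False), caught_speeding(speed, True))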