Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k) |
---|---|---|
2,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to py-Goldsberry
py-Goldsberry is a Python package that makes it easy to interface with http://stats.nba.com and retrieve the data in a more analyzable format.
Step1: py-goldsberry is designed to work in conjunction with Pandas. Each function within the package returns data in a format that is easily converted to a Pandas DataFrame.
To get started, let's get a list of all of the players who were on an NBA roster during the 2014-15 season
PlayerIDs
Step2: If you want to get players who were on an NBA roster during the 1990-91 season, you can pass 1990 to goldsberry.PlayerList()
Step3: You can pass any year to the PlayerList() function to get the roster of players from that season. Alternatively, you may want a list of any player that has been on an NBA roster at any point in the history of the league. You can retrieve this list by passing alltime=True to the PlayerList() function.
Step4: I just sampled 10 random players from the alltime list to illustrate that it contains a combination of historic and current NBA players.
The PlayerList() function is critical to the usage of other parts of the package. If you are interested in player level data, I highly recommend creating a list of players that you are interested in by using this function. You can refer to this list later.
GameIDs
One of the major modules of py-goldsberry is the game module. Within that module is a set of classes that extract information at the game level. There are two key sub-types of data in the module: box score and play-by-play. To access this data, you will need a specific GameID.
These GameIDs are not super straightforward to find through the stats.nba.com website. py-goldsberry has a built-in function that links to a table I have created containing all of the GameIDs from the first game in NBA history through the end of the 2014-15 season.
To access this table of GameIDs, use the GameIDs() function.
Step5: This table is fairly raw at this point. I'm still in the process of augmenting the data and making it more easily searchable. For now, it may make sense to filter by a specific season or date. In the GAMECODE column, the code breaks down into the date followed by the initials of the two teams involved.
As with PlayerIDs, this table will likely be used fairly often. It is best to pull the list of games into an object at the very beginning of the analysis for easy access when filtering.
TeamIDs
A third module, team, requires the use of unique teamIDs. I'm still in the process of building a simple way to arrive at a searchable table, but you can get a list of IDs (not matched to team names) by filtering the gameids table we just created.
Step6: Make sure you pass the year you wish to filter by as a string, or change the datatype of the SEASON column to numeric before you filter.
While this list is comprehensive in terms of unique teamIDs for the 2014-15 season, it is not matched with the team names. It is not as useful as it could be without additional information. We can use one of the classes within the team module to get some additional information and, with a few lines of code, have a more descriptive database of teamIDs.
We'll start by getting information for a single team. Then we'll put together a loop that creates a searchable/sortable dataframe.
Step7: You can see above that calling the team_info() class within the team module returns an object, which we saved as teaminfo. To get the actual data, we call the info() method, which is part of the teaminfo object that we created. This is the standard pattern for almost all of py-goldsberry. The package is built this way to minimize the number of calls that need to be made to the NBA servers while returning a maximum amount of data.
In general, all calls are classes. Each class has methods associated with the variety of data that is retrieved when a unique call is made to the NBA website. When you save each class as an object, you immediately make a call to the website and the data which is retrieved is stored within the object and accessible through the use of object specific methods. If that doesn't make sense, don't worry. Just keep following the tutorials and you'll get the hang of how to use it without necessarily needing to understand the underlying mechanics.
After a brief digression, back to creating a table of teamIDs with rich information. We can create a nice table by implementing a simple loop that gathers information on each team and merges it into a single dataframe. | Python Code:
import goldsberry
import pandas as pd
goldsberry.__version__
Explanation: An Introduction to py-Goldsberry
py-Goldsberry is a Python package that makes it easy to interface with http://stats.nba.com and retrieve the data in a more analyzable format.
This is the first in a series of tutorials that walk through the different modules of the package and how to use each to get different types of data.
If you've made it this far, you're probably less interested in reading about the package and more interested in actually using it.
Installation
If you don't have the package installed, use pip install to get the latest version:
pip install py-goldsberry
pip install --upgrade py-goldsberry
When you have py-goldsberry installed, you can load the package and check the version number
End of explanation
players2014 = goldsberry.PlayerList(2014)
players2014 = pd.DataFrame(players2014)
players2014.head()
Explanation: py-goldsberry is designed to work in conjunction with Pandas. Each function within the package returns data in a format that is easily converted to a Pandas DataFrame.
To get started, let's get a list of all of the players who were on an NBA roster during the 2014-15 season
PlayerIDs
End of explanation
players1990 = goldsberry.PlayerList(1990)
players1990 = pd.DataFrame(players1990)
players1990.head()
Explanation: If you want to get players who were on an NBA roster during the 1990-91 season, you can pass 1990 to goldsberry.PlayerList()
End of explanation
players_alltime = goldsberry.PlayerList(AllTime=True)
players_alltime = pd.DataFrame(players_alltime)
players_alltime.sample(10)
Explanation: You can pass any year to the PlayerList() function to get the roster of players from that season. Alternatively, you may want a list of any player that has been on an NBA roster at any point in the history of the league. You can retrieve this list by passing alltime=True to the PlayerList() function.
End of explanation
gameids = goldsberry.GameIDs()
gameids = pd.DataFrame(gameids)
gameids.sample(10)
Explanation: I just sampled 10 random players from the alltime list to illustrate that it contains a combination of historic and current NBA players.
The PlayerList() function is critical to the usage of other parts of the package. If you are interested in player level data, I highly recommend creating a list of players that you are interested in by using this function. You can refer to this list later.
GameIDs
One of the major modules of py-goldsberry is the game module. Within that module is a set of classes that extract information at the game level. There are two key sub-types of data in the module: box score and play-by-play. To access this data, you will need a specific GameID.
These GameIDs are not super straightforward to find through the stats.nba.com website. py-goldsberry has a built-in function that links to a table I have created containing all of the GameIDs from the first game in NBA history through the end of the 2014-15 season.
To access this table of GameIDs, use the GameIDs() function.
End of explanation
filter_season = '2014'
teamids = gameids.loc[gameids['SEASON'] == filter_season, 'HOME_TEAM_ID'].drop_duplicates()  # .loc instead of the deprecated .ix
teamids.head()
Explanation: This table is fairly raw at this point. I'm still in the process of augmenting the data and making it more easily searchable. For now, it may make sense to filter by a specific season or date. In the GAMECODE column, the code breaks down into the date followed by the initials of the two teams involved.
As with PlayerIDs, this table will likely be used fairly often. It is best to pull the list of games into an object at the very beginning of the analysis for easy access when filtering.
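As an aside, if you want the date and matchup as their own columns, a couple of pandas lines can split GAMECODE apart. This is only a sketch: it assumes GAMECODE values look like '20141028/HOULAL' (date, then the two teams' initials), which you should confirm against the table itself.
gameids['GAME_DATE'] = gameids['GAMECODE'].str.split('/').str[0]   # date portion, e.g. '20141028'
gameids['MATCHUP'] = gameids['GAMECODE'].str.split('/').str[1]     # team initials, e.g. 'HOULAL'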
TeamIDs
A third module, team, requires the use of unique teamIDs. I'm still in the process of building a simple way to arrive at a searchable table, but you can get a list of IDs (not matched to team names) by filtering the gameids table we just created.
End of explanation
teaminfo = goldsberry.team.team_info(teamids.iloc[0])
pd.DataFrame(teaminfo.info())
Explanation: Make sure you pass the year you wish to filter by as a string, or change the datatype of the SEASON column to numeric before you filter.
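For reference, here is a minimal sketch of that numeric alternative; it assumes the SEASON values are plain year strings that convert cleanly to integers.
gameids['SEASON'] = gameids['SEASON'].astype(int)                                    # convert the column once...
teamids = gameids.loc[gameids['SEASON'] == 2014, 'HOME_TEAM_ID'].drop_duplicates()   # ...then filter with an int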
While this list is comprehensive in terms of unique teamIDs for the 2014-15 season, it is not matched with the team names. It is not as useful as it could be without additional information. We can use one of the classes within the team module to get some additional information and, with a few lines of code, have a more descriptive database of teamIDs.
We'll start by getting information for a single team. Then we'll put together a loop that creates a searchable/sortable dataframe.
End of explanation
teamids_full = pd.DataFrame() # Create empty Data Frame
for i in teamids.values:
team = goldsberry.team.team_info(i)
teamids_full = pd.concat([teamids_full, pd.DataFrame(team.info())])
teamids_full
Explanation: You can see above that calling the team_info() class within the team module returns an object, which we saved as teaminfo. To get the actual data, we call the info() method, which is part of the teaminfo object that we created. This is the standard pattern for almost all of py-goldsberry. The package is built this way to minimize the number of calls that need to be made to the NBA servers while returning a maximum amount of data.
In general, all calls are classes. Each class has methods associated with the variety of data that is retrieved when a unique call is made to the NBA website. When you save each class as an object, you immediately make a call to the website and the data which is retrieved is stored within the object and accessible through the use of object specific methods. If that doesn't make sense, don't worry. Just keep following the tutorials and you'll get the hang of how to use it without necessarily needing to understand the underlying mechanics.
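As a small sketch of that pattern (reusing the objects from the cells above): instantiating the class is what triggers the single request to the NBA servers, and the methods only return tables that were already retrieved.
teaminfo = goldsberry.team.team_info(teamids.iloc[0])   # the call to the website happens here
team_table = pd.DataFrame(teaminfo.info())              # no new request; just the cached table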
After a brief digression, back to creating a table of teamIDs with rich information. We can create a nice table by implementing a simple loop that gathers information on each team and merges it into a single dataframe.
End of explanation |
2,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading data from Cloudant or CouchDB
You can load data from CouchDB or a managed Cloudant instance using the Cloudant Spark connector.
Prerequisites
Collect your database connection information
Step1: Configure database connectivity
Customize this cell with your Cloudant/CouchDB connection information
Step2: Load documents from the database
Load the documents into an Apache Spark DataFrame.
Step3: Explore the loaded data using PixieDust
Select the DataFrame view to inspect the metadata and explore the data by choosing a chart type and chart options. | Python Code:
import pixiedust
pixiedust.enableJobMonitor()
Explanation: Loading data from Cloudant or CouchDB
You can load data from CouchDB or a managed Cloudant instance using the Cloudant Spark connector.
Prerequisites
Collect your database connection information: the database host, user name, password and source database.
<div class="alert alert-block alert-info">If your Cloudant instance was provisioned in Bluemix you can find the connectivity information in the _Service Credentials_ tab.
</div>
Import PixieDust and enable the Apache Spark Job monitor
End of explanation
# @hidden_cell
# Enter your Cloudant host name
host = '...'
# Enter your Cloudant user name
username = '...'
# Enter your Cloudant password
password = '...'
# Enter your source database name
database = '...'
Explanation: Configure database connectivity
Customize this cell with your Cloudant/CouchDB connection information
End of explanation
# no changes are required to this cell
# obtain the Spark SQL context (sc, the SparkContext, is provided by the notebook runtime)
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
# load data
cloudant_data = sqlContext.read.format("com.cloudant.spark").\
option("cloudant.host", host).\
option("cloudant.username", username).\
option("cloudant.password", password).\
load(database)
Explanation: Load documents from the database
Load the documents into an Apache Spark DataFrame.
End of explanation
display(cloudant_data)
Explanation: Explore the loaded data using PixieDust
Select the DataFrame view to inspect the metadata and explore the data by choosing a chart type and chart options.
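If you just want a quick sanity check outside of PixieDust, a couple of plain Spark DataFrame calls on the cloudant_data DataFrame loaded above will do (a minimal sketch):
cloudant_data.printSchema()    # inspect the fields inferred from the documents
print(cloudant_data.count())   # number of documents loaded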
End of explanation |
2,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.
In this assignment you will
Step1: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
Step2: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: We will be using the same 4 categorical features as in the previous assignment
Step4: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed = 1 so everyone gets the same results.
Step5: Note
Step6: The feature columns now look like this
Step7: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
Step8: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture
Step9: Quiz question
Step10: Quiz question
Step11: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
Step12: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
Step13: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assigment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2
Step14: Here is a function to count the nodes in your tree
Step15: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step16: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning
Step17: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
Step18: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
Step19: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
Step20: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step21: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
Step22: Quiz question
Step23: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
Step24: Now, evaluate the validation error using my_decision_tree_old.
Step25: Quiz question
Step26: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data
Step27: Now evaluate the classification error on the validation data.
Step28: Quiz Question
Step29: Compute the number of nodes in model_1, model_2, and model_3.
Step30: Quiz question
Step31: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
Step32: Using the count_leaves function, compute the number of leaves in each of each models in (model_4, model_5, and model_6).
Step33: Quiz Question
Step34: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
Step35: Using the count_leaves function, compute the number of leaves in each of each models (model_7, model_8, and model_9). | Python Code:
import graphlab
Explanation: Decision Trees in Practice
In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment. You will have to use your solutions from this previous assignment and extend them.
In this assignment you will:
Implement binary decision trees with different early stopping methods.
Compare models with different stopping parameters.
Visualize the concept of overfitting in decision trees.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
loans = graphlab.SFrame('lending-club-data.gl/')
Explanation: Load LendingClub Dataset
This assignment will use the LendingClub dataset used in the previous two assignments.
End of explanation
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
Explanation: As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
Explanation: We will be using the same 4 categorical features as in the previous assignment:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
In the dataset, each of these features is a categorical feature. Since we are building a binary decision tree, we will have to convert this to binary data in a subsequent section using 1-hot encoding.
End of explanation
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed = 1 so everyone gets the same results.
End of explanation
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
Since we are implementing binary decision trees, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion:
For instance, the home_ownership feature represents the home ownership status of the loanee, which is either own, mortgage or rent. For example, if a data point has the feature
{'home_ownership': 'RENT'}
we want to turn this into three features:
{
'home_ownership = OWN' : 0,
'home_ownership = MORTGAGE' : 0,
'home_ownership = RENT' : 1
}
Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
End of explanation
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
Explanation: The feature columns now look like this:
End of explanation
train_data, validation_set = loans_data.random_split(.8, seed=1)
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=1 so that everyone gets the same result.
End of explanation
def reached_minimum_node_size(data, min_node_size):
# Return True if the number of data points is less than or equal to the minimum node size.
## YOUR CODE HERE
return len(data) <= min_node_size
Explanation: Early stopping methods for decision trees
In this section, we will extend the binary tree implementation from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture:
Reached a maximum depth. (set by parameter max_depth).
Reached a minimum node size. (set by parameter min_node_size).
Don't split if the gain in error reduction is too small. (set by parameter min_error_reduction).
For the rest of this assignment, we will refer to these three as early stopping conditions 1, 2, and 3.
Early stopping condition 1: Maximum depth
Recall that we already implemented the maximum depth stopping condition in the previous assignment. In this assignment, we will experiment with this condition a bit more and also write code to implement the 2nd and 3rd early stopping conditions.
We will be reusing code from the previous assignment and then building upon this. We will alert you when you reach a function that was part of the previous assignment so that you can simply copy and past your previous code.
Early stopping condition 2: Minimum node size
The function reached_minimum_node_size takes 2 arguments:
The data (from a node)
The minimum number of data points that a node is allowed to split on, min_node_size.
This function simply calculates whether the number of data points at a given node is less than or equal to the specified minimum node size. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
def error_reduction(error_before_split, error_after_split):
# Return the error before the split minus the error after the split.
## YOUR CODE HERE
return error_before_split - error_after_split
Explanation: Quiz question: Given an intermediate node with 6 safe loans and 3 risky loans, if the min_node_size parameter is 10, what should the tree learning algorithm do next?
Early stopping condition 3: Minimum gain in error reduction
The function error_reduction takes 2 arguments:
The error before a split, error_before_split.
The error after a split, error_after_split.
This function computes the gain in error reduction, i.e., the difference between the error before the split and that after the split. This function will be used to detect this early stopping condition in the decision_tree_create function.
Fill in the parts of the function below where you find ## YOUR CODE HERE. There is one instance in the function below.
End of explanation
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
## YOUR CODE HERE
num_positive = sum([x == 1 for x in labels_in_node])
# Count the number of -1's (risky loans)
## YOUR CODE HERE
num_negative = sum([x == -1 for x in labels_in_node])
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
return min(num_positive, num_negative)
Explanation: Quiz question: Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the minimum gain in error reduction parameter is set to 0.2, what should the tree learning algorithm do next?
Grabbing binary decision tree helper functions from past assignment
Recall from the previous assignment that we wrote a function intermediate_node_num_mistakes that calculates the number of misclassified examples when predicting the majority class. This is used to help determine which feature is best to split on at a given node of the tree.
Please copy and paste your code for intermediate_node_num_mistakes here.
End of explanation
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should intialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = float(left_mistakes + right_mistakes) / len(data)
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_feature, best_error = feature, error
return best_feature # Return the best feature we found
Explanation: We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider.
Please copy and paste your best_splitting_feature code here.
End of explanation
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
Explanation: Finally, recall the function create_leaf from the previous assignment, which creates a leaf node given a set of target values.
Please copy and paste your create_leaf code here.
End of explanation
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
# If the number of data points is less than or equal to the minimum size, return a leaf.
if reached_minimum_node_size(data, min_node_size): ## YOUR CODE HERE
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values) ## YOUR CODE HERE
# Find the best splitting feature
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
# Early stopping condition 3: Minimum error reduction
# Calculate the error before splitting (number of misclassified examples
# divided by the total number of examples)
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
# Calculate the error after splitting (number of misclassified examples
# in both groups divided by the total number of examples)
left_mistakes = intermediate_node_num_mistakes(left_split[target]) ## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target]) ## YOUR CODE HERE
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
# If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
if error_reduction(error_before_split, error_after_split) <= min_error_reduction: ## YOUR CODE HERE
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values) ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: Incorporating new early stopping conditions in binary decision tree implementation
Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assigment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.
Implementing early stopping condition 2: minimum node size:
Step 1: Use the function reached_minimum_node_size that you implemented earlier to write an if condition to detect whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the min_node_size argument.
Step 2: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Implementing early stopping condition 3: minimum error reduction:
Note: This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction.
Step 1: Calculate the classification error before splitting. Recall that classification error is defined as:
$$
\text{classification error} = \frac{\text{# mistakes}}{\text{# total examples}}
$$
* Step 2: Calculate the classification error after splitting. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples.
* Step 3: Use the function error_reduction to that you implemented earlier to write an if condition to detect whether the reduction in error is less than the constant provided (min_error_reduction). Don't forget to use that argument.
* Step 4: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
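As a quick numeric check of the classification error formula above, using the intermediate_node_num_mistakes helper defined earlier on a made-up node with 6 safe and 3 risky loans:
example_labels = [+1] * 6 + [-1] * 3                        # hypothetical node: 6 safe, 3 risky
mistakes = intermediate_node_num_mistakes(example_labels)   # majority class is +1, so 3 mistakes
print mistakes / float(len(example_labels))                 # 3 / 9 = 0.333...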
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a function to count the nodes in your tree:
End of explanation
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_decision_tree)
print 'Number of nodes that should be there : 5'
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
Explanation: Build a tree!
Now that your code is working, we will train a tree model on the train_data with
* max_depth = 6
* min_node_size = 100,
* min_error_reduction = 0.0
Warning: This code block may take a minute to learn.
End of explanation
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
Explanation: Let's now train a tree model ignoring early stopping conditions 2 and 3 so that we get the same tree as in the previous assignment. To ignore these conditions, we set min_node_size=0 and min_error_reduction=-1 (a negative value).
End of explanation
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
### YOUR CODE HERE
Explanation: Making predictions
Recall that in the previous assignment you implemented a function classify to classify a new point x using a given tree.
Please copy and paste your classify code here.
End of explanation
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
Explanation: Now, let's consider the first example of the validation set and see what the my_decision_tree_new model predicts for this data point.
End of explanation
classify(my_decision_tree_new, validation_set[0], annotate = True)
Explanation: Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class:
End of explanation
classify(my_decision_tree_old, validation_set[0], annotate = True)
Explanation: Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as my_decision_tree_old.
End of explanation
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
return (prediction != data['safe_loans']).sum() / float(len(data))
Explanation: Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for validation_set[0] shorter, longer, or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for any point always shorter, always longer, always the same, shorter or the same, or longer or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3?
Quiz question: For a tree trained on any dataset using max_depth = 6, min_node_size = 100, min_error_reduction=0.0, what is the maximum number of splits encountered while making a single prediction?
Evaluating the model
Now let us evaluate the model that we have trained. You implemented this evautation in the function evaluate_classification_error from the previous assignment.
Please copy and paste your evaluate_classification_error code here.
End of explanation
evaluate_classification_error(my_decision_tree_new, validation_set)
Explanation: Now, let's use this function to evaluate the classification error of my_decision_tree_new on the validation_set.
End of explanation
evaluate_classification_error(my_decision_tree_old, validation_set)
Explanation: Now, evaluate the validation error using my_decision_tree_old.
End of explanation
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2, min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14, min_node_size = 0, min_error_reduction=-1)
Explanation: Quiz question: Is the validation error of the new decision tree (using early stopping conditions 2 and 3) lower than, higher than, or the same as that of the old decision tree from the previous assignment?
Exploring the effect of max_depth
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
model_1: max_depth = 2 (too small)
model_2: max_depth = 6 (just right)
model_3: max_depth = 14 (may be too large)
For each of these three, we set min_node_size = 0 and min_error_reduction = -1.
Note: Each tree can take up to a few minutes to train. In particular, model_3 will probably take the longest to train.
End of explanation
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
Explanation: Evaluating the models
Let us evaluate the models on the train and validation data. Let us start by evaluating the classification error on the training data:
End of explanation
print "Validation data, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "Validation data, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "Validation data, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
Explanation: Now evaluate the classification error on the validation data.
End of explanation
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
Explanation: Quiz Question: Which tree has the smallest error on the validation data?
Quiz Question: Does the tree with the smallest error in the training data also have the smallest error in the validation data?
Quiz Question: Is it always true that the tree with the lowest classification error on the training set will result in the lowest classification error in the validation set?
Measuring the complexity of the tree
Recall in the lecture that we talked about deeper trees being more complex. We will measure the complexity of the tree as
complexity(T) = number of leaves in the tree T
Here, we provide a function count_leaves that counts the number of leaves in a tree. Using this implementation, compute the number of nodes in model_1, model_2, and model_3.
End of explanation
print count_leaves(model_1)
print count_leaves(model_2)
print count_leaves(model_3)
Explanation: Compute the number of nodes in model_1, model_2, and model_3.
End of explanation
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=5)
Explanation: Quiz question: Which tree has the largest complexity?
Quiz question: Is it always true that the most complex tree will result in the lowest classification error in the validation_set?
Exploring the effect of min_error
We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (negative, just right, and too positive).
Train three models with these parameters:
1. model_4: min_error_reduction = -1 (ignoring this early stopping condition)
2. model_5: min_error_reduction = 0 (just right)
3. model_6: min_error_reduction = 5 (too positive)
For each of these three, we set max_depth = 6, and min_node_size = 0.
Note: Each tree can take up to 30 seconds to train.
End of explanation
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
Explanation: Calculate the accuracy of each model (model_4, model_5, or model_6) on the validation set.
End of explanation
print count_leaves(model_4)
print count_leaves(model_5)
print count_leaves(model_6)
Explanation: Using the count_leaves function, compute the number of leaves in each of each models in (model_4, model_5, and model_6).
End of explanation
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 2000, min_error_reduction=-1)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 50000, min_error_reduction=-1)
Explanation: Quiz Question: Using the complexity definition above, which model (model_4, model_5, or model_6) has the largest complexity?
Did this match your expectation?
Quiz Question: model_4 and model_5 have similar classification error on the validation set, but model_5 has lower complexity. Should you pick model_5 over model_4?
Exploring the effect of min_node_size
We will compare three models trained with different values of the stopping criterion. Again, we intentionally picked models at the extreme ends (too small, just right, and too large).
Train three models with these parameters:
1. model_7: min_node_size = 0 (too small)
2. model_8: min_node_size = 2000 (just right)
3. model_9: min_node_size = 50000 (too large)
For each of these three, we set max_depth = 6, and min_error_reduction = -1.
Note: Each tree can take up to 30 seconds to train.
End of explanation
print "Validation data, classification error (model 7):", evaluate_classification_error(model_7, validation_set)
print "Validation data, classification error (model 8):", evaluate_classification_error(model_8, validation_set)
print "Validation data, classification error (model 9):", evaluate_classification_error(model_9, validation_set)
Explanation: Now, let us evaluate the models (model_7, model_8, or model_9) on the validation_set.
End of explanation
print count_leaves(model_7)
print count_leaves(model_8)
print count_leaves(model_9)
Explanation: Using the count_leaves function, compute the number of leaves in each of each models (model_7, model_8, and model_9).
End of explanation |
2,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Neural Network for Image Classification
Step1: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform a better!
Problem Statement
Step2: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
Step3: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width
Step5: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models
Step6: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (โฌ) on the upper bar of the notebook to stop the cell and try to find your error.
Step7: Expected Output
Step8: Expected Output
Step10: Expected Output
Step11: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (โฌ) on the upper bar of the notebook to stop the cell and try to find your error.
Step12: Expected Output
Step13: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
1.0
</td>
</tr>
</table>
Step14: Expected Output
Step15: A few type of images the model tends to do poorly on include | Python Code:
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
You will use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
After this assignment you will be able to:
- Build and apply a deep neural network to supervised learning.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- PIL and scipy are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
End of explanation
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
Explanation: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform better!
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat and non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
End of explanation
# Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
Explanation: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
End of explanation
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
Explanation: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width:450px;height:300px;">
<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
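A quick way to convince yourself the flattening worked is to check the shapes directly; this small sketch reuses the variables defined above and assumes the standard 64x64x3 images from this dataset.
assert train_x.shape == (num_px * num_px * 3, m_train)   # each column is one flattened image
assert test_x.shape == (num_px * num_px * 3, m_test)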
End of explanation
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(A2, Y)
### END CODE HERE ###
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Print the cost every 100 training example
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network
You will then compare the performance of these models, and also try out different values for $L$.
Let's look at the two architectures.
3.1 - 2-layer neural network
<img src="images/2layerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption>
<u>Detailed Architecture of figure 2</u>:
- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
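In equation form, the forward pass just described is:
$$Z^{[1]} = W^{[1]} X + b^{[1]}, \qquad A^{[1]} = \mathrm{ReLU}(Z^{[1]})$$
$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}, \qquad A^{[2]} = \sigma(Z^{[2]}) = \hat{Y}$$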
3.2 - L-layer deep neural network
It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
<img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption>
<u>Detailed Architecture of figure 3</u>:
- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.
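Written out for a general layer $l$ (with $A^{[0]} = X$), the forward pass is:
$$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad A^{[l]} = \mathrm{ReLU}(Z^{[l]}) \quad \text{for } l = 1, \dots, L-1,$$
$$Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}, \qquad A^{[L]} = \sigma(Z^{[L]}).$$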
3.3 - General methodology
As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
a. Forward propagation
b. Compute cost function
c. Backward propagation
d. Update parameters (using parameters, and grads from backprop)
3. Use trained parameters to predict labels
Let's now implement those two models!
4 - Two-layer neural network
Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
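These helpers are not reproduced in this notebook. As a reminder of roughly what they look like, here is a minimal sketch of initialize_parameters (assuming, as in the previous assignment, small random Gaussian weights scaled by 0.01 and zero biases; this is an illustration, not the graded version):
python
import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    # Rough sketch of the previous assignment's helper.
    np.random.seed(1)
    W1 = np.random.randn(n_h, n_x) * 0.01   # weight matrix of shape (n_h, n_x)
    b1 = np.zeros((n_h, 1))                 # bias vector of shape (n_h, 1)
    W2 = np.random.randn(n_y, n_h) * 0.01   # weight matrix of shape (n_y, n_h)
    b2 = np.zeros((n_y, 1))                 # bias vector of shape (n_y, 1)
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}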
End of explanation
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
Explanation: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
predictions_train = predict(train_x, train_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.693049735659989 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.6464283150388817 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.04950829635846386 </td>
</tr>
</table>
Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
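The predict helper is provided by the assignment rather than implemented here; a rough sketch of what it does (assuming it forward-propagates, thresholds the final sigmoid activation at 0.5, and prints the accuracy) is:
python
import numpy as np

def predict(X, y, parameters):
    # Hypothetical sketch of the provided helper, not the official implementation.
    AL, _ = L_model_forward(X, parameters)   # also works for the 2-layer parameters
    predictions = (AL > 0.5).astype(int)     # threshold the sigmoid output at 0.5
    print("Accuracy: " + str(np.mean(predictions == y)))
    return predictions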
End of explanation
predictions_test = predict(test_x, test_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 1.0 </td>
</tr>
</table>
End of explanation
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 5-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
Implements an L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization.
### START CODE HERE ###
parameters = initialize_parameters_deep(layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X, parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL, Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL, Y, caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Print the cost every 100 iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 0.72 </td>
</tr>
</table>
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
5 - L-layer Neural Network
Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters_deep(layer_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
Explanation: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
pred_train = predict(train_x, train_y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.771749 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.673350 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.082077 </td>
</tr>
</table>
End of explanation
pred_test = predict(test_x, test_y, parameters)
Explanation: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
1.0
</td>
</tr>
</table>
End of explanation
# print_mislabeled_images(classes, test_x, test_y, pred_test)
Explanation: Expected Output:
<table>
<tr>
<td> **Test Accuracy**</td>
<td> 0.84 </td>
</tr>
</table>
Congrats! It seems that your 5-layer neural network has better performance (84%) than your 2-layer neural network (72%) on the same test set.
This is good performance for this task. Nice job!
Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course).
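As a small preview of that kind of search, a minimal sketch of a learning-rate sweep using the L_layer_model and predict helpers above could look like this (the candidate rates and iteration count are arbitrary illustrative values, not values prescribed by the course):
python
# Hypothetical learning-rate sweep; rates and iteration count are illustrative only.
for lr in (0.01, 0.0075, 0.005, 0.001):
    print("learning_rate =", lr)
    params = L_layer_model(train_x, train_y, layers_dims,
                           learning_rate=lr, num_iterations=1000, print_cost=False)
    _ = predict(test_x, test_y, params)   # predict prints the test accuracy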
6) Results Analysis
First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
End of explanation
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: A few types of images the model tends to do poorly on include:
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)
7) Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation |
2,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step2: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
import math as math
def well2d(x, y, nx, ny, L=1.0):
Compute the 2d quantum well wave function.
wave_funct = (2/L)*np.sin((nx*math.pi*x)/L)*np.sin((ny*math.pi*y)/L)
return wave_funct
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
X, Y = np.meshgrid(np.linspace(0,1,10), np.linspace(0,1,10))
psi = well2d(X, Y, 3, 2, 1)
f = plt.figure(figsize=(7,5))
plt.contour(X, Y, psi, cmap='hsv')  # pass X and Y so the axes are in physical units
plt.xlim(0,1)
plt.ylim(0,1)
plt.title('Contour Plot of 2d Quantum Well Wave Function')
plt.xlabel('x')
plt.ylabel('y')
plt.tick_params(right=False,top=False)
plt.colorbar()
assert True # use this cell for grading the contour plot
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=1$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to your visualization.
First make a plot using one of the contour functions:
End of explanation
f = plt.figure(figsize=(7,5))
plt.pcolor(X, Y, psi, cmap='jet')  # pass X and Y so the axes are in physical units
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(0,1)
plt.ylim(0,1)
plt.title('PseudoColor Plot of 2d Quantum Well Wave Function')
plt.tick_params(right=False,top=False)
plt.colorbar()
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
2,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Floating-point precision
Following a couple of questions asked in class, let's look at the precision of "real" numbers in Python.
Floats in Python correspond to doubles in C, so they are double-precision numbers occupying 64 bits in memory. This introduces a rounding error that must be taken into account wherever high numerical precision is required. For example, keep in mind that
Step1: in other words, the difference is not zero. This matters especially when comparing two floats
Step2: If higher precision is required, two different libraries can be used
Step3: Integer division
In Python there are two operators for dividing two numbers $a$ and $b$
Step4: Sympy
Step5: Accessing the file system
Inside the notebook you can always access the file system using Linux commands.
EXAMPLE | Python Code:
0.1+0.1+0.1-0.3
Explanation: Floating-point precision
Following a couple of questions asked in class, let's look at the precision of "real" numbers in Python.
Floats in Python correspond to doubles in C, so they are double-precision numbers occupying 64 bits in memory. This introduces a rounding error that must be taken into account wherever high numerical precision is required. For example, keep in mind that:
End of explanation
0.1 + 0.1 + 0.1 == 0.3
Explanation: in other words, the difference is not zero. This matters especially when comparing two floats:
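As a side note (not part of the original lesson), a common tolerance-based way to compare floats despite rounding error is math.isclose:
import math
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))  # True: equal within the default relative tolerance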
End of explanation
from fractions import Fraction
a = Fraction(1,10)
b = Fraction(1,10)
c = Fraction(1,10)
d = Fraction(3,10)
a + b + c - d == 0
Explanation: If higher precision is required, two different libraries can be used:
decimal - Decimal fixed point and floating point arithmetic
fractions - Rational numbers
Let's look at an example with the second library.
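For completeness, a quick sketch with the first library (decimal) as well; here the sum really is exact:
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3"))   # 0.0
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True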
End of explanation
a = 2.41
b = 2.42
print(a/b)
print(a//b)
print(b//a)
Explanation: Integer division
In Python there are two operators for dividing two numbers $a$ and $b$:
/ performs the division like an ordinary calculator (up to floating-point precision)
// is called the floor division operator and performs "integer" division, i.e. $\lfloor \frac{a}{b} \rfloor$
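A quick way to see how the two operators relate (together with the % remainder) is the floor-division identity:
a, b = 7, 3
print(a / b)    # 2.3333333333333335 (true division)
print(a // b)   # 2 (floor division)
print(a % b)    # 1 (remainder)
print(a == (a // b) * b + a % b)  # True: the identity linking the two operators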
End of explanation
from sympy import *
x,y = symbols('x y')
init_printing(use_unicode=True)
diff(sin(x)*exp(-x**2), x)
integrate(cos(x), (x,0,3))
Explanation: Sympy: symbolic mathematics
Sympy is a Python library for performing symbolic mathematical computations. For more detail on how the library was developed, see the article SymPy: symbolic computing in Python, or go directly to the library's documentation.
A couple of examples follow to give an idea of what can be computed with Sympy.
End of explanation
ls
cd ..
cd Programmazione2/
Explanation: Accessing the file system
Inside the notebook you can always access the file system using Linux commands.
EXAMPLE:
End of explanation |
2,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining the Problem
Here we will derive the equations of motion for the classic mass-spring-damper system under the influence of gravity. The following figure gives a pictorial description of the problem.
Step1: Start by loading in the core functionality of both SymPy and Mechanics.
Step2: We can make use of the pretty printing of our results by loading SymPy's printing extension, in particular we will use the vector printing which is nice for mechanics objects.
Step3: We'll start by defining the variables we will need for this problem
Step4: Now, we define a Newtonian reference frame that represents the ceiling which the particle is attached to, $C$.
Step5: We will need two points, one to represent the original position of the particle which stays fixed in the ceiling frame, $O$, and the second one, $P$ which is aligned with the particle as it moves.
Step6: The velocity of point $O$ in the ceiling is zero.
Step7: Point $P$ can move downward in the $y$ direction and its velocity is specified as $v$ in the downward direction.
Step8: There are three forces acting on the particle. Those due to the acceleration of gravity, the damper, and the spring.
Step9: Now we can use Newton's second law, $0=F-ma$, to form the equation of motion of the system.
Step10: We can then form the first order equations of motion by solving for $\frac{dv}{dt}$ and introducing the kinematical differential equation, $v=\frac{dx}{dt}$.
Step11: Forming the equations of motion can also be done with the automated methods available in the Mechanics package
Step12: Now we can construct a KanesMethod object by passing in the generalized coordinate, $x$, the generalized speed, $v$, and the kinematical differential equation which relates the two, $0=v-\frac{dx}{dt}$.
Step13: Now Kane's equations can be computed, and we can obtain $F_r$ and $F_r^*$.
Step14: The equations are also available in the form $M\frac{d}{dt}[q,u]^T=f(q, u)$ and we can extract the mass matrix, $M$, and the forcing functions, $f$.
Step15: Finally, we can form the first order differential equations of motion $\frac{d}{dt}[q,u]^T=M^{-1}f(\dot{u}, u, q)$, which is the same as previously found.
Step16: Simulating the system
Now that we have defined the mass-spring-damper system, we are going to simulate it.
PyDy's System is a wrapper that holds the Kanes object to integrate the equations of motion using numerical values of constants.
Step17: Now, we specify the numerical values of the constants and the initial values of states in the form of a dict.
Step18: We must generate a time vector over which the integration will be carried out. NumPy's linspace is often useful for this.
Step19: The trajectory of the states over time can be found by calling the .integrate() method.
Step20: Visualizing the System
PyDy has a native module pydy.viz which is used to visualize a System in an interactive 3D GUI.
Step21: For visualizing the system, we need to create shapes for the objects we wish to visualize, and map each of them
to a VisualizationFrame, which holds the position and orientation of the object. First create a sphere to represent the bob and attach it to the point $P$ and the ceiling reference frame (the sphere does not rotate with respect to the ceiling).
Step22: Now create a circular disc that represents the ceiling and fix it to the ceiling reference frame. The circle's default axis is aligned with its local $y$ axis, so we need to attach it to a rotated ceiling reference frame if we want the circle's axis to align with the $\hat{c}_x$ unit vector.
Step23: Now we initialize a Scene. A Scene contains all the information required to visualize a System onto a canvas.
It takes a ReferenceFrame and Point as arguments.
Step24: We provide the VisualizationFrames, which we want to visualize as a list to scene.
Step25: The default camera of Scene has the z axis of the base frame pointing out of the screen, and the y axis pointing up. We want the x axis to point downwards, so we supply a new camera that will achieve this.
Step26: The generate_visualization_json_system method generates the required data for the animations, in the form of JSON files. These JSON files are needed before calling the display methods.
Step27: Now, we call the display method. | Python Code:
from IPython.display import SVG
SVG(filename='mass_spring_damper.svg')
Explanation: Defining the Problem
Here we will derive the equations of motion for the classic mass-spring-damper system under the influence of gravity. The following figure gives a pictorial description of the problem.
End of explanation
import sympy as sym
import sympy.physics.mechanics as me
Explanation: Start by loading in the core functionality of both SymPy and Mechanics.
End of explanation
from sympy.physics.vector import init_vprinting
init_vprinting()
Explanation: We can make use of the pretty printing of our results by loading SymPy's printing extension, in particular we will use the vector printing which is nice for mechanics objects.
End of explanation
x, v = me.dynamicsymbols('x v')
m, c, k, g, t = sym.symbols('m c k g t')
Explanation: We'll start by defining the variables we will need for this problem:
- $x(t)$: distance of the particle from the ceiling
- $v(t)$: speed of the particle
- $m$: mass of the particle
- $c$: damping coefficient of the damper
- $k$: stiffness of the spring
- $g$: acceleration due to gravity
- $t$: time
End of explanation
ceiling = me.ReferenceFrame('C')
Explanation: Now, we define a Newtonian reference frame that represents the ceiling which the particle is attached to, $C$.
End of explanation
O = me.Point('O')
P = me.Point('P')
Explanation: We will need two points, one to represent the original position of the particle which stays fixed in the ceiling frame, $O$, and the second one, $P$ which is aligned with the particle as it moves.
End of explanation
O.set_vel(ceiling, 0)
Explanation: The velocity of point $O$ in the ceiling is zero.
End of explanation
P.set_pos(O, x * ceiling.x)
P.set_vel(ceiling, v * ceiling.x)
P.vel(ceiling)
Explanation: Point $P$ can move downward in the $y$ direction and its velocity is specified as $v$ in the downward direction.
End of explanation
damping = -c * P.vel(ceiling)
stiffness = -k * P.pos_from(O)
gravity = m * g * ceiling.x
forces = damping + stiffness + gravity
forces
Explanation: There are three forces acting on the particle. Those due to the acceleration of gravity, the damper, and the spring.
End of explanation
zero = me.dot(forces - m * P.acc(ceiling), ceiling.x)
zero
Explanation: Now we can use Newton's second law, $0=F-ma$, to form the equation of motion of the system.
End of explanation
dv_by_dt = sym.solve(zero, v.diff(t))[0]
dx_by_dt = v
dv_by_dt, dx_by_dt
Explanation: We can then form the first order equations of motion by solving for $\frac{dv}{dt}$ and introducing the kinematical differential equation, $v=\frac{dx}{dt}$.
End of explanation
mass = me.Particle('mass', P, m)
Explanation: Forming the equations of motion can also be done with the automated methods available in the Mechanics package: LagrangesMethod and KanesMethod. Here we will make use of Kane's method to find the same equations of motion that we found manually above. First, define a particle that represents the mass attached to the damper and spring.
End of explanation
kane = me.KanesMethod(ceiling, q_ind=[x], u_ind=[v], kd_eqs=[v - x.diff(t)])
Explanation: Now we can construct a KanesMethod object by passing in the generalized coordinate, $x$, the generalized speed, $v$, and the kinematical differential equation which relates the two, $0=v-\frac{dx}{dt}$.
End of explanation
fr, frstar = kane.kanes_equations([(P, forces)], [mass])
fr, frstar
Explanation: Now Kane's equations can be computed, and we can obtain $F_r$ and $F_r^*$.
End of explanation
M = kane.mass_matrix_full
f = kane.forcing_full
M, f
Explanation: The equations are also available in the form $M\frac{d}{dt}[q,u]^T=f(q, u)$ and we can extract the mass matrix, $M$, and the forcing functions, $f$.
End of explanation
M.inv() * f
Explanation: Finally, we can form the first order differential equations of motion $\frac{d}{dt}[q,u]^T=M^{-1}f(\dot{u}, u, q)$, which is the same as previously found.
End of explanation
from pydy.system import System
sys = System(kane)
Explanation: Simulating the system
Now that we have defined the mass-spring-damper system, we are going to simulate it.
PyDy's System is a wrapper that holds the Kanes object to integrate the equations of motion using numerical values of constants.
End of explanation
sys.constants = {m:10.0, g:9.8, c:5.0, k:10.0}
sys.initial_conditions = {x:0.0, v:0.0}
Explanation: Now, we specify the numerical values of the constants and the initial values of states in the form of a dict.
End of explanation
from numpy import linspace
sys.times = linspace(0.0, 10.0, 100)
Explanation: We must generate a time vector over which the integration will be carried out. NumPy's linspace is often useful for this.
End of explanation
x_trajectory = sys.integrate()
Explanation: The trajectory of the states over time can be found by calling the .integrate() method.
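As an optional sanity check (assuming matplotlib is installed; it is not imported elsewhere in this notebook), the trajectory can be plotted; the columns of the returned array are $x$ and $v$:
import matplotlib.pyplot as plt
plt.plot(sys.times, x_trajectory[:, 0], label='x (position)')
plt.plot(sys.times, x_trajectory[:, 1], label='v (speed)')
plt.xlabel('time [s]')
plt.legend()
plt.show()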
End of explanation
from pydy.viz import *
Explanation: Visualizing the System
PyDy has a native module pydy.viz which is used to visualize a System in an interactive 3D GUI.
End of explanation
bob = Sphere(2.0, color="red", material="metal")
bob_vframe = VisualizationFrame(ceiling, P, bob)
Explanation: For visualizing the system, we need to create shapes for the objects we wish to visualize, and map each of them
to a VisualizationFrame, which holds the position and orientation of the object. First create a sphere to represent the bob and attach it to the point $P$ and the ceiling reference frame (the sphere does not rotate with respect to the ceiling).
End of explanation
ceiling_circle = Circle(radius=10, color="white", material="metal")
from numpy import pi
rotated = ceiling.orientnew("C_R", 'Axis', [pi / 2, ceiling.z])
ceiling_vframe = VisualizationFrame(rotated, O, ceiling_circle)
Explanation: Now create a circular disc that represents the ceiling and fix it to the ceiling reference frame. The circle's default axis is aligned with its local $y$ axis, so we need to attach it to a rotated ceiling reference frame if we want the circle's axis to align with the $\hat{c}_x$ unit vector.
End of explanation
scene = Scene(ceiling, O)
Explanation: Now we initialize a Scene. A Scene contains all the information required to visualize a System onto a canvas.
It takes a ReferenceFrame and Point as arguments.
End of explanation
scene.visualization_frames = [bob_vframe, ceiling_vframe]
Explanation: We provide the VisualizationFrames that we want to visualize, as a list, to the scene.
End of explanation
camera_frame = ceiling.orientnew('Camera Frame','Axis', [pi / 2, ceiling.z])
camera_point = O.locatenew('Camera Location', 100 * camera_frame.z)
primary_camera = PerspectiveCamera(camera_frame, camera_point)
scene.cameras = [primary_camera]
Explanation: The default camera of Scene has the z axis of the base frame pointing out of the screen, and the y axis pointing up. We want the x axis to point downwards, so we supply a new camera that will achieve this.
End of explanation
scene.generate_visualization_json_system(sys)
Explanation: The generate_visualization_json_system method generates the required data for the animations, in the form of JSON files. These JSON files are needed before calling the display methods.
End of explanation
scene.display_ipython()
Explanation: Now, we call the display method.
End of explanation |
2,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding similar documents with Word2Vec and Soft Cosine Measure
Soft Cosine Measure (SCM) [1, 3] is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. In part 1, we will show how you can compute SCM between two documents using the inner_product method. In part 2, we will use SoftCosineSimilarity to retrieve documents most similar to a query and compare the performance against other similarity measures.
First, however, we go through the basics of what Soft Cosine Measure is.
Soft Cosine Measure basics
Soft Cosine Measure (SCM) is a method that allows us to assess the similarity between two documents in a meaningful way, even when they have no words in common. It uses a measure of similarity between words, which can be derived [2] using word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in the semantic text similarity task in the context of community question answering [2].
SCM is illustrated below for two very similar sentences. The sentences have no words in common, but by modeling synonymy, SCM is able to accurately measure the similarity between the two sentences. The method also uses the bag-of-words vector representation of the documents (simply put, the word's frequencies in the documents). The intuition behind the method is that we compute standard cosine similarity assuming that the document vectors are expressed in a non-orthogonal basis, where the angle between two basis vectors is derived from the angle between the word2vec embeddings of the corresponding words.
This method was perhaps first introduced in the article "Soft Measure and Soft Cosine Measure
Step1: Part 1
Step2: The first two sentences have very similar content, and as such the SCM should be large. Before we compute the SCM, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
Step3: Now, as we mentioned earlier, we will be using some downloaded pre-trained embeddings. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the inner_product method.
Step4: Let's compute SCM using the inner_product method.
Step5: Let's try the same thing with two completely unrelated sentences. Notice that the similarity is smaller.
Step6: Part 2
Step7: Using the corpus we have just built, we will now construct a dictionary, a TF-IDF model, a word2vec model, and a term similarity matrix.
Step8: Evaluation
Next, we will load the validation and test datasets that were used by the SemEval 2016 and 2017 contestants. The datasets contain 208 original questions posted by the forum members. For each question, there is a list of 10 threads with a human annotation denoting whether or not the thread is relevant to the original question. Our task will be to order the threads so that relevant threads rank above irrelevant threads.
Step9: Finally, we will perform an evaluation to compare three unsupervised similarity measures: the Soft Cosine Measure, two different implementations of the Word Mover's Distance, and standard cosine similarity. We will use the Mean Average Precision (MAP) as an evaluation measure and 10-fold cross-validation to get an estimate of the variance of MAP for each similarity measure.
Step10: The table below shows the pointwise estimates of means and standard variances for MAP scores and elapsed times. Baselines and winners for each year are displayed in bold. We can see that the Soft Cosine Measure gives a strong performance on both the 2016 and the 2017 dataset. | Python Code:
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Finding similar documents with Word2Vec and Soft Cosine Measure
Soft Cosine Measure (SCM) [1, 3] is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. In part 1, we will show how you can compute SCM between two documents using the inner_product method. In part 2, we will use SoftCosineSimilarity to retrieve documents most similar to a query and compare the performance against other similarity measures.
First, however, we go through the basics of what Soft Cosine Measure is.
Soft Cosine Measure basics
Soft Cosine Measure (SCM) is a method that allows us to assess the similarity between two documents in a meaningful way, even when they have no words in common. It uses a measure of similarity between words, which can be derived [2] using word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in the semantic text similarity task in the context of community question answering [2].
SCM is illustrated below for two very similar sentences. The sentences have no words in common, but by modeling synonymy, SCM is able to accurately measure the similarity between the two sentences. The method also uses the bag-of-words vector representation of the documents (simply put, the word's frequencies in the documents). The intuition behind the method is that we compute standard cosine similarity assuming that the document vectors are expressed in a non-orthogonal basis, where the angle between two basis vectors is derived from the angle between the word2vec embeddings of the corresponding words.
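Written out, with $a$ and $b$ the bag-of-words vectors of the two documents and $s_{ij}$ the similarity between terms $i$ and $j$, the measure is
$$\mathrm{softcossim}(a, b) = \frac{\sum_{i,j} s_{ij}\, a_i b_j}{\sqrt{\sum_{i,j} s_{ij}\, a_i a_j}\ \sqrt{\sum_{i,j} s_{ij}\, b_i b_j}},$$
which reduces to the ordinary cosine similarity when the term similarity matrix is the identity.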
This method was perhaps first introduced in the article "Soft Measure and Soft Cosine Measure: Measure of Features in Vector Space Model" by Grigori Sidorov, Alexander Gelbukh, Helena Gomez-Adorno, and David Pinto (link to PDF).
In this tutorial, we will learn how to use Gensim's SCM functionality, which consists of the inner_product method for one-off computation, and the SoftCosineSimilarity class for corpus-based similarity queries.
Note:
If you use this software, please consider citing [1], [2], and [3].
Running this notebook
You can download this Jupyter notebook, and run it on your own computer, provided you have installed the gensim, jupyter, sklearn, pyemd, and wmd Python packages.
The notebook was run on an Ubuntu machine with an Intel core i7-6700HQ CPU 3.10GHz (4 cores) and 16 GB memory. Assuming all resources required by the notebook have already been downloaded, running the entire notebook on this machine takes about 30 minutes.
End of explanation
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
sentence_orange = 'Having a tough time finding an orange juice press machine?'.lower().split()
Explanation: Part 1: Computing the Soft Cosine Measure
To use SCM, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will use pre-trained word2vec embeddings.
Let's create some sentences to compare.
End of explanation
!pip install nltk
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
sentence_orange = [w for w in sentence_orange if w not in stop_words]
# Prepare a dictionary and a corpus.
from gensim import corpora
documents = [sentence_obama, sentence_president, sentence_orange]
dictionary = corpora.Dictionary(documents)
# Convert the sentences into bag-of-words vectors.
sentence_obama = dictionary.doc2bow(sentence_obama)
sentence_president = dictionary.doc2bow(sentence_president)
sentence_orange = dictionary.doc2bow(sentence_orange)
Explanation: The first two sentences have very similar content, and as such the SCM should be large. Before we compute the SCM, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
End of explanation
%%time
import gensim.downloader as api
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
w2v_model = api.load("glove-wiki-gigaword-50")
similarity_index = WordEmbeddingSimilarityIndex(w2v_model)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary)
Explanation: Now, as we mentioned earlier, we will be using some downloaded pre-trained embeddings. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the inner_product method.
End of explanation
similarity = similarity_matrix.inner_product(sentence_obama, sentence_president, normalized=True)
print('similarity = %.4f' % similarity)
Explanation: Let's compute SCM using the inner_product method.
End of explanation
similarity = similarity_matrix.inner_product(sentence_obama, sentence_orange, normalized=True)
print('similarity = %.4f' % similarity)
Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the similarity is smaller.
End of explanation
%%time
from itertools import chain
import json
from re import sub
from os.path import isfile
import gensim.downloader as api
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords
from nltk import download
download("stopwords") # Download stopwords list.
stopwords = set(stopwords.words("english"))
def preprocess(doc):
doc = sub(r'<img[^<>]+(>|$)', " image_token ", doc)
doc = sub(r'<[^<>]+(>|$)', " ", doc)
doc = sub(r'\[img_assist[^]]*?\]', " ", doc)
doc = sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', " url_token ", doc)
return [token for token in simple_preprocess(doc, min_len=0, max_len=float("inf")) if token not in stopwords]
corpus = list(chain(*[
chain(
[preprocess(thread["RelQuestion"]["RelQSubject"]), preprocess(thread["RelQuestion"]["RelQBody"])],
[preprocess(relcomment["RelCText"]) for relcomment in thread["RelComments"]])
for thread in api.load("semeval-2016-2017-task3-subtaskA-unannotated")]))
print("Number of documents: %d" % len(corpus))
Explanation: Part 2: Similarity queries using SoftCosineSimilarity
You can use SCM to get the most similar documents to a query, using the SoftCosineSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial.
Qatar Living unannotated dataset
Contestants solving the community question answering task in the SemEval 2016 and 2017 competitions had an unannotated dataset of 189,941 questions and 1,894,456 comments from the Qatar Living discussion forums. As our first step, we will use the same dataset to build a corpus.
End of explanation
%%time
from multiprocessing import cpu_count
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.models import Word2Vec
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.similarities import SparseTermSimilarityMatrix
dictionary = Dictionary(corpus)
tfidf = TfidfModel(dictionary=dictionary)
w2v_model = Word2Vec(corpus, workers=cpu_count(), min_count=5, size=300, seed=12345)
similarity_index = WordEmbeddingSimilarityIndex(w2v_model.wv)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary, tfidf, nonzero_limit=100)
Explanation: Using the corpus we have just built, we will now construct a dictionary, a TF-IDF model, a word2vec model, and a term similarity matrix.
End of explanation
datasets = api.load("semeval-2016-2017-task3-subtaskBC")
Explanation: Evaluation
Next, we will load the validation and test datasets that were used by the SemEval 2016 and 2017 contestants. The datasets contain 208 original questions posted by the forum members. For each question, there is a list of 10 threads with a human annotation denoting whether or not the thread is relevant to the original question. Our task will be to order the threads so that relevant threads rank above irrelevant threads.
End of explanation
!pip install wmd
!pip install sklearn
!pip install pyemd
from math import isnan
from time import time
from gensim.similarities import MatrixSimilarity, WmdSimilarity, SoftCosineSimilarity
import numpy as np
from sklearn.model_selection import KFold
from wmd import WMD
def produce_test_data(dataset):
for orgquestion in datasets[dataset]:
query = preprocess(orgquestion["OrgQSubject"]) + preprocess(orgquestion["OrgQBody"])
documents = [
preprocess(thread["RelQuestion"]["RelQSubject"]) + preprocess(thread["RelQuestion"]["RelQBody"])
for thread in orgquestion["Threads"]]
relevance = [
thread["RelQuestion"]["RELQ_RELEVANCE2ORGQ"] in ("PerfectMatch", "Relevant")
for thread in orgquestion["Threads"]]
yield query, documents, relevance
def cossim(query, documents):
# Compute cosine similarity between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = MatrixSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
num_features=len(dictionary))
similarities = index[query]
return similarities
def softcossim(query, documents):
# Compute Soft Cosine Measure between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = SoftCosineSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
similarity_matrix)
similarities = index[query]
return similarities
def wmd_gensim(query, documents):
# Compute Word Mover's Distance as implemented in PyEMD by William Mayner
# between the query and the documents.
index = WmdSimilarity(documents, w2v_model)
similarities = index[query]
return similarities
def wmd_relax(query, documents):
# Compute Word Mover's Distance as implemented in WMD by Source{d}
# between the query and the documents.
words = [word for word in set(chain(query, *documents)) if word in w2v_model.wv]
indices, words = zip(*sorted((
(index, word) for (index, _), word in zip(dictionary.doc2bow(words), words))))
query = dict(tfidf[dictionary.doc2bow(query)])
query = [
(new_index, query[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in query]
documents = [dict(tfidf[dictionary.doc2bow(document)]) for document in documents]
documents = [[
(new_index, document[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in document] for document in documents]
embeddings = np.array([w2v_model.wv[word] for word in words], dtype=np.float32)
nbow = dict(((index, list(chain([None], zip(*document)))) for index, document in enumerate(documents)))
nbow["query"] = tuple([None] + list(zip(*query)))
distances = WMD(embeddings, nbow, vocabulary_min=1).nearest_neighbors("query")
similarities = [-distance for _, distance in sorted(distances)]
return similarities
strategies = {
"cossim" : cossim,
"softcossim": softcossim,
"wmd-gensim": wmd_gensim,
"wmd-relax": wmd_relax}
def evaluate(split, strategy):
# Perform a single round of evaluation.
results = []
start_time = time()
for query, documents, relevance in split:
similarities = strategies[strategy](query, documents)
assert len(similarities) == len(documents)
precision = [
(num_correct + 1) / (num_total + 1) for num_correct, num_total in enumerate(
num_total for num_total, (_, relevant) in enumerate(
sorted(zip(similarities, relevance), reverse=True)) if relevant)]
average_precision = np.mean(precision) if precision else 0.0
results.append(average_precision)
return (np.mean(results) * 100, time() - start_time)
def crossvalidate(args):
# Perform a cross-validation.
dataset, strategy = args
test_data = np.array(list(produce_test_data(dataset)))
kf = KFold(n_splits=10)
samples = []
for _, test_index in kf.split(test_data):
samples.append(evaluate(test_data[test_index], strategy))
return (np.mean(samples, axis=0), np.std(samples, axis=0))
%%time
from multiprocessing import Pool
args_list = [
(dataset, technique)
for dataset in ("2016-test", "2017-test")
for technique in ("softcossim", "wmd-gensim", "wmd-relax", "cossim")]
with Pool() as pool:
results = pool.map(crossvalidate, args_list)
Explanation: Finally, we will perform an evaluation to compare three unsupervised similarity measures: the Soft Cosine Measure, two different implementations of the Word Mover's Distance, and standard cosine similarity. We will use the Mean Average Precision (MAP) as an evaluation measure and 10-fold cross-validation to get an estimate of the variance of MAP for each similarity measure.
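For reference, the average precision of a ranked list whose relevant items sit at ranks $r_1 < r_2 < \dots < r_k$ is
$$\mathrm{AP} = \frac{1}{k} \sum_{i=1}^{k} \frac{i}{r_i},$$
and MAP is the mean of AP over all queries (with AP taken as 0 when a query has no relevant threads); this is what the precision list comprehension inside evaluate computes.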
End of explanation
from IPython.display import display, Markdown
output = []
baselines = [
(("2016-test", "**Winner (UH-PRHLT-primary)**"), ((76.70, 0), (0, 0))),
(("2016-test", "**Baseline 1 (IR)**"), ((74.75, 0), (0, 0))),
(("2016-test", "**Baseline 2 (random)**"), ((46.98, 0), (0, 0))),
(("2017-test", "**Winner (SimBow-primary)**"), ((47.22, 0), (0, 0))),
(("2017-test", "**Baseline 1 (IR)**"), ((41.85, 0), (0, 0))),
(("2017-test", "**Baseline 2 (random)**"), ((29.81, 0), (0, 0)))]
table_header = ["Dataset | Strategy | MAP score | Elapsed time (sec)", ":---|:---|:---|---:"]
for row, ((dataset, technique), ((mean_map_score, mean_duration), (std_map_score, std_duration))) \
in enumerate(sorted(chain(zip(args_list, results), baselines), key=lambda x: (x[0][0], -x[1][0][0]))):
if row % (len(strategies) + 3) == 0:
output.extend(chain(["\n"], table_header))
map_score = "%.02f ±%.02f" % (mean_map_score, std_map_score)
duration = "%.02f ±%.02f" % (mean_duration, std_duration) if mean_duration else ""
output.append("%s|%s|%s|%s" % (dataset, technique, map_score, duration))
display(Markdown('\n'.join(output)))
Explanation: The table below shows the pointwise estimates of means and standard deviations for MAP scores and elapsed times. Baselines and winners for each year are displayed in bold. We can see that the Soft Cosine Measure gives a strong performance on both the 2016 and the 2017 dataset.
End of explanation |
2,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the "new wave" of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec("king") - vec("man") + vec("woman") =~ vec("queen"), or vec("Montreal Canadiens") - vec("Montreal") + vec("Toronto") resembles the vector for "Toronto Maple Leafs".
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim's word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings)
Step1: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM
Step2: Say we want to further preprocess the words from the files: convert to unicode, lowercase, remove numbers, extract named entities... All of this can be done inside the MySentences iterator and word2vec doesn't need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users
Step3: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
Step4: Training
Word2Vec accepts several parameters that affect both training speed and quality.
One of them is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there's not enough data to make any meaningful training on those words, so it's best to ignore them
Step5: Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
The last of the major parameters (full list here) is for training parallelization, to speed up training
Step6: The workers parameter only has an effect if you have Cython installed. Without Cython, you'll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There's a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task, so there's no good way to objectively evaluate the result. Evaluation depends on your end application.
Google have released their testing set of about 20,000 syntactic and semantic test examples, following the "A is to B as C is to D" task. It is provided in the 'datasets' folder.
For example a syntactic analogy of comparative type is bad
Step7: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, coast and shore are very similar as they appear in the same context. At the same time clothes and closet are less similar because they are related but not interchangeable.
Step8: Once again, good performance on Google's or WS-353 test set doesn't mean word2vec will work well in your application, or vice versa. It's always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods
Step9: which uses pickle internally, optionally mmap'ing the model's internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats
Step10: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it's not possible to resume training with models generated by the C tool, load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box
Step11: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis | Python Code:
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
Explanation: Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the "new wave" of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec("king") - vec("man") + vec("woman") =~ vec("queen"), or vec("Montreal Canadiens") - vec("Montreal") + vec("Toronto") resembles the vector for "Toronto Maple Leafs".
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim's word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):
End of explanation
# create some toy data to use with the following example
import smart_open, os
if not os.path.exists('./data/'):
os.makedirs('./data/')
filenames = ['./data/f1.txt', './data/f2.txt']
for i, fname in enumerate(filenames):
with smart_open.smart_open(fname, 'w') as fout:
for line in sentences[i]:
fout.write(line + '\n')
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
for line in open(os.path.join(self.dirname, fname)):
yield line.split()
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)
print(model)
print(model.wv.vocab)
Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input must provide sentences sequentially, when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence...
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
End of explanation
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training
new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator
new_model.train(sentences) # can be a non-repeatable, 1-pass generator
print(new_model)
print(model.wv.vocab)
Explanation: Say we want to further preprocess the words from the files: convert to unicode, lowercase, remove numbers, extract named entities... All of this can be done inside the MySentences iterator and word2vec doesn't need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes. By the way, the default value is iter=5 to comply with Google's word2vec in C language.
1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
2. The second pass trains the neural model.
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you're able to initialize the vocabulary some other way:
End of explanation
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'
class MyText(object):
def __iter__(self):
for line in open(lee_train_file):
# assume there's one document per line, tokens separated by whitespace
yield line.lower().split()
sentences = MyText()
print(sentences)
Explanation: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):
End of explanation
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)
Explanation: Training
Word2Vec accepts several parameters that affect both training speed and quality.
One of them is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there's not enough data to make any meaningful training on those words, so it's best to ignore them:
End of explanation
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
Explanation: Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
The last of the major parameters (full list here) is for training parallelization, to speed up training:
End of explanation
model.accuracy('./datasets/questions-words.txt')
Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you'll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There's a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
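As a rough back-of-the-envelope check of the figure above (assuming the same 100,000-word vocabulary and size=200; purely illustrative):
vocab_size = 100000
vector_size = 200
bytes_per_float = 4    # single precision
matrices_in_ram = 3    # the three matrices described above
approx_mb = vocab_size * vector_size * bytes_per_float * matrices_in_ram / 1024.0 / 1024.0
print('approximate memory footprint: %.0f MB' % approx_mb)   # roughly 229 MB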
Evaluating
Word2Vec training is an unsupervised task; there's no good way to objectively evaluate the result. Evaluation depends on your end application.
Google has released a testing set of about 20,000 syntactic and semantic test examples, following the "A is to B as C is to D" task. It is provided in the 'datasets' folder.
For example, a syntactic analogy of the comparative type is bad:worse;good:?. There are a total of 9 types of syntactic comparisons in the dataset, such as plural nouns and nouns of opposite meaning.
The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
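A single analogy of this kind can also be queried directly through most_similar; on a corpus this small the result is mostly noise (and a KeyError is possible if a word was pruned from the vocabulary), so treat it purely as an illustration of the query pattern:
# "bad is to worse as good is to ?"  ->  vector('worse') - vector('bad') + vector('good')
model.most_similar(positive=['worse', 'good'], negative=['bad'], topn=1)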
Gensim supports the same evaluation set, in exactly the same format:
End of explanation
model.evaluate_word_pairs(test_data_dir +'wordsim353.tsv')
Explanation: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses the academic WS-353 dataset, but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, coast and shore are very similar as they appear in the same context. At the same time, clothes and closet are less similar because they are related but not interchangeable.
End of explanation
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
new_model = gensim.models.Word2Vec.load(temp_path) # open the model
Explanation: Once again, good performance on Google's or WS-353 test set doesn't mean word2vec will work well in your application, or vice versa. It's always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods:
End of explanation
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue',
'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences)
# cleaning up temp
os.close(fs)
os.remove(temp_path)
Explanation: which uses pickle internally, optionally mmap'ing the model's internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats:
model = gensim.models.Word2Vec.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.Word2Vec.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
Online training / Resuming training
Advanced users can load a model and continue training it with more sentences and new vocabulary words:
End of explanation
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)
model.doesnt_match("input is lunch he sentence cat".split())
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))
Explanation: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it's not possible to resume training with models generated by the C tool, load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box:
End of explanation
model['tree'] # raw NumPy vector of a word
Explanation: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
End of explanation |
2,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Machine Learning Model training and serving </h1>
The training architecture involves collecting data from the two sources, mashing them up with Dataflow and saving the results in BigQuery. That data is used for ML training in Cloud ML. Then, the trained model is used to orchestrate windmills.
<img src="training_arch.png" />
Step1: <h2> Step 1
Step3: <h2> 2. Preprocessing </h2>
Align the radar and windmill data temporally and compute moving averages.
Step4: <h2> Local preprocessing, training and prediction </h2>
Before we setup a large-scale ML pipeline, it is best practice to try out the code on a small dataset. This can be done on the machine on which you are running the Datalab notebook.
Our pipeline consists of the following steps
Step5: <h3> Training </h3>
Step6: <h3> Prediction </h3>
Step7: <h2> Cloud deploy model and predict </h2>
Now that we have a working model, we can turn the model loose and train on the full dataset. The model can then be deployed, which essentially puts it on the Cloud and attaches a REST API to it so that we can invoke the trained model from the windmills.
Step8: After the model is deployed, we will want to test it. An easy way to test a deployed ML model is to write out a file with some hypothetical inputs and then use gcloud to invoke the REST API.
<h2> Invoking REST API </h2>
Here is how the windmills invoke our model. Note that invoking a deployed ML model is just a REST API call. The windmill owners don't know anything about what ML model we are running and this way, those details are all hidden away. The input variables may be scaled during preprocessing, but again the client doesn't know any of that.
<img src="serving_arch.png"/>
This preserves maximum flexbility. | Python Code:
%projects set ml-autoawesome
import os
PROJECT = 'ml-autoawesome' # CHANGE THIS
BUCKET = 'ml-autoawesome-cmle' # CHANGE THIS
REGION = 'us-central1' # CHANGE THIS
os.environ['PROJECT'] = PROJECT # for bash
os.environ['BUCKET'] = BUCKET # for bash
os.environ['REGION'] = REGION # for bash
%bash
echo "project=$PROJECT"
echo "bucket=$BUCKET"
echo "region=$REGION"
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
#gcloud beta ml init-project -q
Explanation: <h1> Machine Learning Model training and serving </h1>
The training architecture involves collecting data from the two sources, mashing them up with Dataflow and saving the results in BigQuery. That data is used for ML training in Cloud ML. Then, the trained model is used to orchestrate windmills.
<img src="training_arch.png" />
End of explanation
import tensorflow as tf
import google.datalab.ml as ml
import apache_beam as beam
from tensorflow.python.lib.io import file_io
import json
import shutil
INDIR = '.'
OUTDIR = '.'
os.environ['OUTDIR'] = OUTDIR
Explanation: <h2> Step 1: Import Python packages that we need </h2>
End of explanation
def make_preprocessing_fn():
# stop-gap ...
def _scalar_to_vector(scalar):
# FeatureColumns expect shape (batch_size, 1), not just (batch_size)
return api.map(lambda x: tf.expand_dims(x, -1), scalar)
def preprocessing_fn(inputs):
result = {col: _scalar_to_vector(inputs[col]) for col in CSV_COLUMNS}
for name in SCALE_COLUMNS:
result[name] = _scalar_to_vector(mappers.scale_to_0_1(inputs[name]))
return result
return preprocessing_fn
def make_input_schema(mode):
input_schema = {}
if mode != tf.contrib.learn.ModeKeys.INFER:
input_schema[LABEL_COLUMN] = tf.FixedLenFeature(shape=[], dtype=tf.float32, default_value=0.0)
for name in ['dayofweek', 'key']:
input_schema[name] = tf.FixedLenFeature(shape=[], dtype=tf.string, default_value='null')
for name in ['hourofday']:
input_schema[name] = tf.FixedLenFeature(shape=[], dtype=tf.int64, default_value=0)
for name in SCALE_COLUMNS:
input_schema[name] = tf.FixedLenFeature(shape=[], dtype=tf.float32, default_value=0.0)
input_schema = dataset_schema.from_feature_spec(input_schema)
return input_schema
def make_coder(schema, mode):
import copy
column_names = copy.deepcopy(CSV_COLUMNS)
if mode == tf.contrib.learn.ModeKeys.INFER:
column_names.pop(LABEL_COLUMN)
coder = coders.CsvCoder(column_names, schema)
return coder
def preprocess_all(pipeline, training_data, eval_data, predict_data, output_dir, mode=tf.contrib.learn.ModeKeys.TRAIN):
path_constants = PathConstants()
work_dir = os.path.join(output_dir, path_constants.TEMP_DIR)
# create schema
input_schema = make_input_schema(mode)
# coder
coder = make_coder(input_schema, mode)
# 3) Read from text using the coder.
train_data = (
pipeline
| 'ReadTrainingData' >> beam.io.ReadFromText(training_data)
| 'ParseTrainingCsv' >> beam.Map(coder.decode))
evaluate_data = (
pipeline
| 'ReadEvalData' >> beam.io.ReadFromText(eval_data)
| 'ParseEvalCsv' >> beam.Map(coder.decode))
# metadata
input_metadata = dataset_metadata.DatasetMetadata(schema=input_schema)
_ = (input_metadata
| 'WriteInputMetadata' >> io.WriteMetadata(
os.path.join(output_dir, path_constants.RAW_METADATA_DIR),
pipeline=pipeline))
preprocessing_fn = make_preprocessing_fn()
(train_dataset, train_metadata), transform_fn = (
(train_data, input_metadata)
| 'AnalyzeAndTransform' >> tft.AnalyzeAndTransformDataset(
preprocessing_fn, work_dir))
# WriteTransformFn writes transform_fn and metadata to fixed subdirectories
# of output_dir, which are given by path_constants.TRANSFORM_FN_DIR and
# path_constants.TRANSFORMED_METADATA_DIR.
transform_fn_is_written = (transform_fn | io.WriteTransformFn(output_dir))
(evaluate_dataset, evaluate_metadata) = (
((evaluate_data, input_metadata), transform_fn)
| 'TransformEval' >> tft.TransformDataset())
train_coder = coders.ExampleProtoCoder(train_metadata.schema)
_ = (train_dataset
| 'SerializeTrainExamples' >> beam.Map(train_coder.encode)
| 'WriteTraining'
>> beam.io.WriteToTFRecord(
os.path.join(output_dir,
path_constants.TRANSFORMED_TRAIN_DATA_FILE_PREFIX),
file_name_suffix='.tfrecord.gz'))
evaluate_coder = coders.ExampleProtoCoder(evaluate_metadata.schema)
_ = (evaluate_dataset
| 'SerializeEvalExamples' >> beam.Map(evaluate_coder.encode)
| 'WriteEval'
>> beam.io.WriteToTFRecord(
os.path.join(output_dir,
path_constants.TRANSFORMED_EVAL_DATA_FILE_PREFIX),
file_name_suffix='.tfrecord.gz'))
if predict_data:
predict_mode = tf.contrib.learn.ModeKeys.INFER
predict_schema = make_input_schema(mode=predict_mode)
tsv_coder = make_coder(predict_schema, mode=predict_mode)
predict_coder = coders.ExampleProtoCoder(predict_schema)
_ = (pipeline
| 'ReadPredictData' >> beam.io.ReadFromText(predict_data,
coder=tsv_coder)
# TODO(b/35194257) Obviate the need for this explicit serialization.
| 'EncodePredictData' >> beam.Map(predict_coder.encode)
| 'WritePredictData' >> beam.io.WriteToTFRecord(
os.path.join(output_dir,
path_constants.TRANSFORMED_PREDICT_DATA_FILE_PREFIX),
file_name_suffix='.tfrecord.gz'))
# Workaround b/35366670, to ensure that training and eval don't start before
# the transform_fn is written.
train_dataset |= beam.Map(
lambda x, y: x, y=beam.pvalue.AsSingleton(transform_fn_is_written))
evaluate_dataset |= beam.Map(
lambda x, y: x, y=beam.pvalue.AsSingleton(transform_fn_is_written))
return transform_fn, train_dataset, evaluate_dataset
p = beam.Pipeline()
output_dataset = 'windmills_control'
transform_fn, train_dataset, eval_dataset = preprocess_all(
p, train_data_paths, eval_data_paths, predict_data_paths, output_dataset)
p.run()
import pandas as pd
import numpy as np
import datalab.bigquery as bq
# get data from BigQuery
query = """
SELECT
  speed, weight, angular_momentum, wind_dir, moisture_content, radar_reflectivity, radar_cap,
  radar_distance_to_nearest_cloud, radar_x, radar_y
FROM
  windmill_control
"""
df = bq.Query(query).to_dataframe()
# Add a unique key (needed for batch prediction)
df['key'] = 1000 + df.index.values
# Use Pandas to create 90% training & 10% evaluation
df = df.reindex(np.random.permutation(df.index))
trainsize = (len(df)*9)/10
df_train = df.head(trainsize)
df_eval = df.tail(len(df) - trainsize)
df_train.to_csv('cleanedup-train.csv', header=False, index_label=False, index=False)
df_eval.to_csv('cleanedup-eval.csv', header=False, index_label=False, index=False)
df.head()
%bash
ls -lrt cleanedup*
Explanation: <h2> 2. Preprocessing </h2>
Align the radar and windmill data temporally and compute moving averages.
End of explanation
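For reference, a moving average of the kind mentioned above can be computed with pandas once the joined records are in a DataFrame; the column name and window length below are illustrative assumptions, not values taken from the actual Dataflow job:
import pandas as pd
def add_moving_average(frame, column='radar_reflectivity', window=10):
    # rolling mean over the temporally aligned rows
    frame[column + '_ma'] = frame[column].rolling(window=window, min_periods=1).mean()
    return frame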
!rm -rf ml_preproc ml_trained
train_bq = ml.BigQueryDataSet(
table_pattern=('cleanedup-train*'),
schema_file=os.path.join('ml.json'))
sd.local_preprocess(
dataset=train_bq,
output_dir=os.path.join(OUTDIR, 'ml_preproc'),
)
file_io.write_string_to_file(os.path.join(OUTDIR, 'ml_preproc/transforms.json'),
json.dumps(transforms, indent=2))
!cat $OUTDIR/ml_preproc/num*json
Explanation: <h2> Local preprocessing, training and prediction </h2>
Before we set up a large-scale ML pipeline, it is best practice to try out the code on a small dataset. This can be done on the machine on which you are running the Datalab notebook.
Our pipeline consists of the following steps:
<ol>
<li> Preprocessing: this goes through the full dataset and computes min/max/mean of the input columns and target. These are useful as (a) defaults in case some input is missing in production (b) to scale the inputs, because some ML optimizers work better on scaled inputs.
<li> Training: this consists of adjusting weights on the model to reduce the error on the training dataset.
<li> Evaluation: how well does this model do on the validation dataset? In the packaged solution, we can pass in the validation dataset and the evaluation will be carried out periodically during training itself. We don't need to do this separately.
<li> Prediction: try out the trained model on some inputs to get an idea of what the model does in some hypothetical situation.
</ol>
<p/>
<h3> Preprocessing </h3>
End of explanation
eval_bq = ml.BigQueryDataSet(
file_pattern=('cleanedup-eval*'),
schema_file=os.path.join(OUTDIR, 'ml.json'))
shutil.rmtree(os.path.join(OUTDIR, 'ml_trained'), ignore_errors=True)
sd.local_train(
train_dataset=train_bq,
eval_dataset=eval_bq,
preprocess_output_dir=os.path.join(OUTDIR, 'ml_preproc'),
transforms=os.path.join(OUTDIR, 'ml_preproc/transforms.json'),
output_dir=os.path.join(OUTDIR, 'ml_trained'),
model_type='dnn_regression',
max_steps=2500,
layer_sizes=[1024]*4
)
%bash
ls $OUTDIR/ml_trained
Explanation: <h3> Training </h3>
End of explanation
import pandas as pd
df = pd.read_csv('{}/batch_predict/predictions-00000-of-00001.csv'.format(OUTDIR), names=('key','true_cost','predicted_cost'))
df['true_cost'] = df['true_cost'] * 20000
df['predicted_cost'] = df['predicted_cost'] * 20000
df.head()
import seaborn as sns
sns.jointplot(x='true_cost', y="predicted_cost", data=df, kind='hex');
Explanation: <h3> Prediction </h3>
End of explanation
%bash
MODEL_NAME="windmill"
MODEL_VERSION="v3"
MODEL_LOCATION="/content/autoawesome/notebooks/ml_trained/model"
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
gcloud beta ml versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
gcloud beta ml models delete ${MODEL_NAME}
gcloud beta ml models create ${MODEL_NAME} --regions $REGION
gcloud beta ml versions create ${MODEL_VERSION} --model ${MODEL_NAME} --staging-bucket gs://${BUCKET} --origin ${MODEL_LOCATION}
Explanation: <h2> Cloud deploy model and predict </h2>
Now that we have a working model, we can turn the model loose and train on the full dataset. The model can then be deployed, which essentially puts it on the Cloud and attaches a REST API to it so that we can invoke the trained model from the windmills.
End of explanation
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
#import google.cloud.ml.features as features
#from google.cloud.ml import session_bundle
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1beta1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1beta1_discovery.json')
request_data = {'instances':
[
# redacted to protect privacy of our windmill owners
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'windmills', 'v3')
response = api.projects().predict(body=request_data, name=parent).execute()
print "response={0}".format(response)
Explanation: After the model is deployed, we will want to test it. An easy way to test a deployed ML model is to write out a file with some hypothetical inputs and then use gcloud to invoke the REST API.
<h2> Invoking REST API </h2>
Here is how the windmills invoke our model. Note that invoking a deployed ML model is just a REST API call. The windmill owners don't know anything about what ML model we are running and this way, those details are all hidden away. The input variables may be scaled during preprocessing, but again the client doesn't know any of that.
<img src="serving_arch.png"/>
This preserves maximum flexibility.
End of explanation |
2,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q3
In this question, you'll go over some of the core terms and concepts in statistics.
Part A
Write a function, variance, which computes the variance of a list of numbers.
The function takes one argument
Step1: Part B
The lecture on statistics mentions latent variables, specifically how you cannot know what the underlying process is that's generating your data; all you have is the data, on which you have to impose certain assumptions in order to derive hypotheses about what generated the data in the first place.
To illustrate this, the code provided below generates sample data from distributions with mean and variance that are typically not known to you. Put another way, pretend you cannot see the mean (loc) and variance (scale) in the code that generates these samples; all you usually can see are the data samples themselves.
You'll use the numpy.mean and variance function you wrote in Part A to compute the statistics on the sample data itself and observe how these statistics change.
In the space provided, compute and print the mean and variance of each of the three samples | Python Code:
import numpy as np
np.random.seed(5987968)
x = np.random.random(8491)
v = x.var(ddof = 1)
np.testing.assert_allclose(v, variance(x))
np.random.seed(4159)
y = np.random.random(25)
w = y.var(ddof = 1)
np.testing.assert_allclose(w, variance(y))
Explanation: Q3
In this question, you'll go over some of the core terms and concepts in statistics.
Part A
Write a function, variance, which computes the variance of a list of numbers.
The function takes one argument: a list or 1D NumPy array of numbers. It returns one floating-point number: the variance of all the numbers.
Recall the formula for variance:
$$
variance = \frac{1}{N - 1} \sum_{i = 1}^{N} (x_i - \mu_x)^2
$$
where $N$ is the number of numbers in your list, $x_i$ is the number at index $i$ in the list, and $\mu_x$ is the average value of all the $x$ values.
You can use numpy.array and your numpy.mean functions, but no other NumPy functions or built-in Python functions other than range().
End of explanation
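For reference, one possible variance function that satisfies these constraints (using only numpy.array, numpy.mean, and range) is sketched below; it is an illustration, not necessarily the intended solution:
def variance(values):
    arr = np.array(values)
    mu = np.mean(arr)
    n = arr.shape[0]
    total = 0.0
    for i in range(n):
        total += (arr[i] - mu) ** 2
    return total / (n - 1)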
import numpy as np
np.random.seed(5735636)
sample1 = np.random.normal(loc = 10, scale = 5, size = 10)
sample2 = np.random.normal(loc = 10, scale = 5, size = 1000)
sample3 = np.random.normal(loc = 10, scale = 5, size = 1000000)
#########################
# DON'T MODIFY ANYTHING #
# ABOVE THIS BLOCK #
#########################
### BEGIN SOLUTION
### END SOLUTION
Explanation: Part B
The lecture on statistics mentions latent variables, specifically how you cannot know what the underlying process is that's generating your data; all you have is the data, on which you have to impose certain assumptions in order to derive hypotheses about what generated the data in the first place.
To illustrate this, the code provided below generates sample data from distributions with mean and variance that are typically not known to you. Put another way, pretend you cannot see the mean (loc) and variance (scale) in the code that generates these samples; all you usually can see are the data samples themselves.
You'll use the numpy.mean and variance function you wrote in Part A to compute the statistics on the sample data itself and observe how these statistics change.
In the space provided, compute and print the mean and variance of each of the three samples:
- sample1
- sample2
- sample3
You can just print() them out in the space provided. Don't modify anything above where it says "DON'T MODIFY".
End of explanation |
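A possible way to fill in the solution block above, assuming a variance function like the Part A sketch is available:
for name, sample in [('sample1', sample1), ('sample2', sample2), ('sample3', sample3)]:
    print(name, 'mean:', np.mean(sample), 'variance:', variance(sample))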
2,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learn Posture
use machine learning to recognize robot's posture (following the example in scikit-learn-intro.ipynb )
1. Data collection
We have colleceted data before, you need to add new data if you want to add new posture.
the dateset are in robot_pose_data folder
each file contains the data belongs to this posture, e.g. the data in Back file are collected when robot was in "Back" posture
the data file can be load by pickle, e.g. pickle.load(open('Back')), the data is a list of feature data
the features (e.g. each row of the data) are ['LHipYawPitch', 'LHipRoll', 'LHipPitch', 'LKneePitch', 'RHipYawPitch', 'RHipRoll', 'RHipPitch', 'RKneePitch', 'AngleX', 'AngleY'], where 'AngleX' and 'AngleY' are body angle (e.g. Perception.imu) and others are joint angles.
2. Data preprocessing
Step1: 3. Learn on training data
In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T). An example of an estimator is the class sklearn.svm.SVC that implements support vector classification.
Step2: learning
Step3: predicting
Step4: 4. Evaluate on the test data
Step5: 5. Deploy to the real system
We can simple use pickle module to serialize the trained classifier.
Step6: Then, in the application we can load the trained classifier again. | Python Code:
%pylab inline
import pickle
from os import listdir, path
import numpy as np
from sklearn import svm, metrics
ROBOT_POSE_DATA_DIR = 'robot_pose_data'
classes = listdir(ROBOT_POSE_DATA_DIR)
print classes
def load_pose_data(i):
'''load pose data from file'''
data = []
target = []
# YOUR CODE HERE
filename = path.join(ROBOT_POSE_DATA_DIR, classes[i])
data = pickle.load(open(filename))
target = [i] * len(data)
return data, target
# load all the data
all_data = []
all_target = []
# YOUR CODE HERE
print 'total number of data', len(all_data)
# shuffle data
permutation = np.random.permutation(len(all_data))
n_training_data = int(len(all_data) * 0.7)
training_data = permutation[:n_training_data]
Explanation: Learn Posture
use machine learning to recognize robot's posture (following the example in scikit-learn-intro.ipynb )
1. Data collection
We have collected data before; you need to add new data if you want to add a new posture.
the dataset is in the robot_pose_data folder
each file contains the data belonging to that posture, e.g. the data in the Back file were collected when the robot was in the "Back" posture
the data file can be loaded with pickle, e.g. pickle.load(open('Back')); the data is a list of feature vectors
the features (e.g. each row of the data) are ['LHipYawPitch', 'LHipRoll', 'LHipPitch', 'LKneePitch', 'RHipYawPitch', 'RHipRoll', 'RHipPitch', 'RKneePitch', 'AngleX', 'AngleY'], where 'AngleX' and 'AngleY' are body angle (e.g. Perception.imu) and others are joint angles.
2. Data preprocessing
End of explanation
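One straightforward way to fill in the data-loading step above (a sketch, not the only valid answer):
for i in range(len(classes)):
    data, target = load_pose_data(i)
    all_data.extend(data)
    all_target.extend(target)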
clf = svm.SVC(gamma=0.001, C=100.)
Explanation: 3. Learn on training data
In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T). An example of an estimator is the class sklearn.svm.SVC that implements support vector classification.
End of explanation
# YOUR CODE HERE
Explanation: learning
End of explanation
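A minimal way to complete the learning step, reusing the shuffled indices prepared earlier (a sketch; the lists are converted to arrays so that fancy indexing works):
X = np.asarray(all_data)
y = np.asarray(all_target)
clf.fit(X[training_data], y[training_data])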
clf.predict(all_data[-1]), all_target[-1]
def evaluate(expected, predicted):
print("Classification report:\n%s\n" % metrics.classification_report(expected, predicted))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
expected = []
predicted = []
# YOUR CODE HERE
evaluate(expected, predicted)
Explanation: predicting
End of explanation
expected = []
predicted = []
# YOUR CODE HERE
evaluate(expected, predicted)
Explanation: 4. Evaluate on the test data
End of explanation
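For the held-out portion, the expected and predicted labels can be built from the remaining indices (again only a sketch):
test_indices = permutation[n_training_data:]
expected = np.asarray(all_target)[test_indices]
predicted = clf.predict(np.asarray(all_data)[test_indices])
evaluate(expected, predicted)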
import pickle
ROBOT_POSE_CLF = 'robot_pose.pkl'
pickle.dump(clf, open(ROBOT_POSE_CLF, 'w'))
Explanation: 5. Deploy to the real system
We can simply use the pickle module to serialize the trained classifier.
End of explanation
clf2 = pickle.load(open(ROBOT_POSE_CLF))
clf2.predict(all_data[-1]), all_target[-1]
Explanation: Then, in the application we can load the trained classifier again.
End of explanation |
2,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Who Will Leave and Why - Python Machine Learning
In this notebook, we do a brief exploration of our HR analytics data (found on Kaggle, which you can check for more info on the dataset) and try to discern which factors matter the most in determining why our personnel leave. The notebook will primarily be divided into two sections -- data analysis and machine learning.
Data Analysis
Step1: Reading in the Data
First, let's read in and get an overview of the data we'll be working with.
Step2: Conveniently, there is no missing data. Given that the "sales' and "salary" columns are non-numeric, we can check the number of unique levels and dummy code the variables.
Step3: Observe that "IT" and "high" are the baseline levels for the assigned department and salary level, respectively. Also note that we saved the data with dummified variables as another dataframe in case we need to access the string values, such as for a cross-tabulation table.
Exploring the Data
Now that we have data in an analysis-friendly form, we can do some basic visualizations to spot any relationships in the data.
Step4: The matrix above shows that, generally speaking, the data is not correlated. This is good because it means we likely won't have issues with multicollinearity later.
It is notable, though perhaps unsurprising, that our employees' satisfaction level is the variable that is most highly correlated with them leaving.
Step5: Let's first check if there are any particular departments that our people tend to be leaving from.
Step6: We can check the above in terms of percentages to more easily see if there are particular departments that tend to have a higher proportion of people leaving.
Step7: R&D and management tend to have lower rates of leaving, and HR and accounting tend to have higher rates of leaving. The other departments are fairly similar, all between around 22 to 25 percent. We can also visualize the above data with a countplot.
Step8: While there doesn't appear to be too much of a difference in the satisfaction, we notice that both HR and accounting, the departments that have the highest rates of leaving, have slightly lower median satisfaction levels than the rest of the departments.
Salary is likely to have a high impact on leaving. In fact, it is highly likely that both R&D and management, the two departments with the lowers leaving rates, have high salaries. Let's first check the relationship between leaving and salary.
Step9: Confirming our hypothesis, those with low salaries tend to have the highest number of people that leave. Eyeballing the plot shows us that around 40% of those with low salaries leave and 25% of those with median salaries leave. It looks like only 10% of those with high salaries leave.
Let's also check the spread of satisfaction level between the different salary ranges.
Step10: Again, in line with some of our prior observations, low salary has the lowest median satisfaction and the highest spread.
Something that may impact employee perception in the company is the number of projects they are assigned.
Step11: It is very clear that evaluation scores are affected by the number of projects assigned to the employee. What's more, we again notice a peculiar trend in accounting -- they have a lower last_evaluation score than the other departments at 7 projects.
Step12: It looks like we've found a very important relationship -- those with high numbers of projects (6 or 7) tend to have extremely low satisfaction levels. This will likely play a role when we do our modeling. Also worth noting is that those with only 2 projects tend to also have lower satisfaction levels.
Let's take a look at time spent at the company and the effect of that on leaving. It was the third most correlated factor with leaving, so this should give us some usable information. We also check this in the context of two of the variables we previously studied, salary and department, to see if there are additional insights we can extract.
Step13: There is a clear trend for those with low and medium salaries -- those that leave tend to have spent more time at the company. For those with high salaries, leaving depends on the department. At the high salary level, time spent doesn't vary in accounting for those that left versus those that haven't but it varies pretty wildly for the support and IT departments.
Before we move on to the modeling section, let's take a look at accidents. This was the second most correlated factor with leaving, interestingly enough.
Step14: The difference is quite subtle, but the monthly hours (just noticed when I made this plot that the variable was spelled wrong in the dataset) seems to be bimodally distributed more often for those without work accidents versus those with.
Let's check a similar plot to see the relationship between leaving, work accidents, and satisfaction level.Let's check a similar plot to see the relationship between leaving, work accidents, and satisfaction level.
Step15: What we see here is that there is a marked difference in the satisfaction level spreads of those that leave versus those that don't, with the peaks for those that left being slightly more pronounced for those that have not had workplace accidents, interestingly enough.
Machine Learning
Modeling
Let's model the data with a decision tree.
Step16: While we will, of course, make predictions on our test set, we treat that as a holdout set and first do some cross-validation on our training set.
Step17: Our results are very good; showing a consistently high score for all folds of our 10-fold cross-validation using the training data. Let's make predictions and check the performance of the model on the holdout set in the same manner.
Step18: Once again, the model performs very well. Let's check on some additional classification metrics to see, in more detail, how our model does.
Evaluation
Step19: On the basis of our 4500 test samples, our model is very accurate, with only 98 test cases wrong (or only around 2.2% wrong). All of the other metrics -- precision, recall, f1-score -- are also very good. Our model also doesn't appear to display any inherent bias in predicting one class.
We can also take a look at the ROC curve to determine the effectiveness of the test at correctly classifying those who stay and those who leave.
Step20: With a very high area under the curve of 0.977, our model is excellent at discriminating between those who stay and those who leave.
Let's check out the most important features, or those that are most influential in determining whether an employee leaves (or stays) in our company.
Step21: To make it easier to interpret, we can order these from most important to least. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Predicting Who Will Leave and Why - Python Machine Learning
In this notebook, we do a brief exploration of our HR analytics data (found on Kaggle, which you can check for more info on the dataset) and try to discern which factors matter the most in determining why our personnel leave. The notebook will primarily be divided into two sections -- data analysis and machine learning.
Data Analysis
End of explanation
hr_data = pd.read_csv('../input/HR_comma_sep.csv')
hr_data.head()
hr_data.describe()
hr_data.info()
Explanation: Reading in the Data
First, let's read in and get an overview of the data we'll be working with.
End of explanation
print('Departments: ', ', '.join(hr_data['sales'].unique()))
print('Salary levels: ', ', '.join(hr_data['salary'].unique()))
hr_data.rename(columns={'sales':'department'}, inplace=True)
hr_data_new = pd.get_dummies(hr_data, ['department', 'salary'] ,drop_first = True)
hr_data_new.head()
Explanation: Conveniently, there is no missing data. Given that the "sales' and "salary" columns are non-numeric, we can check the number of unique levels and dummy code the variables.
End of explanation
# Correlation matrix
sns.heatmap(hr_data.corr(), annot=True)
Explanation: Observe that "IT" and "high" are the baseline levels for the assigned department and salary level, respectively. Also note that we saved the data with dummified variables as another dataframe in case we need to access the string values, such as for a cross-tabulation table.
Exploring the Data
Now that we have data in an analysis-friendly form, we can do some basic visualizations to spot any relationships in the data.
End of explanation
hr_data_new.columns
Explanation: The matrix above shows that, generally speaking, the data is not correlated. This is good because it means we likely won't have issues with multicollinearity later.
It is notable, though perhaps unsurprising, that our employees' satisfaction level is the variable that is most highly correlated with them leaving.
End of explanation
dept_table = pd.crosstab(hr_data['department'], hr_data['left'])
dept_table.index.names = ['Department']
dept_table
Explanation: Let's first check if there are any particular departments that our people tend to be leaving from.
End of explanation
dept_table_percentages = dept_table.apply(lambda row: (row/row.sum())*100, axis = 1)
dept_table_percentages
Explanation: We can check the above in terms of percentages to more easily see if there are particular departments that tend to have a higher proportion of people leaving.
End of explanation
sns.countplot(x='department', hue='left', data=hr_data)
sns.boxplot(x='department', y='satisfaction_level', data=hr_data)
Explanation: R&D and management tend to have lower rates of leaving, and HR and accounting tend to have higher rates of leaving. The other departments are fairly similar, all between around 22 to 25 percent. We can also visualize the above data with a countplot.
End of explanation
sns.countplot(x='salary', hue='left', data=hr_data)
Explanation: While there doesn't appear to be too much of a difference in satisfaction, we notice that both HR and accounting, the departments that have the highest rates of leaving, have slightly lower median satisfaction levels than the rest of the departments.
Salary is likely to have a high impact on leaving. In fact, it is highly likely that both R&D and management, the two departments with the lowest leaving rates, have high salaries. Let's first check the relationship between leaving and salary.
End of explanation
sns.boxplot(x='salary', y='satisfaction_level', data=hr_data)
Explanation: Confirming our hypothesis, those with low salaries tend to have the highest number of people that leave. Eyeballing the plot shows us that around 40% of those with low salaries leave and 25% of those with median salaries leave. It looks like only 10% of those with high salaries leave.
Let's also check the spread of satisfaction level between the different salary ranges.
End of explanation
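The eyeballed percentages above can be checked exactly with a normalized cross-tabulation (a quick sketch):
pd.crosstab(hr_data['salary'], hr_data['left'], normalize='index') * 100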
sns.factorplot(x='number_project', y='last_evaluation', hue='department', data=hr_data)
Explanation: Again, in line with some of our prior observations, low salary has the lowest median satisfaction and the highest spread.
Something that may impact employee perception in the company is the number of projects they are assigned.
End of explanation
sns.boxplot(x='number_project', y='satisfaction_level', data=hr_data_new)
Explanation: It is very clear that evaluation scores are affected by the number of projects assigned to the employee. What's more, we again notice a peculiar trend in accounting -- they have a lower last_evaluation score than the other departments at 7 projects.
End of explanation
timeplot = sns.factorplot(x='time_spend_company', hue='left', y='department', row='salary', data=hr_data, aspect=2)
Explanation: It looks like we've found a very important relationship -- those with high numbers of projects (6 or 7) tend to have extremely low satisfaction levels. This will likely play a role when we do our modeling. Also worth noting is that those with only 2 projects tend to also have lower satisfaction levels.
Let's take a look at time spent at the company and the effect of that on leaving. It was the third most correlated factor with leaving, so this should give us some usable information. We also check this in the context of two of the variables we previously studied, salary and department, to see if there are additional insights we can extract.
End of explanation
accidentplot = plt.figure(figsize=(10,6))
accidentplotax = accidentplot.add_axes([0,0,1,1])
accidentplotax = sns.violinplot(x='department', y='average_montly_hours', hue='Work_accident', split=True, data = hr_data, jitter = 0.47)
Explanation: There is a clear trend for those with low and medium salaries -- those that leave tend to have spent more time at the company. For those with high salaries, leaving depends on the department. At the high salary level, time spent doesn't vary in accounting for those that left versus those that haven't but it varies pretty wildly for the support and IT departments.
Before we move on to the modeling section, let's take a look at accidents. This was the second most correlated factor with leaving, interestingly enough.
End of explanation
satisaccident = plt.figure(figsize=(10,6))
satisaccidentax = satisaccident.add_axes([0,0,1,1])
satisaccidentax = sns.violinplot(x='left', hue='Work_accident', y='satisfaction_level', split=True, data=hr_data)
Explanation: The difference is quite subtle, but the monthly hours (just noticed when I made this plot that the variable was spelled wrong in the dataset) seem to be bimodally distributed more often for those without work accidents versus those with.
Let's check a similar plot to see the relationship between leaving, work accidents, and satisfaction level.
End of explanation
# We now use model_selection instead of cross_validation
from sklearn.model_selection import train_test_split
X = hr_data_new.drop('left', axis=1)
y = hr_data_new['left']
X_train, X_test, y_train, y_test, = train_test_split(X, y, test_size = 0.3, random_state = 47)
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
Explanation: What we see here is that there is a marked difference in the satisfaction level spreads of those that leave versus those that don't, with the peaks for those that left being slightly more pronounced for those that have not had workplace accidents, interestingly enough.
Machine Learning
Modeling
Let's model the data with a decision tree.
End of explanation
from sklearn.model_selection import cross_val_score
# Score first on our training data
print('Score: ', dt.score(X_train, y_train))
print('Cross validation score, 10-fold cv: \n', cross_val_score(dt, X_train, y_train, cv=10))
print('Mean cross validation score: ', cross_val_score(dt,X_train,y_train,cv=10).mean())
Explanation: While we will, of course, make predictions on our test set, we treat that as a holdout set and first do some cross-validation on our training set.
End of explanation
predictions = dt.predict(X_test)
print('Score: ', dt.score(X_test, y_test))
print('Cross validation score, 10-fold cv: \n', cross_val_score(dt, X, y, cv=10))
print('Mean cross validation score: ', cross_val_score(dt,X,y,cv=10).mean())
Explanation: Our results are very good, showing a consistently high score for all folds of our 10-fold cross-validation using the training data. Let's make predictions and check the performance of the model on the holdout set in the same manner.
End of explanation
from sklearn.metrics import confusion_matrix, classification_report
print('Confusion matrix: \n', confusion_matrix(y_test, predictions), '\n')
print('Classification report: \n', classification_report(y_test, predictions))
Explanation: Once again, the model performs very well. Let's check on some additional classification metrics to see, in more detail, how our model does.
Evaluation
End of explanation
from sklearn.metrics import roc_curve, roc_auc_score
probabilities = dt.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y_test, probabilities[:,1])
rates = pd.DataFrame({'False Positive Rate': fpr, 'True Positive Rate': tpr})
roc = plt.figure(figsize = (10,6))
rocax = roc.add_axes([0,0,1,1])
rocax.plot(fpr, tpr, color='g', label='Decision Tree')
rocax.plot([0,1],[0,1], color='gray', ls='--', label='Baseline (Random Guessing)')
rocax.set_xlabel('False Positive Rate')
rocax.set_ylabel('True Positive Rate')
rocax.set_title('ROC Curve')
rocax.legend()
print('Area Under the Curve:', roc_auc_score(y_test, probabilities[:,1]))
Explanation: On the basis of our 4500 test samples, our model is very accurate, with only 98 test cases wrong (or only around 2.2% wrong). All of the other metrics -- precision, recall, f1-score -- are also very good. Our model also doesn't appear to display any inherent bias in predicting one class.
We can also take a look at the ROC curve to determine the effectiveness of the test at correctly classifying those who stay and those who leave.
End of explanation
importances = dt.feature_importances_
print("Feature importances: \n")
for f in range(len(X.columns)):
print('โข', X.columns[f], ":", importances[f])
Explanation: With a very high area under the curve of 0.977, our model is excellent at discriminating between those who stay and those who leave.
Let's check out the most important features, or those that are most influential in determining whether an employee leaves (or stays) in our company.
End of explanation
featureswithimportances = list(zip(X.columns, importances))
featureswithimportances.sort(key = lambda f: f[1], reverse=True)
print('Ordered feature importances: \n', '(From most important to least important)\n')
for f in range(len(featureswithimportances)):
print(f+1,". ", featureswithimportances[f][0], ": ", featureswithimportances[f][1])
sorted_features, sorted_importances = zip(*featureswithimportances)
plt.figure(figsize=(12,6))
sns.barplot(sorted_features, sorted_importances)
plt.title('Feature Importances (Gini Importance)')
plt.ylabel('Decrease in Node Impurity')
plt.xlabel('Feature')
plt.xticks(rotation=90);
Explanation: To make it easier to interpret, we can order these from most important to least.
End of explanation |
2,913 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I want to process a gray image in the form of np.array. | Problem:
import numpy as np
im = np.array([[1,1,1,1,1,5],
[1,0,0,1,2,0],
[2,1,0,0,1,0],
[1,0,0,7,1,0],
[1,0,0,0,0,0]])
mask = im == 0
rows = np.flatnonzero((mask).sum(axis=1))
cols = np.flatnonzero((mask).sum(axis=0))
if rows.shape[0] == 0:
result = np.array([])
else:
result = im[rows.min():rows.max()+1, cols.min():cols.max()+1] |
2,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sensitivity analysis for L-Serine
In this example, the amount of produced serine is increased in steps. The biomass production will decrease with increased accumulation of Serine. This is a scenario where a metabolite analog would compete with Serine and the cell needs to increase the production of Serine to compete for biomass production and enzyme activity.
Step1: We can also see how this affects other variables.
Step2: The same analysis can be done with different simulation methods (e.g. lMOMA).
Step3: Sensitivity analysis for Pyruvate
In this example, the pyruvate of succinate is decreased. This is a scenario where the cells are evolved with a toxic compound and the consumption turnover of that compound decreases. | Python Code:
ser__L = model.metabolites.ser__L_c
result = sensitivity_analysis(model, ser__L, is_essential=True, steps=10,
biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)
result.data_frame
result.plot(width=700, height=500)
Explanation: Sensitivity analysis for L-Serine
In this example, the amount of produced serine is increased in steps. The biomass production will decrease with increased accumulation of Serine. This is a scenario where a metabolite analog would compete with Serine and the cell needs to increase the production of Serine to compete for biomass production and enzyme activity.
End of explanation
result = sensitivity_analysis(model, ser__L, is_essential=True, steps=10,
variables=[model.reactions.SERAT, model.reactions.SUCOAS],
biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)
result.data_frame
result.plot(width=700, height=500)
Explanation: We can also see how this affects other variables.
End of explanation
from cameo.flux_analysis.simulation import lmoma
result = sensitivity_analysis(model, ser__L, is_essential=True, steps=10, simulation_method=lmoma,
biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)
result.plot(width=700, height=500)
Explanation: The same analysis can be done with different simulation methods (e.g. lMOMA).
End of explanation
pyr = model.metabolites.pyr_c
result = sensitivity_analysis(model, pyr, is_essential=False, steps=10,
biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)
result.plot(width=700, height=500)
result = sensitivity_analysis(model, pyr, is_essential=False, steps=10, simulation_method=lmoma,
biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)
result.data_frame
Explanation: Sensitivity analysis for Pyruvate
In this example, the turnover of pyruvate is decreased. This is a scenario where the cells are evolved with a toxic compound and the consumption turnover of that compound decreases.
End of explanation |
2,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
On Kepler 452
Kepler 452 is a solar-like star in the Kepler field that was recently announced to possess a planet with an orbit of 385 Earth days. Based on a stellar evolution model analysis of the host star, the planet is found to have a radius of approximately $1.63 \pm 0.23 R_{\oplus}$. Standard Dartmouth stellar models were used to draw this conclusion, with added support from a similar analysis performed with YREC models. While I do not doubt the overall validity of the stellar models, it is still a worthwhile excerise to explore how various modeling assumptions may affect the results given that only three observable properties were used to constrain the model parameters
Step1: Solar Abundance Distribution
Step2: Note that there is not an $0.95 M_{\odot}$ mass track plotted. The right-most track for both model sets is a $0.90 M_{\odot}$ track. Assuming the star is burning hydrogen in the core and is not on the pre-main-sequence, then we can estimate a mass of approximately $1.03\pm0.03 M_{\odot}$ from the Dartmouth 2008 models (dashed lines). If we instead look at the Dartmouth 2015 models, we find the mass is approximately $1.08\pm0.03 M_{\odot}$, consistent with the first esimate, within $2\sigma$. Once one builds in the metallicity uncertainty, the errors increase further, providing a greater consistency between the two measurements. Jenkins et al. quote a mass of $1.04\pm0.05 M_{\odot}$, in agreement with the aforementioned values.
One small factor that was not accounted for is that the observed metallicity provides the present day metallicity, which is not necessarily equivalent to the quoted metallicity for model mass tracks. Due to gravitational settling and multiple diffusive processes, one may need to use models with a higher proto-stellar (re: initial) surface metal abundance to achieve a present day value of [Fe/H] $= +0.2$ dex.
Step3: Dartmouth 2008 tracks with [Fe/H] $= +0.30$ dex are shown as solid lines, with dotted lines showing models computed with the present day metallicity of [Fe/H] = $+0.20$ dex. As was the case with the Dartmouth 2015 models, adopting a higher metallicity pushes the inferred stellar mass up to approximately $1.10 \pm 0.04 M_{\odot}$. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: On Kepler 452
Kepler 452 is a solar-like star in the Kepler field that was recently announced to possess a planet with an orbit of 385 Earth days. Based on a stellar evolution model analysis of the host star, the planet is found to have a radius of approximately $1.63 \pm 0.23 R_{\oplus}$. Standard Dartmouth stellar models were used to draw this conclusion, with added support from a similar analysis performed with YREC models. While I do not doubt the overall validity of the stellar models, it is still a worthwhile exercise to explore how various modeling assumptions may affect the results given that only three observable properties were used to constrain the model parameters: $\log(g)$, $T_{\rm eff}$, and [Fe/H].
Begin by initializing matplotlib and numpy (eventually I'll add this to the default config)
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
# configure axes
ax.set_xlabel('Effective Temperature (K)', fontsize=22.)
ax.set_xlim(6000., 5000.)
ax.set_ylabel('$\log_{10}(g)$', fontsize=22.)
ax.set_ylim(4.6, 4.1)
ax.tick_params(axis='both', which='major', length=20., labelsize=20.)
# approximate mass range from measured temperature
masses = np.arange(0.90, 1.3, 0.05)
# directories for metallicity of +0.2 dex
f15_directory = '../../evolve/dmestar/trk/gas07/p020/a0/amlt2202'
d08_directory = '../../evolve/dsep08/trk/fehp02afep0'
# plot mass tracks for GS98 and GAS07 composition
for mass in masses:
f15_file = '{:s}/m{:04.0f}_GAS07_p020_p0_y27_mlt2.202.trk'.format(f15_directory, mass*1000.)
d08_file = '{:s}/m{:03.0f}fehp02afep0.jc2mass'.format(d08_directory, mass*100.)
try:
f15_trk = np.genfromtxt(f15_file)
d08_trk = np.genfromtxt(d08_file)
except IOError:
continue
ax.plot(10**d08_trk[:, 1], d08_trk[:, 2], '--', lw=2, color='#444444')
ax.plot(10**f15_trk[:, 1], f15_trk[:, 2], '-', lw=2, color='#333333')
# add Kepler 452 point
ax.errorbar([5757.], [4.32], xerr=85., yerr=0.09, fmt='-o', lw=3, markersize=14., color='#4682B4')
Explanation: Solar Abundance Distribution
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
# configure axes
ax.set_xlabel('Effective Temperature (K)', fontsize=22.)
ax.set_xlim(6000., 5000.)
ax.set_ylabel('$\log_{10}(g)$', fontsize=22.)
ax.set_ylim(4.6, 4.1)
ax.tick_params(axis='both', which='major', length=20., labelsize=20.)
# directories for metallicity of +0.2, +0.3 dex
d08_directory_03 = '../../evolve/dsep08/trk/fehp03afep0'
# plot mass tracks for GS98 and GAS07 composition
for mass in masses:
d08_file = '{:s}/m{:03.0f}fehp02afep0.jc2mass'.format(d08_directory, mass*100.)
d08_03_file = '{:s}/m{:03.0f}fehp03afep0.jc2mass'.format(d08_directory_03, mass*100.)
try:
d08_trk = np.genfromtxt(d08_file)
d08_03_trk = np.genfromtxt(d08_03_file)
except IOError:
continue
ax.plot(10**d08_trk[:, 1], d08_trk[:, 2], '--', lw=2, color='#444444')
ax.plot(10**d08_03_trk[:, 1], d08_03_trk[:, 2], '-', lw=2, color='#333333')
# add Kepler 452 point
ax.errorbar([5757.], [4.32], xerr=85., yerr=0.09, fmt='-o', lw=3, markersize=14., color='#4682B4')
Explanation: Note that there is not an $0.95 M_{\odot}$ mass track plotted. The right-most track for both model sets is a $0.90 M_{\odot}$ track. Assuming the star is burning hydrogen in the core and is not on the pre-main-sequence, then we can estimate a mass of approximately $1.03\pm0.03 M_{\odot}$ from the Dartmouth 2008 models (dashed lines). If we instead look at the Dartmouth 2015 models, we find the mass is approximately $1.08\pm0.03 M_{\odot}$, consistent with the first esimate, within $2\sigma$. Once one builds in the metallicity uncertainty, the errors increase further, providing a greater consistency between the two measurements. Jenkins et al. quote a mass of $1.04\pm0.05 M_{\odot}$, in agreement with the aforementioned values.
One small factor that was not accounted for is that the observed metallicity provides the present day metallicity, which is not necessarily equivalent to the quoted metallicity for model mass tracks. Due to gravitational settling and multiple diffusive processes, one may need to use models with a higher proto-stellar (re: initial) surface metal abundance to achieve a present day value of [Fe/H] $= +0.2$ dex. At most, we might expect a 0.1 dex reduction in the surface abundance of heavy elements over time. Although this is very rough, it should provide an upper limit to the uncertainty one expects heavy element diffusion to inflict on model properties.
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.set_title('Evolution of Surface Metallicity', fontsize=26., family='serif')
ax.set_xlabel('Age (Gyr)', fontsize=22., family='serif')
ax.set_xlim(1.0, 10.0)
ax.set_ylabel('[M/H] (dex)', fontsize=22., family='serif')
ax.tick_params(axis='both', which='major', length=16., labelsize=20.)
f15_trk = np.genfromtxt('{:s}/m1100_GAS07_p020_p0_y27_mlt2.202.trk'.format(f15_directory))
d08_trk = np.genfromtxt('../../evolve/models/tmp/m1100_GS98_p020_p0_y29_mlt1.884.trk')
# solar Z/X = 0.0165 for GAS07 solar abundance distribution, 0.0231 for GS98
ax.plot(f15_trk[:,0]/1.0e9, np.log10(f15_trk[:,7]/0.0165), '-', lw=3, color="#333333")
ax.plot(d08_trk[:,0]/1.0e9, np.log10(d08_trk[:,7]/0.0231), '--', lw=3, color="#333333")
Explanation: Dartmouth 2008 tracks with [Fe/H] $= +0.30$ dex are shown as solid lines, with dotted lines showing models computed with the present day metallicity of [Fe/H] = $+0.20$ dex. As was the case with the Dartmouth 2015 models, adopting a higher metallicity pushes the inferred stellar mass up to approximately $1.10 \pm 0.04 M_{\odot}$.
End of explanation |
2,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Point sources
In astromodels a point source is described by its position in the sky and its spectral features.
Creating a point source
A simple source with a power law spectrum can be created like this
Step1: We can also use Galactic coordinates
Step2: As spectral shape we can use any function or any composite function (see "Creating and modifying functions")
Getting info about a point source
Info about a point source can easily be obtained with the usual .display() method (which will use the richest representation available), or by printing it which will display a text-only representation
Step3: As you can see we have created a point source with one component (see below) automatically named "main", with a power law spectrum, at the specified position.
Converting between coordinates systems
By default the coordinates of the point source are displayed in the same system used during creation. However, you can always obtain R.A, Dec or L,B like this
Step4: The get_ra, get_dec, get_l and get_b return either a Latitude or Longitude object of astropy.coordinates, from which you can obtain all formats for the coordinates, like
Step5: For more control on the output and many more options, such as transform to local frames or other equinoxes, compute distances between points in the sky, and so on, you can obtain an instance of astropy.coordinates.SkyCoord by using the sky_coord property of the position object
Step6: Gotcha while accessing coordinates
Please note that using get_ra() and .ra (or the equivalent methods for the other coordinates) is not the same. While get_ra() will always return a single float value corresponding to the R.A. of the source, the .ra property will exist only if the source has been created using R.A, Dec as input coordinates and will return a Parameter instance
Step7: Multi-component sources
A multi-component source is a point source which has different spectral components. For example, in a Gamma-Ray Burst you can have a Synchrotron component and a Inverse Compton component, which come from different zones and are described by different spectra. Depending on the needs of your analysis, you might model this situation using a single component constituted by the sum of the two spectra, or you might want to model them independently. The latter choice allows you to measure for instance the fluxes from the two components independently. Also, each components has its own polarization, which can be useful when studying polarized sources (to be implemented). Representing a source with more than one component is easy in astromodels
Step8: Modifying features of the source and the parameters of its spectrum
Starting from the source instance you can modify any of its components, or its position, in a straightforward way | Python Code:
from astromodels import *
# Using J2000 R.A. and Dec (ICRS), which is the default coordinate system:
simple_source_icrs = PointSource('simple_source', ra=123.2, dec=-13.2, spectral_shape=powerlaw())
Explanation: Point sources
In astromodels a point source is described by its position in the sky and its spectral features.
Creating a point source
A simple source with a power law spectrum can be created like this:
End of explanation
simple_source_gal = PointSource('simple_source', l=234.320573, b=11.365142, spectral_shape=powerlaw())
Explanation: We can also use Galactic coordinates:
End of explanation
simple_source_icrs.display()
# or print(simple_source_icrs) for a text-only representation
Explanation: As spectral shape we can use any function or any composite function (see "Creating and modifying functions")
Getting info about a point source
Info about a point source can easily be obtained with the usual .display() method (which will use the richest representation available), or by printing it which will display a text-only representation:
End of explanation
l = simple_source_icrs.position.get_l()
b = simple_source_icrs.position.get_b()
ra = simple_source_gal.position.get_ra()
dec = simple_source_gal.position.get_dec()
type(ra)
Explanation: As you can see we have created a point source with one component (see below) automatically named "main", with a power law spectrum, at the specified position.
Converting between coordinates systems
By default the coordinates of the point source are displayed in the same system used during creation. However, you can always obtain R.A, Dec or L,B like this:
End of explanation
# Decimal R.A.
print("Decimal R.A. is %s" % ra.deg)
print("Sexadecimal R.A. is %.0f:%.0f:%s" % (ra.dms.d, ra.dms.m, ra.dms.s))
Explanation: The get_ra, get_dec, get_l and get_b return either a Latitude or Longitude object of astropy.coordinates, from which you can obtain all formats for the coordinates, like:
End of explanation
# Refer to the documentation of the astropy.coordinates.SkyCoord class:
# http://docs.astropy.org/en/stable/coordinates/
# for all available options.
sky_coord_instance = simple_source_icrs.position.sky_coord
# Now you can transform to another reference use transform_to.
# Here for example we compute the altitude of our source for HAWC at 2 am on 2013-07-01
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy.time import Time
hawc_site = EarthLocation(lat=19*u.deg, lon=-97.3*u.deg, height=4100*u.m)  # HAWC sits at ~97.3 deg West, hence the negative (east-positive) longitude
utcoffset = -5*u.hour # Hour at HAWC is CDT, which is UTC - 5 hours
time = Time('2013-7-01 02:00:00') - utcoffset
src_altaz = sky_coord_instance.transform_to(AltAz(obstime=time,location=hawc_site))
print("Source Altitude at HAWC : {0.alt:.5}".format(src_altaz))
Explanation: For more control on the output and many more options, such as transform to local frames or other equinoxes, compute distances between points in the sky, and so on, you can obtain an instance of astropy.coordinates.SkyCoord by using the sky_coord property of the position object:
End of explanation
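# As a further illustration (a small sketch using the standard SkyCoord frame attributes),
# the same position can be read off directly in Galactic coordinates:
galactic_coord = sky_coord_instance.galactic
print("Galactic l, b : %.5f deg, %.5f deg" % (galactic_coord.l.deg, galactic_coord.b.deg))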
# These will return two Parameter instances corresponding to the parameters ra and dec
# NOT the corresponding floating point numbers:
parameter_ra = simple_source_icrs.position.ra
parameter_dec = simple_source_icrs.position.dec
# This would instead throw AttributeError, since simple_source_icrs was instanced using
# R.A. and Dec. and hence does not have the l,b parameters:
# error = simple_source_icrs.position.l
# error = simple_source_icrs.position.b
# Similarly this will throw AttributeError, because simple_source_gal was instanced using
# Galactic coordinates:
# error = simple_source_gal.position.ra
# error = simple_source_gal.position.dec
# In all cases, independently on how the source was instanced, you can obtain the coordinates
# as normal floating point numbers using:
ra1 = simple_source_icrs.position.get_ra().value
dec1 = simple_source_icrs.position.get_dec().value
l1 = simple_source_icrs.position.get_l().value
b1 = simple_source_icrs.position.get_b().value
ra2 = simple_source_gal.position.get_ra().value
dec2 = simple_source_gal.position.get_dec().value
l2 = simple_source_gal.position.get_l().value
b2 = simple_source_gal.position.get_b().value
Explanation: Gotcha while accessing coordinates
Please note that using get_ra() and .ra (or the equivalent methods for the other coordinates) is not the same. While get_ra() will always return a single float value corresponding to the R.A. of the source, the .ra property will exist only if the source has been created using R.A, Dec as input coordinates and will return a Parameter instance:
End of explanation
# Create the two different components
#(of course the shape can be any function, or any composite function)
component1 = SpectralComponent('synchrotron',shape=powerlaw())
component2 = SpectralComponent('IC',shape=powerlaw())
# Create a multi-component source
multicomp_source = PointSource('multicomp_source', ra=123.2, dec=-13.2, components=[component1,component2])
multicomp_source.display()
Explanation: Multi-component sources
A multi-component source is a point source which has different spectral components. For example, in a Gamma-Ray Burst you can have a Synchrotron component and a Inverse Compton component, which come from different zones and are described by different spectra. Depending on the needs of your analysis, you might model this situation using a single component constituted by the sum of the two spectra, or you might want to model them independently. The latter choice allows you to measure for instance the fluxes from the two components independently. Also, each components has its own polarization, which can be useful when studying polarized sources (to be implemented). Representing a source with more than one component is easy in astromodels:
End of explanation
# Change position
multicomp_source.position.ra = 124.5
multicomp_source.position.dec = -11.5
# Change values for the parameters
multicomp_source.spectrum.synchrotron.powerlaw.logK = -1.2
multicomp_source.spectrum.IC.powerlaw.index = -1.0
# To avoid having to write that much, you can create a "shortcut" for a function
po = multicomp_source.spectrum.synchrotron.powerlaw
# Now you can modify its parameters more easily
# (see "Creating and modifying functions" for more info on what you can to with a parameter)
po.K = 1e-5
# Change the minimum using explicit units
po.K.min_value = 1e-6 * 1 / (u.MeV * u.cm**2 * u.s)
# GOTCHA
# Creating a shortcut directly to the parameter will not work:
# p1 = multicomp_source.spectrum.synchrotron.powerlaw.logK
# p1 = -1.3 # this does NOT change the value of logK, but instead assign -1.3 to p1 (i.e., destroy the shortcut)
# However you can change the value of p1 like this:
# p1.value = -1.3 # This will work
multicomp_source.display()
Explanation: Modifying features of the source and the parameters of its spectrum
Starting from the source instance you can modify any of its components, or its position, in a straightforward way:
End of explanation |
2,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modifying an image
Basic example, rotates the image (rot_const) by 0.2 radians.
Step1: Full list of modifications and defaults
center
Step2: Perlin noise
Step3: A bunch of different configs
Like masks, can save to disk and then display them, to save on notebook size. Run the next cell to generate the movies first.
Step4: After generating them, you can display them. | Python Code:
img1 = image.load_image('https://upload.wikimedia.org/wikipedia/commons/6/6a/Mona_Lisa.jpg', (220, 350))
img2 = canvas.modify_canvas(img1, {'rot_const': 0.2})
img3 = image.concatenate_images([img1, img2], margin=2)
image.display(img3)
Explanation: Modifying an image
Basic example, rotates the image (rot_const) by 0.2 radians.
End of explanation
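# Several modifications can be combined in one config dict; for example
# (a sketch reusing the same calls as above) a small rotation plus a slight zoom:
img4 = canvas.modify_canvas(img1, {'rot_const': 0.1, 'zoom': 1.05})
image.display(image.concatenate_images([img1, img4], margin=2))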
config_identity = {
'center':(0.5, 0.5),
'shift':(0.0, 0.0), 'stretch':(1.0, 1.0),
'zoom':1.0, 'expand':0,
'rot_const':0.0, 'rot_ang':0, 'rot_dst':0,
'spiral_margin':0, 'spiral_periods':0,
'noise_rate':(0, 0), 'noise_margin':(0, 0)
}
config = {
'shift':(0.01, -0.01), 'stretch':(1.01, 0.99),
}
img = image.load_image('https://upload.wikimedia.org/wikipedia/commons/thumb/f/f4/The_Scream.jpg/471px-The_Scream.jpg')
canvas.view_canvas(config, (320, 320), 40, img=img, animate=True)
Explanation: Full list of modifications and defaults
center: normalized center point of image, or origin of canvas (default (0.5, 0.5))
shift: translation X and Y as a fraction of width and height
stretch: vertical and horizontal stretch of image (default 1.0, 1.0)
zoom: zoom in (>1.0) or out (<1.0), default (1.0)
expand: goes with zoom
rot_const: constant rotation around center, in radians
rot_ang: rotation as a function of angle from center, in radians
rot_dst: rotation as a function of distance from center, in radians
spiral_margin, spiral_periods: spiraling around center, distance and how many periods to go
noise_rate, noise_margin: Perlin noise speed, and maximum movement
End of explanation
config = {'noise_rate':(0.52, 0.35), 'noise_margin':(2.4, 4.3)}
canvas.view_canvas(config, (256, 256), 30, animate=True)
Explanation: Perlin noise
End of explanation
configs = [
{'center':(0.75, 0.25), 'rot_const':0.03, 'rot_ang':0, 'rot_dst':0.0},
{'spiral_margin':0.02, 'spiral_periods':4},
{'shift':(-0.01, 0.0), 'stretch':(1.0, 1.0), 'zoom':1.0, 'expand':0.5},
{'rot_const':0.0, 'rot_ang':0.02, 'rot_dst':0},
{'rot_const':0.0, 'rot_ang':0, 'rot_dst':0.0001},
{'zoom':0.99, 'expand':0}
]
img = image.load_image('https://upload.wikimedia.org/wikipedia/commons/6/6a/Mona_Lisa.jpg', (320, 360))
for c, config in enumerate(configs):
canvas.save_canvas_video('media/example_canvas_%02d.mp4'%c, config, (320, 360), 60, img=img)
Explanation: A bunch of different configs
Like masks, can save to disk and then display them, to save on notebook size. Run the next cell to generate the movies first.
End of explanation
videos = ['media/example_canvas_%02d.mp4'%c for c in range(len(configs))]
image.display_local(videos)
Explanation: After generating them, you can display them.
End of explanation |
2,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and manipulating data with Pandas
Author
Step1: 1. The Pandas Series Object
A Pandas Series is a one-dimensional array of indexed data. It can be created from a list or array as follows
Step2: As we see in the output, the Series wraps both a sequence of values and a sequence of indices, which we can access with the values and index attributes. The values are simply a familiar NumPy array
Step3: The index is an array-like object of type pd.Index, which we'll discuss in more detail momentarily.
Step4: Like with a NumPy array, data can be accessed by the associated index via the familiar Python square-bracket notation
Step5: Series as generalized NumPy array
From what we've seen so far, it may look like the Series object is basically interchangeable with a one-dimensional NumPy array. The essential difference is the presence of the index
Step6: And the item access works as expected
Step7: Series as specialized dictionary
In this way, you can think of a Pandas Series a bit like a specialization of a Python dictionary. A dictionary is a structure that maps arbitrary keys to a set of arbitrary values, and a Series is a structure which maps typed keys to a set of typed values. This typing is important
Step8: You can notice that in older versions of pandas (before 0.23) the index built from a dict is sorted lexicographically; newer versions keep the dictionary's insertion order instead
Step9: Unlike a dictionary, though, the Series also supports array-style operations such as slicing
Step10: 2. The Pandas DataFrame Object
The next fundamental structure in Pandas is the DataFrame. Like the Series object discussed in the previous section, the DataFrame can be thought of either as a generalization of a NumPy array, or as a specialization of a Python dictionary. We'll now take a look at each of these perspectives.
DataFrame as a generalized NumPy array
If a Series is an analog of a one-dimensional array with flexible indices, a DataFrame is an analog of a two-dimensional array with both flexible row indices and flexible column names.
Step11: Now that we have this along with the population Series from before, we can use a dictionary to construct a single two-dimensional object containing this information
Step12: DataFrame as specialized dictionary
Similarly, we can also think of a DataFrame as a specialization of a dictionary. Where a dictionary maps a key to a value, a DataFrame maps a column name to a Series of column data. For example, asking for the 'area' attribute returns the Series object containing the areas we saw earlier
Step13: Constructing DataFrame objects
A Pandas DataFrame can be constructed in a variety of ways. Here we'll give several examples.
From a single Series object
A DataFrame is a collection of Series objects, and a single-column DataFrame can be constructed from a single Series
Step14: From a dictionary of Series objects
As we saw before, a DataFrame can be constructed from a dictionary of Series objects as well
Step15: 3. Reading a CSV file and doing common Pandas operations
Step16: 4. Loading ful dataset | Python Code:
import numpy as np
from __future__ import print_function
import pandas as pd
pd.__version__
Explanation: Reading and manipulating data with Pandas
Author: Roberto Muñoz <br />
E-mail: rmunoz@uc.cl
This notebook shows how to create Series and DataFrames with Pandas, how to read CSV files, and how to create pivot tables. The first part is based on chapter 3 of the <a href="http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.01-Introducing-Pandas-Objects.ipynb">Python Data Science Handbook</a>.
End of explanation
data = pd.Series([0.25, 0.5, 0.75, 1.0])
data
Explanation: 1. The Pandas Series Object
A Pandas Series is a one-dimensional array of indexed data. It can be created from a list or array as follows:
End of explanation
data.values
Explanation: As we see in the output, the Series wraps both a sequence of values and a sequence of indices, which we can access with the values and index attributes. The values are simply a familiar NumPy array:
End of explanation
data.index
Explanation: The index is an array-like object of type pd.Index, which we'll discuss in more detail momentarily.
End of explanation
data[1]
Explanation: Like with a NumPy array, data can be accessed by the associated index via the familiar Python square-bracket notation:
End of explanation
data = pd.Series([0.25, 0.5, 0.75, 1.0],
index=['a', 'b', 'c', 'd'])
data
Explanation: Series as generalized NumPy array
From what we've seen so far, it may look like the Series object is basically interchangeable with a one-dimensional NumPy array. The essential difference is the presence of the index: while the Numpy Array has an implicitly defined integer index used to access the values, the Pandas Series has an explicitly defined index associated with the values.
End of explanation
data['b']
Explanation: And the item access works as expected:
End of explanation
population_dict = {'Arica y Parinacota': 243149,
'Antofagasta': 631875,
'Metropolitana de Santiago': 7399042,
'Valparaiso': 1842880,
                   'Bíobío': 2127902,
                   'Magallanes y Antártica Chilena': 165547}
population = pd.Series(population_dict)
population
Explanation: Series as specialized dictionary
In this way, you can think of a Pandas Series a bit like a specialization of a Python dictionary. A dictionary is a structure that maps arbitrary keys to a set of arbitrary values, and a Series is a structure which maps typed keys to a set of typed values. This typing is important: just as the type-specific compiled code behind a NumPy array makes it more efficient than a Python list for certain operations, the type information of a Pandas Series makes it much more efficient than Python dictionaries for certain operations.
End of explanation
population['Arica y Parinacota']
Explanation: In older versions of pandas (before 0.23), the index of a Series built from a dict is sorted lexicographically; newer versions keep the dictionary's insertion order instead.
End of explanation
population['Metropolitana de Santiago':'Valparaiso']
Explanation: Unlike a dictionary, though, the Series also supports array-style operations such as slicing:
End of explanation
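# For comparison (a small sketch): loc uses the explicit index labels,
# while iloc uses the implicit integer positions.
population.loc['Antofagasta']    # label-based access
population.iloc[1:3]             # positional slice (end excluded)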
# Area in km^2
area_dict = {'Arica y Parinacota': 16873.3,
'Antofagasta': 126049.1,
'Metropolitana de Santiago': 15403.2,
'Valparaiso': 16396.1,
             'Bíobío': 37068.7,
             'Magallanes y Antártica Chilena': 1382291.1}
area = pd.Series(area_dict)
area
Explanation: 2. The Pandas DataFrame Object
The next fundamental structure in Pandas is the DataFrame. Like the Series object discussed in the previous section, the DataFrame can be thought of either as a generalization of a NumPy array, or as a specialization of a Python dictionary. We'll now take a look at each of these perspectives.
DataFrame as a generalized NumPy array
If a Series is an analog of a one-dimensional array with flexible indices, a DataFrame is an analog of a two-dimensional array with both flexible row indices and flexible column names.
End of explanation
regions = pd.DataFrame({'population': population,
'area': area})
regions
regions.index
regions.columns
Explanation: Now that we have this along with the population Series from before, we can use a dictionary to construct a single two-dimensional object containing this information:
End of explanation
regions['area']
Explanation: DataFrame as specialized dictionary
Similarly, we can also think of a DataFrame as a specialization of a dictionary. Where a dictionary maps a key to a value, a DataFrame maps a column name to a Series of column data. For example, asking for the 'area' attribute returns the Series object containing the areas we saw earlier:
End of explanation
pd.DataFrame(population, columns=['population'])
Explanation: Constructing DataFrame objects
A Pandas DataFrame can be constructed in a variety of ways. Here we'll give several examples.
From a single Series object
A DataFrame is a collection of Series objects, and a single-column DataFrame can be constructed from a single Series:
End of explanation
pd.DataFrame({'population': population,
'area': area}, columns=['population', 'area'])
Explanation: From a dictionary of Series objects
As we saw before, a DataFrame can be constructed from a dictionary of Series objects as well:
End of explanation
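# A DataFrame can also be built from a two-dimensional NumPy array;
# the column and index labels below are arbitrary placeholders:
pd.DataFrame(np.random.rand(3, 2),
             columns=['foo', 'bar'],
             index=['a', 'b', 'c'])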
regiones_file='data/chile_regiones.csv'
provincias_file='data/chile_provincias.csv'
comunas_file='data/chile_comunas.csv'
regiones=pd.read_csv(regiones_file, header=0, sep=',')
provincias=pd.read_csv(provincias_file, header=0, sep=',')
comunas=pd.read_csv(comunas_file, header=0, sep=',')
print('regiones table: ', regiones.columns.values.tolist())
print('provincias table: ', provincias.columns.values.tolist())
print('comunas table: ', comunas.columns.values.tolist())
regiones.head()
provincias.head()
comunas.head()
regiones_provincias=pd.merge(regiones, provincias, how='outer')
regiones_provincias.head()
provincias_comunas=pd.merge(provincias, comunas, how='outer')
provincias_comunas.head()
regiones_provincias_comunas=pd.merge(regiones_provincias, comunas, how='outer')
regiones_provincias_comunas.index.name='ID'
regiones_provincias_comunas.head()
#regiones_provincias_comunas.to_csv('chile_regiones_provincia_comuna.csv', index=False)
Explanation: 3. Reading a CSV file and doing common Pandas operations
End of explanation
data_file='data/chile_demographic.csv'
data=pd.read_csv(data_file, header=0, sep=',')
data
data.sort_values('Poblacion')
data.sort_values('Poblacion', ascending=False)
(data.groupby(['Region'])[['Poblacion','Superficie']].sum())
(data.groupby(['Region'])[['Poblacion','Superficie']].sum()).sort_values('Poblacion', ascending=False)
data.sort_values(['RegionID']).groupby(['RegionID','Region'])[['Poblacion','Superficie']].sum()
Explanation: 4. Loading full dataset
End of explanation |
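# A pivot table gives the same kind of aggregation as the groupby calls above
# (a sketch that assumes the 'Region', 'Poblacion' and 'Superficie' columns used earlier):
data.pivot_table(index='Region', values=['Poblacion', 'Superficie'], aggfunc='sum')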
2,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Euler
Contents
Useful Functions
Multiples of 3 and 5
Even Fibonacci numbers
Largest prime factor
Largest palindrome product
Smallest multiple
Sum square difference
10001st prime
Largest product in a series
Special Pythagorean triplet
Summation of primes
Largest product in a grid
Highly divisible triangular number
Large sum
Longest Collatz sequence
Lattice paths
Power digit sum
Number letter counts
Maximum path sum I
Counting Sundays
Factorial digit sum
Amicable numbers
Names scores
Non-abundant sums
Lexicographic permutations
1000-digit Fibonacci number
Reciprocal cycles
Quadratic primes
Number spiral diagonals
Distinct powers
Problem 30
Problem 31
Problem 32
Problem 33
Problem 34
Circular primes
Double-base palindromes
Truncatable primes
Pandigital multiples
Integer right triangles
Champernowne's constant
Pandigital prime
Coded triangle numbers
Sub-string divisibility
Pentagon numbers
Triangular, pentagonal, and hexagonal
Goldbach's other conjecture
Distinct primes factors
Self powers
Prime permutations
Consecutive prime sum
<!--
51. [Prime digit replacements](#Problem-51)
52. [Permuted multiples](#Problem-52)
53. [Combinatoric selections](#Problem-53)
54. [Poker hands](#Problem-54)
55. [Lychrel numbers](#Problem-55)
56. [Powerful digit sum](#Problem-56)
57. [Square root convergents](#Problem-57)
58. [Spiral primes](#Problem-58)
59. [XOR decryption](#Problem-59)
60. [Prime pair sets](#Problem-60)
61. [Cyclical figurate numbers](#Problem-61)
62. [Cubic permutations](#Problem-62)
63. [Powerful digit counts](#Problem-63)
64. [Odd period square roots](#Problem-64)
65. [Convergents of e](#Problem-65)
66. [Diophantine equation](#Problem-66)
-->
\67. Maximum path sum II
<!--
68. [Magic 5-gon ring](#Problem-68)
69. [Totient maximum](#Problem-69)
70. [Totient permutation](#Problem-70)
71. [Ordered fractions](#Problem-71)
72. [Counting fractions](#Problem-72)
73. [Counting fractions in a range](#Problem-73)
74. [Digit factorial chains](#Problem-74)
75. [Singular integer right triangles](#Problem-75)
76. [Counting summations](#Problem-76)
77. [Prime summations](#Problem-77)
78. [Coin partitions](#Problem-78)
79. [Passcode derivation](#Problem-79)
80. [Square root digital expansion](#Problem-80)
81. [Path sum: two ways](#Problem-81)
-->
Step1: Problem 1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
Step2: Problem 2
Even Fibonacci numbers
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be
Step3: Problem 3
Largest prime factor
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
Step4: Problem 4
Largest palindrome product
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Some thoughts on strategy
Palindrome P = ab, where a and b are the two 3-digit numbers (which must be between 100 and 999).
Since P is palindromic, it can be represented as xyzzyx.
$$
P=100000x + 10000y + 1000z + 100z + 10y + x
$$
$$
P=100001x + 10010y + 1100z
$$
$$
P=11(9091x + 910y + 100z) = ab
$$
Since 11 is prime, either a or b must have a factor of 11.
Step5: Problem 5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Step6: Problem 6
Sum square difference
The sum of the squares of the first ten natural numbers is,
$$
1^2 + 2^2 + ... + 10^2 = 385
$$
The square of the sum of the first ten natural numbers is,
$$
(1 + 2 + ... + 10)^2 = 55^2 = 3025
$$
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is $3025 − 385 = 2640$.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
Step7: Problem 7
10001st prime
By listing the first six prime numbers
Step8: Problem 8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
Step9: Problem 9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, $a < b < c$, for which,
$$
a^2 + b^2 = c^2
$$
For example, $3^2 + 4^2 = 9 + 16 = 25 = 5^2$.
There exists exactly one Pythagorean triplet for which $a + b + c = 1000$.
Find the product $abc$.
Step10: Problem 10
Summation of primes
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.
Step11: Problem 11
Largest product in a grid
In the 20×20 grid below, four numbers along a diagonal line have been marked in red.
The product of these numbers is 26 × 63 × 78 × 14 = 1788696.
What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid?
Step12: Problem 12
Highly divisible triangular number
The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be
Step13: Problem 13
Large sum
Work out the first ten digits of the following one-hundred 50-digit numbers.
Step14: Problem 14
Longest Collatz sequence
The following iterative sequence is defined for the set of positive integers
Step15: Problem 15
Lattice paths
Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
Strategy planning
a - b - c - d
| | | |
e - f - g - h
| | | |
i - j - k - l
| | | |
m - n - o - p
A smaller problem is the number of routes from a to p. We already know that the number of routes from f to p is 6.
At each point we can work out the number of routes to p.
20 - 10 - 4 - 1
| | | |
10 - 6 - 3 - 1
| | | |
4 - 3 - 2 - 1
| | | |
1 - 1 - 1 - 0
You can see that at any point the number of routes is the sum of the number of routes for below and right. This isn't too difficult to program.
Step16: Problem 16
Power digit sum
$2^{15} = 32768$ and the sum of its digits is $3 + 2 + 7 + 6 + 8 = 26$.
What is the sum of the digits of the number $2^{1000}$?
Step17: Problem 17
Number letter counts
If the numbers 1 to 5 are written out in words
Step18: Problem 18
Maximum path sum I
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom of the triangle below
Step19: Problem 19
Counting Sundays
You are given the following information, but you may prefer to do some research for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?
Step20: Problem 20
Factorial digit sum
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
Step21: Problem 21
Amicable numbers
Let d(n) be defined as the sum of proper divisors of n (numbers less than n which divide evenly into n).
If d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and each of a and b are called amicable numbers.
For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and 142; so d(284) = 220.
Evaluate the sum of all the amicable numbers under 10000.
Step22: Problem 22
Name scores
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score.
For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
Step23: Problem 23
Non-abundant sums
A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
Step24: Problem 24
Lexicographic permutations
A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are
Step25: Problem 25
1000-digit Fibonacci number
The Fibonacci sequence is defined by the recurrence relation
Step26: Problem 26
Reciprocal cycles
A unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given
Step27: Problem 27
Quadratic primes
Euler discovered the remarkable quadratic formula
Step28: Problem 28
Number spiral diagonals
Starting with the number 1 and moving to the right in a clockwise direction a 5 by 5 spiral is formed as follows
Step29: Problem 29
Distinct powers
Consider all integer combinations of $a^b$ for 2 ≤ a ≤ 5 and 2 ≤ b ≤ 5
If they are then placed in numerical order, with any repeats removed, we get the following sequence of 15 distinct terms
Step30: Problem 35
Circular primes
The number, 197, is called a circular prime because all rotations of the digits
Step31: Problem 37
Truncatable primes
The number 3797 has an interesting property. Being prime itself, it is possible to continuously remove digits from left to right, and remain prime at each stage
Step32: Problem 41
Pandigital prime
We shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
Step33: Problem 42
Coded triangle numbers
The nth term of the sequence of triangle numbers is given by, tn = ½n(n+1); so the first ten triangle numbers are
Step34: Problem 43
Sub-string divisibility
The number, 1406357289, is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property.
Let $d_1$ be the 1st digit, $d_2$ be the 2nd digit, and so on. In this way, we note the following
Step35: Problem 44
Pentagonal numbers
Pentagonal numbers are generated by the formula, Pn=n(3n−1)/2. The first ten pentagonal numbers are
Step36: Problem 45
Triangular, pentagonal, and hexagonal
Triangle, pentagonal, and hexagonal numbers are generated by the following formulae
Step37: Problem 46
Goldbach's other conjecture
It was proposed by Christian Goldbach that every odd composite number can be written as the sum of a prime and twice a square.
$9 = 7 + 2×1^2$
$15 = 7 + 2×2^2$
$21 = 3 + 2×3^2$
$25 = 7 + 2×3^2$
$27 = 19 + 2×2^2$
$33 = 31 + 2×1^2$
It turns out that the conjecture was false.
What is the smallest odd composite that cannot be written as the sum of a prime and twice a square?
A composite number is any number greater than 1, which is not prime.
Step38: Problem 47
Distinct prime factors
The first two consecutive numbers to have two distinct prime factors are
Step39: Problem 48
Self powers
The series, 1^1 + 2^2 + 3^3 + ... + 10^10 = 10405071317.
Find the last ten digits of the series, 1^1 + 2^2 + 3^3 + ... + 1000^1000.
Step40: Problem 49
Prime permutations
The arithmetic sequence, 1487, 4817, 8147, in which each of the terms increases by 3330, is unusual in two ways
Step41: Problem 50
Consecutive prime sum
The prime 41, can be written as the sum of six consecutive primes
Step42: Problem 52
Permuted multiples
It can be seen that the number, 125874, and its double, 251748, contain exactly the same digits, but in a different order.
Find the smallest positive integer, x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits.
Step43: Problem 53 [NOT CHECKED]
Combinatoric selections
There are exactly ten ways of selecting three from five, 12345
Step44: Problem 55
Lychrel numbers
If we take 47, reverse and add, 47 + 74 = 121, which is palindromic.
Not all numbers produce palindromes so quickly. For example,
349 + 943 = 1292
1292 + 2921 = 4213
4213 + 3124 = 7337
That is, 349 took three iterations to arrive at a palindrome.
Although no one has proved it yet, it is thought that some numbers, like 196, never produce a palindrome. A number that never forms a palindrome through the reverse and add process is called a Lychrel number. Due to the theoretical nature of these numbers, and for the purpose of this problem, we shall assume that a number is Lychrel until proven otherwise. In addition you are given that for every number below ten-thousand, it will either (i) become a palindrome in less than fifty iterations, or, (ii) no one, with all the computing power that exists, has managed so far to map it to a palindrome. In fact, 10677 is the first number to be shown to require over fifty iterations before producing a palindrome
Step45: Problem 56
Powerful digit sum
A googol ($10^{100}$) is a massive number
Step46: Problem 58
Spiral primes
Starting with 1 and spiralling anticlockwise in the following way, a square spiral with side length 7 is formed.
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18 5 4 3 12 29
40 19 6 1 2 11 28
41 20 7 8 9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
It is interesting to note that the odd squares lie along the bottom right diagonal, but what is more interesting is that 8 out of the 13 numbers lying along both diagonals are prime; that is, a ratio of 8/13 ≈ 62%.
If one complete new layer is wrapped around the spiral above, a square spiral with side length 9 will be formed. If this process is continued, what is the side length of the square spiral for which the ratio of primes along both diagonals first falls below 10%?
Step47: Problem 67
Maximum path sum II
This is the same as Problem 18, so the code it reused.
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom in triangle.txt, a 15K text file containing a triangle with one-hundred rows. | Python Code:
def is_prime(n):
if n == 1: return False
if n < 4: return True
if n % 2 == 0: return False
if n < 9: return True # excluded 4, 6, 8 already
if n % 3 == 0: return False
i = 5
while i < n**(0.5) + 1:
if n % i == 0:
return False
if n % (i + 2) == 0:
return False
i += 6
return True
def fibonacci():
a, b = 0, 1
while 1:
yield a
a, b = b, a + b
def triangular():
current = 1
counter = 2
while 1:
yield current
current += counter
counter += 1
def collatz(start):
current = start
while 1:
yield current
if current == 1:
break
if current % 2 == 0:
current /= 2
else:
current = 3 * current + 1
def factors(n):
result = []
for i in range(1, int(n ** 0.5) + 1):
div, mod = divmod(n, i)
if mod == 0:
result.extend([i, div])
return result
def pfactors(n):
i = 2
factors = []
while i * i <= n:
if n % i:
i += 1
else:
n //= i
factors.append(i)
if n > 1:
factors.append(n)
return factors
def is_palindrome(n):
n = str(n)
return n == n[::-1]
from math import gcd
def lcm(a, b):
return a * b // gcd(a, b)
def erat_sieve(m):
primes = [True] * m
primes[0] = primes[1] = False
for i in range(2, m + 1):
for l in range(2, (m // i) + 1):
try:
primes[i * l] = False
except IndexError: pass
return [a for a, b in enumerate(primes) if b]
def factorial(n):
acc = 1
while n > 1:
acc *= n
n -= 1
return acc
def d(n):
return sum(set(factors(n))) - n
def is_pandigital(n):
n = str(n)
for i in range(len(n)):
if n.find(str(i + 1)) == -1:
return False
return True
def is_triangular(x):
n = (pow(1 + 8 * x, 0.5) - 1) / 2.0
d, r = divmod(n, 1)
return not bool(r)
def is_pentagonal(x):
n = (pow(1 + 24 * x, 0.5) + 1) / 6.0
d, r = divmod(n, 1)
return not bool(r)
def choose(n, r):
    # integer division keeps the result exact for large n
    return factorial(n) // (factorial(r) * factorial(n - r))
def is_lychrel(n, max_iterations=50):
orig = n
for it in range(max_iterations):
n = n + int(str(n)[::-1])
if is_palindrome(n):
#print(orig, it, False)
return False
#print(orig, None, True)
return True
Explanation: Project Euler
Contents
Useful Functions
Multiples of 3 and 5
Even Fibonacci numbers
Largest prime factor
Largest palindrome product
Smallest multiple
Sum square difference
10001st prime
Largest product in a series
Special Pythagorean triplet
Summation of primes
Largest product in a grid
Highly divisible triangular number
Large sum
Longest Collatz sequence
Lattice paths
Power digit sum
Number letter counts
Maximum path sum I
Counting Sundays
Factorial digit sum
Amicable numbers
Names scores
Non-abundant sums
Lexicographic permutations
1000-digit Fibonacci number
Reciprocal cycles
Quadratic primes
Number spiral diagonals
Distinct powers
Problem 30
Problem 31
Problem 32
Problem 33
Problem 34
Circular primes
Double-base palindromes
Truncatable primes
Pandigital multiples
Integer right triangles
Champernowne's constant
Pandigital prime
Coded triangle numbers
Sub-string divisibility
Pentagon numbers
Triangular, pentagonal, and hexagonal
Goldbach's other conjecture
Distinct primes factors
Self powers
Prime permutations
Consecutive prime sum
<!--
51. [Prime digit replacements](#Problem-51)
52. [Permuted multiples](#Problem-52)
53. [Combinatoric selections](#Problem-53)
54. [Poker hands](#Problem-54)
55. [Lychrel numbers](#Problem-55)
56. [Powerful digit sum](#Problem-56)
57. [Square root convergents](#Problem-57)
58. [Spiral primes](#Problem-58)
59. [XOR decryption](#Problem-59)
60. [Prime pair sets](#Problem-60)
61. [Cyclical figurate numbers](#Problem-61)
62. [Cubic permutations](#Problem-62)
63. [Powerful digit counts](#Problem-63)
64. [Odd period square roots](#Problem-64)
65. [Convergents of e](#Problem-65)
66. [Diophantine equation](#Problem-66)
-->
\67. Maximum path sum II
<!--
68. [Magic 5-gon ring](#Problem-68)
69. [Totient maximum](#Problem-69)
70. [Totient permutation](#Problem-70)
71. [Ordered fractions](#Problem-71)
72. [Counting fractions](#Problem-72)
73. [Counting fractions in a range](#Problem-73)
74. [Digit factorial chains](#Problem-74)
75. [Singular integer right triangles](#Problem-75)
76. [Counting summations](#Problem-76)
77. [Prime summations](#Problem-77)
78. [Coin partitions](#Problem-78)
79. [Passcode derivation](#Problem-79)
80. [Square root digital expansion](#Problem-80)
81. [Path sum: two ways](#Problem-81)
82. [Path sum: three ways](#Problem-82)
83. [Path sum: four ways](#Problem-83)
84. [Monopoly odds](#Problem-84)
85. [Counting rectangles](#Problem-85)
86. [Cuboid route](#Problem-86)
87. [Prime power triples](#Problem-87)
88. [Product-sum numbers](#Problem-88)
89. [Roman numerals](#Problem-89)
90. [Cube digit pairs](#Problem-90)
91. [Right triangles with integer coordinates](#Problem-91)
92. [Square digit chains](#Problem-92)
93. [Arithmetic expressions](#Problem-93)
94. [Almost equilateral triangles](#Problem-94)
95. [Amicable chains](#Problem-95)
96. [Su Doku](#Problem-96)
97. [Large non-Mersenne prime](#Problem-97)
98. [Anagramic squares](#Problem-98)
99. [Largest exponential](#Problem-99)
100. [Arranged probability](#Problem-100)
101. [Optimum polynomial](#Problem-101)
102. [Triangle containment](#Problem-102)
103. [Special subset sums: optimum](#Problem-103)
104. [Pandigital Fibonacci ends](#Problem-104)
105. [Special subset sums: testing](#Problem-105)
106. [Special subset sums: meta-testing](#Problem-106)
107. [Minimal network](#Problem-107)
108. [Diophantine reciprocals I](#Problem-108)
109. [Darts](#Problem-109)
110. [Diophantine reciprocals II](#Problem-110)
111. [Primes with runs](#Problem-111)
112. [Bouncy numbers](#Problem-112)
113. [Non-bouncy numbers](#Problem-113)
114. [Counting block combinations I](#Problem-114)
115. [Counting block combinations II](#Problem-115)
116. [Red, green or blue tiles](#Problem-116)
117. [Red, green, and blue tiles](#Problem-117)
118. [Pandigital prime sets](#Problem-118)
119. [Digit power sum](#Problem-119)
120. [Square remainders](#Problem-120)
121. [Disc game prize fund](#Problem-121)
122. [Efficient exponentiation](#Problem-122)
123. [Prime square remainders](#Problem-123)
124. [Ordered radicals](#Problem-124)
125. [Palindromic sums](#Problem-125)
126. [Cuboid layers](#Problem-126)
127. [abc-hits](#Problem-127)
128. [Hexagonal tile differences](#Problem-128)
129. [Repunit divisibility](#Problem-129)
130. [Composites with prime repunit property](#Problem-130)
131. [Prime cube partnership](#Problem-131)
132. [Large repunit factors](#Problem-132)
133. [Repunit nonfactors](#Problem-133)
134. [Prime pair connection](#Problem-134)
135. [Same differences](#Problem-135)
136. [Singleton difference](#Problem-136)
137. [Fibonacci golden nuggets](#Problem-137)
138. [Special isosceles triangles](#Problem-138)
139. [Pythagorean tiles](#Problem-139)
140. [Modified Fibonacci golden nuggets](#Problem-140)
141. [Investigating progressive numbers, n, which are also square](#Problem-141)
142. [Perfect Square Collection](#Problem-142)
143. [Investigating the Torricelli point of a triangle](#Problem-143)
144. [Investigating multiple reflections of a laser beam](#Problem-144)
145. [How many reversible numbers are there below one-billion?](#Problem-145)
146. [Investigating a Prime Pattern ](#Problem-146)
147. [Rectangles in cross-hatched grids](#Problem-147)
148. [Exploring Pascal's triangle](#Problem-148)
149. [Searching for a maximum-sum subsequence](#Problem-149)
150. [Searching a triangular array for a sub-triangle having minimum-sum](#Problem-150)
-->
Useful Functions
End of explanation
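# A few quick sanity checks of the helper functions above (values chosen by hand):
assert is_prime(13) and not is_prime(15)
assert sorted(set(factors(28))) == [1, 2, 4, 7, 14, 28]
assert pfactors(13195) == [5, 7, 13, 29]
assert is_palindrome(9009) and not is_palindrome(9010)
assert d(220) == 284 and d(284) == 220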
def multiples(max_n, a, b):
s = 0
for i in range(1, max_n):
if i % a == 0 or i % b == 0:
s += i
return s
print(multiples(1000, 3, 5))
Explanation: Problem 1
Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
End of explanation
def even_fibonacci(max_term):
f = fibonacci()
current = 0
total = 0
while current < max_term:
current = next(f)
if current % 2 == 0:
total += current
return total
print(even_fibonacci(4000000))
Explanation: Problem 2
Even Fibonacci numbers
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
End of explanation
def largest_prime_factor(n):
return max(pfactors(n))
print(largest_prime_factor(600851475143))
Explanation: Problem 3
Largest prime factor
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
End of explanation
def largest_palindrome_product():
largest_palindrome = 0
a = 999
while a >= 100:
if a % 11 == 0:
b = 999
db = 1
else:
b = 990
db = 11
while b >= a:
if a * b <= largest_palindrome:
break
if is_palindrome(a * b):
largest_palindrome = a * b
b -= db
a -= 1
return largest_palindrome
print(largest_palindrome_product())
Explanation: Problem 4
Largest palindrome product
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Some thoughts on strategy
Palindrome P = ab, where a and b are the two 3-digit numbers (which must be between 100 and 999).
Since P is palindromic, it can be represented as xyzzyx.
$$
P=100000x + 10000y + 1000z + 100z + 10y + x
$$
$$
P=100001x + 10010y + 1100z
$$
$$
P=11(9091x + 910y + 100z) = ab
$$
Since 11 is prime, either a or b must have a factor of 11.
End of explanation
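# A quick check of the divisibility argument above, using the well-known product
# 913 * 993 = 906609 (913 = 11 * 83, so the factor of 11 sits in a here):
assert is_palindrome(913 * 993)
assert 913 % 11 == 0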
from functools import reduce
def smallest_multiple(start, end):
numbers = range(start, end + 1)
return reduce(lcm, numbers)
print(smallest_multiple(1, 20))
Explanation: Problem 5
Smallest multiple
2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.
What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
End of explanation
def square_sum(l):
return sum(l) ** 2
def sum_square(l):
return sum(x*x for x in l)
def sum_square_difference(n):
numbers = range(1, n + 1)
return abs(square_sum(numbers) - sum_square(numbers))
print(sum_square_difference(100))
Explanation: Problem 6
Sum square difference
The sum of the squares of the first ten natural numbers is,
$$
1^2 + 2^2 + ... + 10^2 = 385
$$
The square of the sum of the first ten natural numbers is,
$$
(1 + 2 + ... + 10)^2 = 55^2 = 3025
$$
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is $3025 − 385 = 2640$.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
End of explanation
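# Cross-check using the closed forms: sum = n(n+1)/2 and sum of squares = n(n+1)(2n+1)/6
N = 100
print((N * (N + 1) // 2) ** 2 - N * (N + 1) * (2 * N + 1) // 6)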
def nth_prime(n):
count = 1
current = 3
while 1:
if is_prime(current):
count += 1
if count == n:
return current
current += 2
print(nth_prime(10001))
Explanation: Problem 7
10001st prime
By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.
What is the 10 001st prime number?
End of explanation
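# An alternative sketch: sieve up to a safe upper bound and index into the result.
# For n = 10001, p_n < n * (ln n + ln ln n), so 120000 is comfortably large enough.
print(erat_sieve(120000)[10000])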
def largest_series_product(fname):
with open(fname) as f:
thousand_digit_number = f.read()
tdn = ''.join(thousand_digit_number.split('\n'))
mul = lambda x, y: int(x) * int(y)
return max(reduce(mul, tdn[i:i+13]) for i in range(len(tdn)-13+1))
print(largest_series_product('p008_thousand_digits.txt'))
Explanation: Problem 8
Largest product in a series
The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
End of explanation
def special_pythagorean_triplet(n):
for i in range(1, n):
for j in range(1, n - i):
if i + j > 1000:
continue
k = n - i - j
if i ** 2 + j ** 2 == k ** 2:
return i * j * k
return None
print(special_pythagorean_triplet(1000))
Explanation: Problem 9
Special Pythagorean triplet
A Pythagorean triplet is a set of three natural numbers, $a < b < c$, for which,
$$
a^2 + b^2 = c^2
$$
For example, $3^2 + 4^2 = 9 + 16 = 25 = 5^2$.
There exists exactly one Pythagorean triplet for which $a + b + c = 1000$.
Find the product $abc$.
End of explanation
def prime_summation(n):
return sum(erat_sieve(n))
print(prime_summation(2000000))
Explanation: Problem 10
Summation of primes
The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
Find the sum of all the primes below two million.
End of explanation
def largest_grid_product(grid, adjacents):
    grid = [[int(i) for i in row.split()] for row in grid.split('\n') if row.strip()]
    n = len(grid)
    product = lambda sl: reduce(lambda a, b: a * b, sl)
    m = 0
    for i in range(n):
        for j in range(n):
            if j + adjacents <= n:     # horizontal run
                m = max(m, product(grid[i][j:j+adjacents]))
            if i + adjacents <= n:     # vertical run
                m = max(m, product([grid[i+k][j] for k in range(adjacents)]))
            if i + adjacents <= n and j + adjacents <= n:      # diagonal down-right
                m = max(m, product([grid[i+k][j+k] for k in range(adjacents)]))
            if i + adjacents <= n and j - adjacents + 1 >= 0:  # diagonal down-left
                m = max(m, product([grid[i+k][j-k] for k in range(adjacents)]))
    return m
with open('p011_grid.txt') as f:
print(largest_grid_product(f.read(), 4))
Explanation: Problem 11
Largest product in a grid
In the 20×20 grid below, four numbers along a diagonal line have been marked in red.
The product of these numbers is 26 × 63 × 78 × 14 = 1788696.
What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid?
End of explanation
def divisible_triangular(divisor_count):
    t = triangular()
    current = 1
    # "over" divisor_count divisors; set() avoids double-counting the root of perfect squares
    while len(set(factors(current))) <= divisor_count:
        current = next(t)
    return current
print(divisible_triangular(500))
Explanation: Problem 12
Highly divisible triangular number
The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred divisors?
End of explanation
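# The divisor count can also be read off the prime factorisation:
# if n = p1^e1 * p2^e2 * ..., the number of divisors is (e1 + 1)(e2 + 1)...
from collections import Counter
def divisor_count(n):
    count = 1
    for exponent in Counter(pfactors(n)).values():
        count *= exponent + 1
    return count
print(divisor_count(28))   # 6 divisors: 1, 2, 4, 7, 14, 28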
def large_sum():
with open('p013_numbers.txt') as f:
hundred_50digit_numbers = f.read()
numbers = [int(n) for n in hundred_50digit_numbers.split('\n')]
number_sum = sum(numbers)
return str(number_sum)[:10]
print(large_sum())
Explanation: Problem 13
Large sum
Work out the first ten digits of the following one-hundred 50-digit numbers.
End of explanation
def collatz_length(start):
return len(list(collatz(start)))
def longest_collatz(max_starting):
previous = {}
highest, chainlen = 0, 0
for s in range(max_starting, 0, -1):
chln = collatz_length(s)
if chln > chainlen:
highest, chainlen = s, chln
return highest
print(longest_collatz(1000000))
Explanation: Problem 14
Longest Collatz sequence
The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
NOTE: Once the chain starts the terms are allowed to go above one million.
End of explanation
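# A much faster variant caches chain lengths that have already been computed
# (an independent sketch; it does not change the solution above):
collatz_cache = {1: 1}
def collatz_length_memo(n):
    chain = []
    while n not in collatz_cache:
        chain.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    length = collatz_cache[n]
    for m in reversed(chain):
        length += 1
        collatz_cache[m] = length
    return length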
def lattice_paths(size):
size += 1
grid = [[None] * size for i in range(size)]
# setup the grid
grid[0][0] = 0
for i in range(1, size):
grid[0][i] = 1
grid[i][0] = 1
# populate the grid
for r in range(1, size):
for c in range(1, size):
grid[r][c] = grid[r-1][c] + grid[r][c-1]
# find the value in the other corner
return grid[size-1][size-1]
print(lattice_paths(20))
Explanation: Problem 15
Lattice paths
Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
Strategy planning
a - b - c - d
| | | |
e - f - g - h
| | | |
i - j - k - l
| | | |
m - n - o - p
A smaller problem is the number of routes from a to p. We already know that the number of routes from f to p is 6.
At each point we can work out the number of routes to p.
20 - 10 - 4 - 1
| | | |
10 - 6 - 3 - 1
| | | |
4 - 3 - 2 - 1
| | | |
1 - 1 - 1 - 0
You can see that at any point the number of routes is the sum of the number of routes for below and right. This isn't too difficult to program.
End of explanation
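# Cross-check: the number of routes through an n x n grid is the central binomial
# coefficient C(2n, n) (math.comb needs Python 3.8+):
from math import comb
print(comb(40, 20) == lattice_paths(20))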
def power_digit_sum(base, power):
return sum(map(int, list(str(base ** power))))
print(power_digit_sum(2, 1000))
Explanation: Problem 16
Power digit sum
$2^{15} = 32768$ and the sum of its digits is $3 + 2 + 7 + 6 + 8 = 26$.
What is the sum of the digits of the number $2^{1000}$?
End of explanation
def number_letter_counts(start, end):
words = {
0: '', 1: 'one', 2: 'two', 3: 'three', 4: 'four',
5: 'five', 6: 'six', 7: 'seven', 8: 'eight', 9: 'nine',
10: 'ten', 11: 'eleven', 12: 'twelve', 13: 'thirteen',
14: 'fourteen', 15: 'fifteen', 16: 'sixteen',
17: 'seventeen', 18: 'eighteen', 19: 'nineteen',
20: 'twenty', 30: 'thirty', 40: 'forty', 50: 'fifty',
60: 'sixty', 70: 'seventy', 80: 'eighty', 90: 'ninety',
1000: 'onethousand', '00': 'hundred', '&': 'and'
}
def number_count(n):
ncount = words.get(n)
if ncount:
#print('{n: <6} {w: <30} {c}'.format(n=n, w=ncount,
# c=len(ncount)))
return len(ncount)
else:
count = ''
h, rem = divmod(n, 100)
if n >= 100:
count += words.get(h) + words.get('00')
if rem == 0:
#print('{n: <6} {w: <30} {c}'.format(n=n, w=count, c=len(count)))
return len(count)
elif rem <= 20 or rem % 10 == 0:
count += words.get('&') + words.get(rem)
else:
t, u = divmod(rem, 10)
t = int(str(t) + '0')
count += words.get('&') + words.get(t)
count += words.get(u)
else:
if n <= 20 or n % 10 == 0:
count += words.get(n)
else:
t, u = divmod(n, 10)
t = int(str(t) + '0')
count += words.get(t) + words.get(u)
#print('{n: <6} {w: <30} {c}'.format(n=n, w=count, c=len(count)))
return len(count)
total = 0
for i in range(start, end + 1):
total += number_count(i)
return total
print(number_letter_counts(1, 1000))
Explanation: Problem 17
Number letter counts
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
End of explanation
maximum_path_pyramid1 = '''75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23'''
def maximum_path_sum(pyramid):
for row in range(len(pyramid)-1, 0, -1):
for col in range(0, row):
pyramid[row-1][col] += max(pyramid[row][col], pyramid[row][col+1])
return pyramid[0][0]
print(maximum_path_sum([[int(n) for n in row.split()] for row in maximum_path_pyramid1.split('\n')]))
Explanation: Problem 18
Maximum path sum I
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom of the triangle below:
75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23
NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67, is the same challenge with a triangle containing one-hundred rows; it cannot be solved by brute force, and requires a clever method!
Strategy
For a triangle with one layer, a solution is trivial - there is only one path.
12
For a triangle with 3 layers, the solution is still simple - add to each value in the middle row the larger of the two values directly beneath it, then do the same for the top value.
88
99 65
41 41 26
We can use the same principle for the rest of the triangle, by replacing each value with the sum of itself and the maximum of the 2 values below it, working from the bottom row upwards. Eventually the maximum sum will be the value at the top of the triangle. For the example above, this is demonstrated below.
88 + (99 + 41)
(99 + 41) (65 + 26)
41 41 26
228
140 91
41 41 26
End of explanation
from datetime import timedelta, date
def daterange(start_date, end_date):
for n in range(int ((end_date - start_date).days)):
yield start_date + timedelta(n)
def counting_sundays(start_date, end_date):
total = 0
for d in daterange(start_date, end_date):
if d.weekday() == 6 and d.day == 1:
total += 1
return total
print(counting_sundays(date(1901, 1, 1), date(2000, 12, 31)))
Explanation: Problem 19
Counting Sundays
You are given the following information, but you may prefer to do some research for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?
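The brute-force date walk above can be cross-checked with the standard calendar module (weekday 0 = Monday, 6 = Sunday):
# cross-check using calendar.weekday on the 1st of every month, 1901-2000
import calendar
print(sum(1 for y in range(1901, 2001) for m in range(1, 13)
          if calendar.weekday(y, m, 1) == 6))  # same count as counting_sundays above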
End of explanation
def factorial_digit_sum(n):
return sum(map(int, list(str(factorial(n)))))
print(factorial_digit_sum(100))
Explanation: Problem 20
Factorial digit sum
n! means n × (n - 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
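The factorial used in the one-liner above is assumed to come from an earlier from math import factorial in this notebook; an explicit standard-library equivalent is:
# explicit equivalent of the one-liner above
import math
print(sum(int(c) for c in str(math.factorial(100))))  # matches factorial_digit_sum(100)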
End of explanation
def amicable_numbers(max_n):
amicables = [False] * max_n
for a in range(1, max_n):
if not amicables[a]:
b = d(a)
if d(b) == a and a != b:
amicables[a] = amicables[b] = True
return sum([i for i, j in enumerate(amicables) if j])
print(amicable_numbers(10000))
Explanation: Problem 21
Amicable numbers
Let d(n) be defined as the sum of proper divisors of n (numbers less than n which divide evenly into n).
If d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and each of a and b are called amicable numbers.
For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and 142; so d(284) = 220.
Evaluate the sum of all the amicable numbers under 10000.
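The d(n) helper used by amicable_numbers is assumed to be defined earlier in the notebook; a minimal version consistent with that usage would be:
# minimal proper-divisor sum, as assumed by amicable_numbers above
def d(n):
    return sum(i for i in range(1, n // 2 + 1) if n % i == 0)
print(d(220), d(284))  # 284 220, as in the example above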
End of explanation
def name_scores():
with open('p022_names.txt') as f:
names = enumerate(sorted([n.upper().strip('"') for n in f.read().split(',')]))
alpha = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
def name_score(name_tuple):
place, name = name_tuple
name_value = 0
for letter in name:
name_value += alpha.find(letter) + 1
return name_value * (place + 1)
return sum(map(name_score, list(names)))
print(name_scores())
Explanation: Problem 22
Name scores
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score.
For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
End of explanation
def non_abundant_sums():
abundants = set(i for i in range(1, 28124) if d(i) > i)
def is_abundantsum(n):
return any(n - a in abundants for a in abundants)
return sum(j for j in range(1, 28124) if not is_abundantsum(j))
print(non_abundant_sums())
Explanation: Problem 23
Non-abundant sums
A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
End of explanation
from itertools import permutations
def lexicographic_permutations(digits, index):
return ''.join(map(str, sorted(list(permutations(digits)))[index - 1]))
print(lexicographic_permutations(range(10), 1000000))
Explanation: Problem 24
Lexicographic permutations
A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:
012 021 102 120 201 210
What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?
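A small implementation note on the solution above: itertools.permutations already yields tuples in lexicographic order when its input is sorted, so the explicit sort is redundant and the millionth permutation can be reached directly:
# permutations() emits results in lexicographic order for a sorted input
from itertools import islice, permutations
print(''.join(map(str, next(islice(permutations(range(10)), 999999, None)))))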
End of explanation
import math
def big_fibonacci(digit_count):
    ''' phi ** n / sqrt(5) > 10 ** (digit_count - 1)
        n * log(phi) > (digit_count - 1) * log(10) + log(5) / 2
        n > ((digit_count - 1) * log(10) + log(5) / 2) / log(phi)
    '''
    phi = (1 + 5 ** 0.5) / 2
    return math.floor((((digit_count - 1) * math.log(10) + math.log(5) / 2) / math.log(phi)) + 0.5)
print(big_fibonacci(1000))
Explanation: Problem 25
1000-digit Fibonacci number
The Fibonacci sequence is defined by the recurrence relation:
Fn = Fn-1 + Fn-2, where F1 = 1 and F2 = 1.
Hence the first 12 terms will be:
F1 = 1
F2 = 1
F3 = 2
F4 = 3
F5 = 5
F6 = 8
F7 = 13
F8 = 21
F9 = 34
F10 = 55
F11 = 89
F12 = 144
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000 digits?
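The closed-form estimate above can be cross-checked with plain (exact) integer arithmetic:
# brute-force cross-check: index of the first Fibonacci term with 1000 digits
a, b, idx = 1, 1, 2
while len(str(b)) < 1000:
    a, b = b, a + b
    idx += 1
print(idx)  # matches big_fibonacci(1000)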
End of explanation
def reciprocal_cycles(lim):
for d in erat_sieve(lim)[::-1]:
period = 1
while pow(10, period, d) != 1:
period += 1
if d - 1 == period:
return d
print(reciprocal_cycles(1000))
Explanation: Problem 26
Reciprocal cycles
A unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given:
1/2 = 0.5
1/3 = 0.(3)
1/4 = 0.25
1/5 = 0.2
1/6 = 0.1(6)
1/7 = 0.(142857)
1/8 = 0.125
1/9 = 0.(1)
1/10 = 0.1
Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be seen that 1/7 has a 6-digit recurring cycle.
Find the value of d < 1000 for which 1/d contains the longest recurring cycle in its decimal fraction part.
Strategy
By Fermat's little theorem, $10^{d-1} \equiv 1 \pmod{d}$ for any prime $d$ other than 2 and 5, so the recurring cycle of $1/d$ has length equal to the smallest $n$ with $10^n \equiv 1 \pmod{d}$, and that $n$ divides $d-1$.
A prime denominator can therefore yield at most $d-1$ repeating digits, so we search downwards from 1000 for the largest prime whose cycle length is exactly $d-1$ (a full reptend prime).
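As a small illustration of the cycle-length test (this mirrors the inner loop of the code above):
# cycle length of 1/7: smallest n with 10**n % 7 == 1
d, period = 7, 1
while pow(10, period, d) != 1:
    period += 1
print(period)  # 6, matching the 6-digit cycle of 1/7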
End of explanation
def quadratic_primes(lim):
besta = bestb = max_res= 0
for a in range(-1000, 1001):
for b in erat_sieve(1000):
if b < -(pow(40, 2) + 40 * a) or b < max_res:
continue
res = n = 0
while is_prime(pow(n, 2) + a * n + b):
res += 1
n += 1
if res > max_res:
besta, bestb, max_res = a, b, res
#print(besta, bestb, max_res)
return besta * bestb
print(quadratic_primes(1000))
Explanation: Problem 27
Quadratic primes
Euler discovered the remarkable quadratic formula:
$$
n^2+n+41
$$
It turns out that the formula will produce 40 primes for the consecutive integer values $0 \leq n \leq 39$. However, when $n=40, 40^2+40+41=40(40+1)+41$ is divisible by 41, and certainly when $n=41, 41^2+41+41$ is clearly divisible by 41.
The incredible formula $n^2-79n+1601$ was discovered, which produces 80 primes for the consecutive values $0 \leq n \leq 79$. The product of the coefficients, -79 and 1601, is -126479.
Considering quadratics of the form:
$n^2+an+b$, where $|a|<1000$ and $|b| \leq 1000$
where $|n|$ is the modulus/absolute value of $n$
e.g. $|11|=11$ and $|-4|=4$
Find the product of the coefficients, $a$ and $b$, for the quadratic expression that produces the maximum number of primes for consecutive values of $n$, starting with $n=0$.
Strategy
$|b|$ must be prime, since $n=0$ must result in a prime.
If for a combination (a, b), we get m consecutive primes, it must be true that b > m.
When $a < 0$, then $b > -(n^2 + an)$ for every $n$ in the run, because the values produced must be positive primes. Since $n^2 + n + 41$ already gives 40 primes, interesting values of $a$ and $b$ should satisfy $b > -(40^2 + 40a)$.
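A quick sanity check of Euler's original quadratic, reusing the same is_prime helper as the solution above:
# verify that n^2 + n + 41 is prime for n = 0..39
print(all(is_prime(n * n + n + 41) for n in range(40)))  # True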
End of explanation
def number_spiral_diagonals(size):
total = 1
counter = 1
skip = 2
while counter < size * size:
for i in range(4):
counter += skip
total += counter
skip += 2
return total
print(number_spiral_diagonals(1001))
Explanation: Problem 28
Number spiral diagonals
Starting with the number 1 and moving to the right in a clockwise direction a 5 by 5 spiral is formed as follows:
21 22 23 24 25
20 7 8 9 10
19 6 1 2 11
18 5 4 3 12
17 16 15 14 13
It can be verified that the sum of the numbers on the diagonals is 101.
What is the sum of the numbers on the diagonals in a 1001 by 1001 spiral formed in the same way?
End of explanation
from itertools import product
def distinct_powers(a_range):
power = lambda t: pow(t[0], t[1])
terms = set(map(power, product(a_range, repeat=2)))
return len(terms)
print(distinct_powers(range(2, 101)))
Explanation: Problem 29
Distinct powers
Consider all integer combinations of $a^b$ for 2 ≤ a ≤ 5 and 2 ≤ b ≤ 5
If they are then placed in numerical order, with any repeats removed, we get the following sequence of 15 distinct terms:
4, 8, 9, 16, 25, 27, 32, 64, 81, 125, 243, 256, 625, 1024, 3125
How many distinct terms are in the sequence generated by $a^b$ for 2 ≤ a ≤ 100 and 2 ≤ b ≤ 100?
End of explanation
from itertools import permutations
def circular_primes(max_n):
def rotations(n):
rots = [n]
n = str(n)
for i in range(len(n) - 1):
n = n[1:] + n[0]
rots.append(int(n))
return rots
cp = set()
cp.add(2)
for i in range(3, max_n, 2):
if all(map(is_prime, rotations(i))):
cp.add(i)
return len(cp)
print(circular_primes(1000000))
Explanation: Problem 35
Circular primes
The number, 197, is called a circular prime because all rotations of the digits: 197, 971, and 719, are themselves prime.
There are thirteen such primes below 100: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, and 97.
How many circular primes are there below one million?
End of explanation
'''
Observations used below:
- A multi-digit truncatable prime must start with 2, 3, 5 or 7 and end in 3 or 7.
- Apart from the leading digit it cannot contain 2 or 5, since a right
  truncation ending in 2 or 5 would be composite.
- The two-digit truncatable primes are 23, 37, 53 and 73; longer candidates are
  built by combining short primes (13, 17, 37, 73, 97) with extra digits and
  testing the result.
'''
from itertools import product, chain
def primely_truncatable(n):
n = int(n)
def left_truncate(l):
ns = str(l)[1:]
if len(ns) == 0:
return True
else:
if is_prime(int(ns)):
return left_truncate(int(ns))
return False
def right_truncate(r):
ns = str(r)[:-1]
if len(ns) == 0:
return True
else:
if is_prime(int(ns)):
return right_truncate(int(ns))
return False
if is_prime(n):
l, r = left_truncate(n), right_truncate(n)
return l and r
return False
def get_truncatables():
primes = [13, 17, 37, 73, 97]
additional = [2, 3, 5, 7, 9]
working = []
result = [23, 37, 73, 53]
nlen = 2
while len(result) < 11 and nlen < 7:
ptuple = chain(product(additional, primes),product(primes, additional))
for p in ptuple:
if 2 in p[1:] or 5 in p[1:]:
continue
p = ''.join(map(str, p))
if primely_truncatable(p):
result.append(int(p))
if is_prime(int(p)):
working.append(int(p))
result = list(set(result))
primes = working
nlen += 1
return result
double_truncatables = get_truncatables()
print(double_truncatables)
print(sum(double_truncatables))
Explanation: Problem 37
Truncatable primes
The number 3797 has an interesting property. Being prime itself, it is possible to continuously remove digits from left to right, and remain prime at each stage: 3797, 797, 97, and 7. Similarly we can work from right to left: 3797, 379, 37, and 3.
Find the sum of the only eleven primes that are both truncatable from left to right and right to left.
NOTE: 2, 3, 5, and 7 are not considered to be truncatable primes.
End of explanation
from itertools import permutations
def pandigital_prime(n_digits):
if n_digits == 0:
return False
digits = [1, 2, 3, 4, 5, 6, 7, 8, 9][::-1]
pandigitals = permutations(digits[9-n_digits:10])
for i in pandigitals:
n = int(''.join(map(str, i)))
if is_prime(n):
return n
return pandigital_prime(n_digits-1)
print(pandigital_prime(9))
Explanation: Problem 41
Pandigital prime
We shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.
What is the largest n-digit pandigital prime that exists?
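A shortcut consistent with the recursive search above: every 8- or 9-digit pandigital has a digit sum divisible by 3, so it cannot be prime, and the answer has at most 7 digits.
# digit sums of 1-9 and 1-8 pandigitals are multiples of 3
print(sum(range(1, 10)) % 3, sum(range(1, 9)) % 3)  # 0 0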
End of explanation
def coded_triangles(fname):
def score(w):
alpha = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
total = 0
for letter in w:
total += alpha.find(letter) + 1
return total
with open(fname) as f:
words = [w.strip('"') for w in f.read().split(',')]
ct_words = [w for w in words if is_triangular(score(w))]
return len(ct_words)
print(coded_triangles('p042_words.txt'))
Explanation: Problem 42
Coded triangle numbers
The nth term of the sequence of triangle numbers is given by, tn = ½n(n+1); so the first ten triangle numbers are:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
By converting each letter in a word to a number corresponding to its alphabetical position and adding these values we form a word value. For example, the word value for SKY is 19 + 11 + 25 = 55 = t10. If the word value is a triangle number then we shall call the word a triangle word.
Using words.txt (right click and 'Save Link/Target As...'), a 16K text file containing nearly two-thousand common English words, how many are triangle words?
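The is_triangular helper is assumed to be defined earlier in the notebook; a minimal version consistent with its use above is:
# t is triangular exactly when 8t + 1 is a perfect square
def is_triangular(t):
    r = int((8 * t + 1) ** 0.5)
    return r * r == 8 * t + 1
print(is_triangular(55), is_triangular(56))  # True False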
End of explanation
from itertools import permutations
def sub_string_divisible():
primes = [2, 3, 5, 7, 11, 13, 17]
def is_ssd(tup):
for d in range(1, len(tup) - 2):
n = int(''.join(map(str, tup[d:d+3])))
if n % primes[d-1] != 0:
return False
return True
digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
ssds = [int(''.join(map(str, i))) for i in permutations(digits) if is_ssd(i)]
#print(ssds)
return sum(ssds)
print(sub_string_divisible())
Explanation: Problem 43
Sub-string divisibility
The number, 1406357289, is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property.
Let $d_1$ be the 1st digit, $d_2$ be the 2nd digit, and so on. In this way, we note the following:
$d_2d_3d_4=406$ is divisible by 2
$d_3d_4d_5=063$ is divisible by 3
$d_4d_5d_6=635$ is divisible by 5
$d_5d_6d_7=357$ is divisible by 7
$d_6d_7d_8=572$ is divisible by 11
$d_7d_8d_9=728$ is divisible by 13
$d_8d_9d_{10}=289$ is divisible by 17
Find the sum of all 0 to 9 pandigital numbers with this property.
End of explanation
def pentagonal_numbers():
res = 0
found = False
i = 1
while not found:
i += 1
n = i * (3 * i - 1) / 2
for j in range(i - 1, 0, -1):
m = j * (3 * j - 1) / 2
if is_pentagonal(abs(n - m)) and is_pentagonal(n + m):
res = abs(n - m)
found = True
break
return int(res)
print(pentagonal_numbers())
Explanation: Problem 44
Pentagonal numbers
Pentagonal numbers are generated by the formula, Pn=n(3n-1)/2. The first ten pentagonal numbers are:
1, 5, 12, 22, 35, 51, 70, 92, 117, 145, ...
It can be seen that P4 + P7 = 22 + 70 = 92 = P8. However, their difference, 70 - 22 = 48, is not pentagonal.
Find the pair of pentagonal numbers, Pj and Pk, for which their sum and difference are pentagonal and D = |Pk - Pj| is minimised; what is the value of D?
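The is_pentagonal helper is another function assumed from earlier in the notebook; the standard inverse test it would implement is:
# x is pentagonal iff 24x + 1 is a perfect square whose root r satisfies (r + 1) % 6 == 0
def is_pentagonal(x):
    if x < 1:
        return False
    r = int((24 * x + 1) ** 0.5)
    return r * r == 24 * x + 1 and (r + 1) % 6 == 0
print(is_pentagonal(92), is_pentagonal(48))  # True False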
End of explanation
def tph(h_n):
def h(n):
return n * (2 * n - 1)
h_n += 1
current = h(h_n)
while not is_pentagonal(current):
h_n += 1
current = h(h_n)
return current
print(tph(143))
Explanation: Problem 45
Triangular, pentagonal, and hexagonal
Triangle, pentagonal, and hexagonal numbers are generated by the following formulae:
Triangle: $T_n=n(n+1)/2$ --> 1, 3, 6, 10, 15, ...
Pentagonal: $P_n=n(3n-1)/2$ --> 1, 5, 12, 22, 35, ...
Hexagonal: $H_n=n(2n-1)$ --> 1, 6, 15, 28, 45, ...
It can be verified that $T_{285}$ = $P_{165}$ = $H_{143}$ = 40755.
Find the next triangle number that is also pentagonal and hexagonal.
Strategy
Obviously all hexagonal numbers are also triangular, so we only need to find overlap of hexagonals and pentagonals, and we can start at $H_{143}$.
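The identity behind that shortcut is $H_n = n(2n-1) = T_{2n-1}$; a quick numeric check:
# every hexagonal number is also triangular: H(n) == T(2n - 1)
T = lambda n: n * (n + 1) // 2
H = lambda n: n * (2 * n - 1)
print(all(H(n) == T(2 * n - 1) for n in range(1, 1000)))  # True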
End of explanation
def goldbachs_other_conjecture():
n = 5
primes = set()
while 1:
if (all(n % p for p in primes)):
primes.add(n)
else:
if not any((n - 2 * i * i) in primes for i in range(1, n)):
return n
n += 2
print(goldbachs_other_conjecture())
Explanation: Problem 46
Goldbach's other conjecture
It was proposed by Christian Goldbach that every odd composite number can be written as the sum of a prime and twice a square.
$9 = 7 + 2 \times 1^2$
$15 = 7 + 2 \times 2^2$
$21 = 3 + 2 \times 3^2$
$25 = 7 + 2 \times 3^2$
$27 = 19 + 2 \times 2^2$
$33 = 31 + 2 \times 1^2$
It turns out that the conjecture was false.
What is the smallest odd composite that cannot be written as the sum of a prime and twice a square?
A composite number is any number greater than 1, which is not prime.
End of explanation
def distinct_prime_factors(starting_number, consecutive_numbers, distinct_factors):
def has_distinct_factors(n):
return len(set(pfactors(n))) == distinct_factors
numbers = [starting_number + i for i in range(consecutive_numbers)]
while not all(map(has_distinct_factors, numbers)):
numbers = [n + 1 for n in numbers]
        #print(numbers)  # debug output; printing every step is far too noisy for this search
return numbers[0]
print(distinct_prime_factors(2*3*5*7, 4, 4))
Explanation: Problem 47
Distinct prime factors
The first two consecutive numbers to have two distinct prime factors are:
14 = 2 ร 7
15 = 3 ร 5
The first three consecutive numbers to have three distinct prime factors are:
644 = 2ยฒ ร 7 ร 23
645 = 3 ร 5 ร 43
646 = 2 ร 17 ร 19.
Find the first four consecutive integers to have four distinct prime factors. What is the first of these numbers?
End of explanation
def self_powers(series_end):
return str(sum(pow(i, i) for i in range(1, series_end + 1)))[-10:]
print(self_powers(1000))
Explanation: Problem 48
Self powers
The series, 1^1 + 2^2 + 3^3 + ... + 10^10 = 10405071317.
Find the last ten digits of the series, 1^1 + 2^2 + 3^3 + ... + 1000^1000.
End of explanation
def prime_permutations():
    diff = 3330
    res = []
    for i in range(1001, 3333, 2):
        if is_prime(i):
            idigits = sorted(list(str(i)))
            poss = [i]
            for c in range(2):
                i += diff
                if is_prime(i) and sorted(list(str(i))) == idigits:
                    poss.append(i)
                else:
                    poss = []
                    break
            if poss:
                res.append(poss)
    # res[0] is the sequence given in the problem statement (1487, 4817, 8147);
    # the other sequence found is the answer.
    return ''.join(map(str, res[1]))
print(prime_permutations())
Explanation: Problem 49
Prime permutations
The arithmetic sequence, 1487, 4817, 8147, in which each of the terms increases by 3330, is unusual in two ways: (i) each of the three terms are prime, and, (ii) each of the 4-digit numbers are permutations of one another.
There are no arithmetic sequences made up of three 1-, 2-, or 3-digit primes, exhibiting this property, but there is one other 4-digit increasing sequence.
What 12-digit number do you form by concatenating the three terms in this sequence?
End of explanation
def consecutive_prime_sum(lim):
    primes = erat_sieve(lim // 100)  # primes below lim // 100 are plenty for these consecutive sums
prime_sum = [0]
for p in primes:
prime_sum.append(prime_sum[-1] + p)
if prime_sum[-1] >= lim: break
c = len(prime_sum)
terms = 1
for i in range(c):
for j in range(c-1, i+terms, -1):
n = prime_sum[j] - prime_sum[i]
if (j-i > terms and is_prime(n)):
terms, max_prime = j-i, n
break
#return terms, max_prime
return max_prime
print(consecutive_prime_sum(1000000))
Explanation: Problem 50
Consecutive prime sum
The prime 41, can be written as the sum of six consecutive primes:
41 = 2 + 3 + 5 + 7 + 11 + 13
This is the longest sum of consecutive primes that adds to a prime below one-hundred.
The longest sum of consecutive primes below one-thousand that adds to a prime, contains 21 terms, and is equal to 953.
Which prime, below one-million, can be written as the sum of the most consecutive primes?
Strategy
Need to find out which prime to start with, and how many primes to add.
Build an array of sums of primes, and then search it to find the highest one.
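The prefix-sum trick used above rests on sum(primes[i:j]) == prime_sum[j] - prime_sum[i]; for the small example in the statement:
# consecutive-prime sums via prefix sums, reproducing 41 = 2 + 3 + 5 + 7 + 11 + 13
primes = [2, 3, 5, 7, 11, 13]
prefix = [0]
for p in primes:
    prefix.append(prefix[-1] + p)
print(prefix[6] - prefix[0])  # 41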
End of explanation
def permuted_multiples():
    # x and 6x must have the same number of digits; the answer turns out to be
    # six digits long, so the search starts at 100000.
    for i in range(100000, 200000):
        comp = sorted(list(str(i)))
        success = True
        for mult in range(2, 7):
            if sorted(list(str(i * mult))) != comp:
                success = False
                break
        if success:
            return i
print(permuted_multiples())
Explanation: Problem 52
Permuted multiples
It can be seen that the number, 125874, and its double, 251748, contain exactly the same digits, but in a different order.
Find the smallest positive integer, x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits.
End of explanation
def combinatoric_selections(max_n, min_val):
values = []
for n in range(23, max_n + 1):
for r in range(2, n - 1):
nCr = choose(n, r)
if nCr > min_val:
values.append((n, r, nCr))
#print(values)
return len(values)
print(combinatoric_selections(100, 1000000))
Explanation: Problem 53 [NOT CHECKED]
Combinatoric selections
There are exactly ten ways of selecting three from five, 12345:
123, 124, 125, 134, 135, 145, 234, 235, 245, and 345
In combinatorics, we use the notation, $^5C_3$ = 10.
In general,
$$
^nC_r = \frac{n!}{r!(n-r)!}
$$
where r ≤ n, n! = n×(n-1)×...×3×2×1, and 0! = 1.
It is not until n = 23, that a value exceeds one-million: $^{23}C_{10}$ = 1144066.
How many, not necessarily distinct, values of $^nC_r$, for 1 ≤ n ≤ 100, are greater than one-million?
Strategy
We know that it is only at $^{23}C_{10}$ that a value first exceeds one million, so we don't need to start looking until n = 23.
If r = 1 or n-1, nCr = n. If r = n, nCr = 1. So we should only look at values of r where 2 <= r <= n-2.
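A quick cross-check of those bounds using the standard library (rather than the choose helper assumed above):
from math import comb
print(comb(23, 10))                          # 1144066, the first value above one million
print(max(comb(22, r) for r in range(23)))   # the largest value for n = 22 stays below one million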
End of explanation
def lychrel_numbers(max_n):
return len([i for i in range(max_n) if is_lychrel(i, 50)])
print(lychrel_numbers(10000))
Explanation: Problem 55
Lychrel numbers
If we take 47, reverse and add, 47 + 74 = 121, which is palindromic.
Not all numbers produce palindromes so quickly. For example,
349 + 943 = 1292
1292 + 2921 = 4213
4213 + 3124 = 7337
That is, 349 took three iterations to arrive at a palindrome.
Although no one has proved it yet, it is thought that some numbers, like 196, never produce a palindrome. A number that never forms a palindrome through the reverse and add process is called a Lychrel number. Due to the theoretical nature of these numbers, and for the purpose of this problem, we shall assume that a number is Lychrel until proven otherwise. In addition you are given that for every number below ten-thousand, it will either (i) become a palindrome in less than fifty iterations, or, (ii) no one, with all the computing power that exists, has managed so far to map it to a palindrome. In fact, 10677 is the first number to be shown to require over fifty iterations before producing a palindrome: 4668731596684224866951378664 (53 iterations, 28-digits).
Surprisingly, there are palindromic numbers that are themselves Lychrel numbers; the first example is 4994.
How many Lychrel numbers are there below ten-thousand?
NOTE: Wording was modified slightly on 24 April 2007 to emphasise the theoretical nature of Lychrel numbers.
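The is_lychrel helper is assumed to be defined earlier in the notebook; a minimal version matching the call is_lychrel(i, 50) above would be:
# reverse-and-add test: True if no palindrome appears within max_iterations additions
def is_lychrel(n, max_iterations):
    for _ in range(max_iterations):
        n += int(str(n)[::-1])
        if str(n) == str(n)[::-1]:
            return False
    return True
print(is_lychrel(47, 50), is_lychrel(196, 50))  # False True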
End of explanation
def powerful_digit_sum(max_ab):
    def digit_sum(n):
        return sum(int(i) for i in str(n))
    all_digit_sums = []
    for a in range(max_ab):
        for b in range(max_ab):
            all_digit_sums.append((a, b, digit_sum(pow(a, b))))
    #print(all_digit_sums)
    return max(d for a, b, d in all_digit_sums)
print(powerful_digit_sum(100))
Explanation: Problem 56
Powerful digit sum
A googol ($10^{100}$) is a massive number: one followed by one-hundred zeros; $100^{100}$ is almost unimaginably large: one followed by two-hundred zeros. Despite their size, the sum of the digits in each number is only 1.
Considering natural numbers of the form, $a^b$, where $a$, $b$ < 100, what is the maximum digital sum?
End of explanation
def spiral_primes(ratio_limit):
ratio = 1
n = 1
step = 2
side_length = 1
diagonal_count = 1.0
prime_count = 0
while ratio >= ratio_limit:
diagonal_count += 4
side_length += 2
for i in range(4):
n += step
if is_prime(n):
prime_count += 1
ratio = prime_count / diagonal_count
#print((side_length, prime_count, diagonal_count, ratio))
step += 2
return side_length
print(spiral_primes(0.1))
Explanation: Problem 58
Spiral primes
Starting with 1 and spiralling anticlockwise in the following way, a square spiral with side length 7 is formed.
37 36 35 34 33 32 31
38 17 16 15 14 13 30
39 18 5 4 3 12 29
40 19 6 1 2 11 28
41 20 7 8 9 10 27
42 21 22 23 24 25 26
43 44 45 46 47 48 49
It is interesting to note that the odd squares lie along the bottom right diagonal, but what is more interesting is that 8 out of the 13 numbers lying along both diagonals are prime; that is, a ratio of 8/13 ≈ 62%.
If one complete new layer is wrapped around the spiral above, a square spiral with side length 9 will be formed. If this process is continued, what is the side length of the square spiral for which the ratio of primes along both diagonals first falls below 10%?
End of explanation
with open('p067_triangle.txt') as f:
print(maximum_path_sum([[int(n) for n in s.split()] for s in f.readlines()]))
Explanation: Problem 67
Maximum path sum II
This is the same problem as Problem 18 with a larger triangle, so the code is reused.
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom in triangle.txt, a 15K text file containing a triangle with one-hundred rows.
End of explanation |
2,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Seminar
Step5: try out snapshots
Step16: MCTS
Step18: Main MCTS loop
With all we implemented, MCTS boils down to a trivial piece of code.
Step19: Plan and execute
In this section, we use the MCTS implementation to find optimal policy.
Step20: Submit to Coursera | Python Code:
from collections import namedtuple
from pickle import dumps, loads
from gym.core import Wrapper
# a container for get_result function below. Works just like tuple, but prettier
ActionResult = namedtuple(
"action_result", ("snapshot", "observation", "reward", "is_done", "info"))
class WithSnapshots(Wrapper):
Creates a wrapper that supports saving and loading environemnt states.
Required for planning algorithms.
This class will have access to the core environment as self.env, e.g.:
- self.env.reset() #reset original env
- self.env.ale.cloneState() #make snapshot for atari. load with .restoreState()
- ...
You can also use reset, step and render directly for convenience.
- s, r, done, _ = self.step(action) #step, same as self.env.step(action)
- self.render(close=True) #close window, same as self.env.render(close=True)
def get_snapshot(self, render=False):
:returns: environment state that can be loaded with load_snapshot
Snapshots guarantee same env behaviour each time they are loaded.
Warning! Snapshots can be arbitrary things (strings, integers, json, tuples)
Don't count on them being pickle strings when implementing MCTS.
Developer Note: Make sure the object you return will not be affected by
anything that happens to the environment after it's saved.
You shouldn't, for example, return self.env.
In case of doubt, use pickle.dumps or deepcopy.
if render:
self.render() # close popup windows since we can't pickle them
self.close()
if self.unwrapped.viewer is not None:
self.unwrapped.viewer.close()
self.unwrapped.viewer = None
return dumps(self.env)
def load_snapshot(self, snapshot, render=False):
Loads snapshot as current env state.
Should not change snapshot inplace (in case of doubt, deepcopy).
assert not hasattr(self, "_monitor") or hasattr(
self.env, "_monitor"), "can't backtrack while recording"
if render:
self.render() # close popup windows since we can't load into them
self.close()
self.env = loads(snapshot)
def get_result(self, snapshot, action):
A convenience function that
- loads snapshot,
- commits action via self.step,
- and takes snapshot again :)
:returns: next snapshot, next_observation, reward, is_done, info
Basically it returns next snapshot and everything that env.step would have returned.
#<your code here load, commit, take snapshot >
self.load_snapshot(snapshot, render=False)
next_observation, reward, is_done, info = self.step(action)
next_snapshot = self.get_snapshot()
return ActionResult(next_snapshot, next_observation, reward, is_done, info) #fill in variables
Explanation: Seminar: Monte-carlo tree search
In this seminar, we'll implement a vanilla MCTS planning and use it to solve some Gym envs.
But before we do that, we first need to modify gym env to allow saving and loading game states to facilitate backtracking.
End of explanation
# make env
env = WithSnapshots(gym.make("CartPole-v0"))
env.reset()
n_actions = env.action_space.n
print("initial_state:")
plt.imshow(env.render('rgb_array'))
env.close()
# create first snapshot
snap0 = env.get_snapshot()
# play without making snapshots (faster)
while True:
is_done = env.step(env.action_space.sample())[2]
if is_done:
print("Whoops! We died!")
break
print("final state:")
plt.imshow(env.render('rgb_array'))
env.close()
# reload initial state
env.load_snapshot(snap0)
print("\n\nAfter loading snapshot")
plt.imshow(env.render('rgb_array'))
env.close()
# get outcome (snapshot, observation, reward, is_done, info)
res = env.get_result(snap0, env.action_space.sample())
snap1, observation, reward = res[:3]
# second step
res2 = env.get_result(snap1, env.action_space.sample())
Explanation: try out snapshots:
End of explanation
assert isinstance(env,WithSnapshots)
class Node:
a tree node for MCTS
#metadata:
parent = None #parent Node
value_sum = 0. #sum of state values from all visits (numerator)
times_visited = 0 #counter of visits (denominator)
def __init__(self,parent,action,):
Creates and empty node with no children.
Does so by commiting an action and recording outcome.
:param parent: parent Node
:param action: action to commit from parent Node
self.parent = parent
self.action = action
self.children = set() #set of child nodes
#get action outcome and save it
res = env.get_result(parent.snapshot, action)
self.snapshot, self.observation, self.immediate_reward, self.is_done,_ = res
def is_leaf(self):
return len(self.children)==0
def is_root(self):
return self.parent is None
def get_mean_value(self):
return self.value_sum / self.times_visited if self.times_visited !=0 else 0
def ucb_score(self, scale=10, max_value=1e100):
Computes ucb1 upper bound using current value and visit counts for node and it's parent.
:param scale: Multiplies upper bound by that. From hoeffding inequality, assumes reward range to be [0,scale].
:param max_value: a value that represents infinity (for unvisited nodes)
if self.times_visited == 0:
return max_value
#compute ucb-1 additive component (to be added to mean value)
#hint: you can use self.parent.times_visited for N times node was considered,
# and self.times_visited for n times it was visited
U = np.sqrt(2 * np.log(self.parent.times_visited) / self.times_visited) # <your code here>
return self.get_mean_value() + scale * U
#MCTS steps
def select_best_leaf(self):
Picks the leaf with highest priority to expand
Does so by recursively picking nodes with best UCB-1 score until it reaches the leaf.
if self.is_leaf():
return self
children = self.children
#<select best child node in terms of node.ucb_score()>
best_i = np.argmax([child.ucb_score() for child in children])
best_child = list(children)[best_i]
return best_child.select_best_leaf()
def expand(self):
Expands the current node by creating all possible child nodes.
Then returns one of those children.
assert not self.is_done, "can't expand from terminal state"
for action in range(n_actions):
self.children.add(Node(self, action))
return self.select_best_leaf()
def rollout(self, t_max=10**4):
Play the game from this state to the end (done) or for t_max steps.
On each step, pick action at random (hint: env.action_space.sample()).
Compute sum of rewards from current state till
Note 1: use env.action_space.sample() for random action
Note 2: if node is terminal (self.is_done is True), just return 0
#set env into the appropriate state
env.load_snapshot(self.snapshot)
obs = self.observation
is_done = self.is_done
# If node is terminal retur 0
if is_done:
return 0
#<your code here - rollout and compute reward>
rollout_reward = 0
# get outcome (snapshot, observation, reward, is_done, info)
snapshot = self.snapshot
for t in range(t_max):
res = env.get_result(snapshot, env.action_space.sample())
snapshot, observation, reward, is_done, _ = res
rollout_reward += reward
if is_done:
break
return rollout_reward
def propagate(self, child_value):
Uses child value (sum of rewards) to update parents recursively.
#compute node value
my_value = self.immediate_reward + child_value
#update value_sum and times_visited
self.value_sum+=my_value
self.times_visited+=1
#propagate upwards
if not self.is_root():
self.parent.propagate(my_value)
def safe_delete(self):
safe delete to prevent memory leak in some python versions
del self.parent
for child in self.children:
child.safe_delete()
del child
class Root(Node):
def __init__(self,snapshot,observation):
creates special node that acts like tree root
:snapshot: snapshot (from env.get_snapshot) to start planning from
:observation: last environment observation
self.parent = self.action = None
self.children = set() #set of child nodes
#root: load snapshot and observation
self.snapshot = snapshot
self.observation = observation
self.immediate_reward = 0
self.is_done=False
@staticmethod
def from_node(node):
initializes node as root
root = Root(node.snapshot,node.observation)
#copy data
copied_fields = ["value_sum","times_visited","children","is_done"]
for field in copied_fields:
setattr(root,field,getattr(node,field))
return root
Explanation: MCTS: Monte-Carlo tree search
In this section, we'll implement the vanilla MCTS algorithm with UCB1-based node selection.
We will start by implementing the Node class - a simple class that acts like MCTS node and supports some of the MCTS algorithm steps.
This MCTS implementation makes some assumptions about the environment, you can find those in the notes section at the end of the notebook.
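For reference, the selection rule implemented in ucb_score above is the standard UCB1 bound: the node's mean value plus scale * sqrt(2 * ln(parent visits) / node visits), with unvisited nodes treated as having effectively infinite priority so they are explored first.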
End of explanation
def plan_mcts(root, n_iters=10):
builds tree with monte-carlo tree search for n_iters iterations
:param root: tree node to plan from
:param n_iters: how many select-expand-simulate-propagete loops to make
for _ in range(n_iters):
node = root.select_best_leaf() #<select best leaf>
if node.is_done:
node.propagate(0)
else: #node is not terminal
#<expand-simulate-propagate loop>
node.expand()
reward = node.rollout()
node.propagate(reward)
Explanation: Main MCTS loop
With all we implemented, MCTS boils down to a trivial piece of code.
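Each iteration below runs the classic four phases: select the most promising leaf via UCB1, expand it, perform a random rollout from the new node, and propagate the rollout return back up the tree (terminal leaves simply propagate 0).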
End of explanation
env = WithSnapshots(gym.make("CartPole-v0"))
root_observation = env.reset()
root_snapshot = env.get_snapshot()
root = Root(root_snapshot, root_observation)
#plan from root:
plan_mcts(root,n_iters=1000)
from IPython.display import clear_output
from itertools import count
from gym.wrappers import Monitor
total_reward = 0 #sum of rewards
test_env = loads(root_snapshot) #env used to show progress
for i in count():
#get best child
#<select child with highest mean reward>
best_i = np.argmax([child.get_mean_value() for child in root.children])
best_child = list(root.children)[best_i]
#take action
s,r,done,_ = test_env.step(best_child.action)
#show image
clear_output(True)
plt.title("step %i"%i)
plt.imshow(test_env.render('rgb_array'))
plt.show()
total_reward += r
if done:
print("Finished with reward = ",total_reward)
break
#discard unrealized part of the tree [because not every child matters :(]
for child in root.children:
if child != best_child:
child.safe_delete()
#declare best child a new root
root = Root.from_node(best_child)
assert not root.is_leaf(), "We ran out of tree! Need more planning! Try growing tree right inside the loop."
#you may want to expand tree here
#<your code here>
plan_mcts(root,n_iters=100)
Explanation: Plan and execute
In this section, we use the MCTS implementation to find optimal policy.
End of explanation
from submit import submit_mcts
submit_mcts(total_reward, "[email protected]", "pfV2eFsrnmEfih1c")
Explanation: Submit to Coursera
End of explanation |
2,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example
Step1: First we build a model representing the system of equations.
Step2: Generate mock data.
Step3: Perform the fit. Let's pretend that for experimental reasons, we can only measure the concentration for MM and F, but not for the intermediate FMM nor the product FMMF. This is no problem, as we can tell symfit to ignore those components by setting the data for them to None. | Python Code:
from symfit import (
variables, parameters, ODEModel, D, Fit
)
from symfit.core.support import key2str
import numpy as np
import matplotlib.pyplot as plt
Explanation: Example: Multiple species Reaction Kinetics using ODEModel
In this example we shall fit to a complex system of ODEs, based on that published by Polgar et al. However, we shall be generating some mock data instead of using the real deal.
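Concretely, the four coupled ODEs below describe a two-step association process: free F binds the species MM to form the intermediate FMM, and a second F converts FMM into the final product FMMF; the four k parameters are the forward and reverse rate constants of those two steps.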
End of explanation
t, F, MM, FMM, FMMF = variables('t, F, MM, FMM, FMMF')
k1_f, k1_r, k2_f, k2_r = parameters('k1_f, k1_r, k2_f, k2_r')
MM_0 = 10 # Some made up initial amount of [FF]
model_dict = {
D(F, t): - k1_f * MM * F + k1_r * FMM - k2_f * FMM * F + k2_r * FMMF,
D(FMM, t): k1_f * MM * F - k1_r * FMM - k2_r * FMM * F + k2_f * FMMF,
D(FMMF, t): k2_f * FMM * F - k2_r * FMMF,
D(MM, t): - k1_f * MM * F + k1_r * FMM,
}
model = ODEModel(
model_dict,
initial={t: 0.0, MM: MM_0, FMM: 0.0, FMMF: 0.0, F: 2 * MM_0}
)
print(model)
Explanation: First we build a model representing the system of equations.
End of explanation
tdata = np.linspace(0, 3, 20)
data = model(t=tdata, k1_f=0.1, k1_r=0.2, k2_f=0.3, k2_r=0.3)._asdict()
sigma_data = 0.3
np.random.seed(42)
for var in data:
data[var] += np.random.normal(0, sigma_data, size=len(tdata))
plt.scatter(tdata, data[MM], label='[MM]')
plt.scatter(tdata, data[FMM], label='[FMM]')
plt.scatter(tdata, data[FMMF], label='[FMMF]')
plt.scatter(tdata, data[F], label='[F]')
plt.xlabel(r'$t$')
plt.ylabel(r'$C$')
plt.legend()
plt.show()
Explanation: Generate mock data.
End of explanation
k1_f.min, k1_f.max = 0, 1
k1_r.min, k1_r.max = 0, 1
k2_f.min, k2_f.max = 0, 1
k2_r.min, k2_r.max = 0, 1
fit = Fit(model, t=tdata, MM=data[MM], F=data[F],
FMMF=None, FMM=None,
sigma_F=sigma_data, sigma_MM=sigma_data)
fit_result = fit.execute()
print(fit_result)
taxis = np.linspace(tdata.min(), tdata.max(), 1000)
model_fit = model(t=taxis, **fit_result.params)._asdict()
for var in data:
plt.scatter(tdata, data[var], label='[{}]'.format(var.name))
plt.plot(taxis, model_fit[var], label='[{}]'.format(var.name))
plt.legend()
plt.show()
Explanation: Perform the fit. Let's pretend that for experimental reasons, we can only measure the concentration for MM and F, but not for the intermediate FMM nor the product FMMF. This is no problem, as we can tell symfit to ignore those components by setting the data for them to None.
End of explanation |
2,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Input
To do any computation, you need to have data. Getting the data in the framework of a workflow is therefore the first step of every analysis. Nipype provides many different modules to grab or select the data
Step1: Second, we know that the two files we desire are the the following location
Step2: Now, comes the most important part of DataGrabber. We need to specify the template structure to find the specific data. This can be done as follows.
Step3: You'll notice that we use %s, %02d and * for placeholders in the data paths. %s is a placeholder for a string and is filled out by task_name or ses_name. %02d is a placeholder for a integer number and is filled out by subject_id. * is used as a wild card, e.g. a placeholder for any possible string combination. This is all to set up the DataGrabber node.
Now it is up to you how you want to feed the dynamic parameters into the node. You can either do this by using another node (e.g. IdentityInterface) and feed subject_id, ses_name and task_name as connections to the DataGrabber node or specify them directly as node inputs.
Step4: Now you only have to connect infosource with your DataGrabber and run the workflow to iterate over subjects 1 and 2.
You can also provide the inputs to the DataGrabber node directly, for one subject you can do this as follows
Step5: Now let's run the DataGrabber node and let's look at the output
Step6: SelectFiles
SelectFiles is a more flexible alternative to DataGrabber. It uses the {}-based string formating syntax to plug values into string templates and collect the data. These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become inputs fields on the interface, and the keys in the templates dictionary will form the output fields.
Let's focus again on the data we want to import
Step7: Let's check if we get what we wanted.
Step8: Perfect! But why is SelectFiles more flexible than DataGrabber? First, you perhaps noticed that with the {}-based string, we can reuse the same input (e.g. subject_id) multiple time in the same string, without feeding it multiple times into the template.
Additionally, you can also select multiple files without the need of an iterable node. For example, let's assume we want to select both anatomical images ('sub-01' and 'sub-02') at once. We can do this by using the following file template
Step9: As you can see, now anat contains two file paths, one for the first and one for the second subject. As a side node, you could have also gotten them same thing with the wild card *
Step10: Now, before you can run FreeSurferSource, you first have to specify the path to the FreeSurfer output folder, i.e. you have to specify the SUBJECTS_DIR variable. This can be done as follows
Step11: To create the FreeSurferSource node, do as follows
Step12: Let's now run it for a specific subject.
Step13: Did it work? Let's try to access multiple FreeSurfer outputs
Step14: It seems to be working as it should. But as you can see, the inflated output actually contains the file location for both hemispheres. With FreeSurferSource we can also restrict the file selection to a single hemisphere. To do this, we use the hemi input filed
Step15: Let's take a look again at the inflated output. | Python Code:
from nipype import DataGrabber, Node
# Create DataGrabber node
dg = Node(DataGrabber(infields=['subject_id', 'ses_name', 'task_name'],
outfields=['anat', 'func']),
name='datagrabber')
# Location of the dataset folder
dg.inputs.base_directory = '/data/ds000114'
# Necessary default parameters
dg.inputs.template = '*'
dg.inputs.sort_filelist = True
Explanation: Data Input
To do any computation, you need to have data. Getting the data in the framework of a workflow is therefore the first step of every analysis. Nipype provides many different modules to grab or select the data:
DataFinder
DataGrabber
FreeSurferSource
JSONFileGrabber
S3DataGrabber
SSHDataGrabber
SelectFiles
XNATSource
This tutorial will only cover some of them. For the rest, see the section interfaces.io on the official homepage.
Dataset structure
To be able to import data, you first need to be aware about the structure of your dataset. The structure of the dataset for this tutorial is according to BIDS, and looks as follows:
ds000114
โโโ CHANGES
โโโ dataset_description.json
โโโ derivatives
โย ย โโโ fmriprep
โย ย โย ย โโโ sub01...sub10
โย ย โย ย โโโ ...
โย ย โโโ freesurfer
โย ย โโโ fsaverage
โย ย โโโ fsaverage5
โย ย โย ย โโโ sub01...sub10
โย ย โย ย โโโ ...
โโโ dwi.bval
โโโ dwi.bvec
โโโ sub-01
โย ย โโโ ses-retest
โย ย โโโ anat
โย ย โย ย โโโ sub-01_ses-retest_T1w.nii.gz
โย ย โโโfunc
โย ย โโโ sub-01_ses-retest_task-covertverbgeneration_bold.nii.gz
โย ย โโโ sub-01_ses-retest_task-fingerfootlips_bold.nii.gz
โย ย โโโ sub-01_ses-retest_task-linebisection_bold.nii.gz
โย ย โโโ sub-01_ses-retest_task-linebisection_events.tsv
โย ย โโโ sub-01_ses-retest_task-overtverbgeneration_bold.nii.gz
โย ย โโโ sub-01_ses-retest_task-overtwordrepetition_bold.nii.gz
โ โโโ dwi
โ โโโ sub-01_ses-retest_dwi.nii.gz
โย ย โโโ ses-test
โย ย โโโ anat
โย ย โย ย โโโ sub-01_ses-test_T1w.nii.gz
โย ย โโโfunc
โย ย โโโ sub-01_ses-test_task-covertverbgeneration_bold.nii.gz
โย ย โโโ sub-01_ses-test_task-fingerfootlips_bold.nii.gz
โย ย โโโ sub-01_ses-test_task-linebisection_bold.nii.gz
โย ย โโโ sub-01_ses-test_task-linebisection_events.tsv
โย ย โโโ sub-01_ses-test_task-overtverbgeneration_bold.nii.gz
โย ย โโโ sub-01_ses-test_task-overtwordrepetition_bold.nii.gz
โ โโโ dwi
โ โโโ sub-01_ses-retest_dwi.nii.gz
โโโ sub-02..sub-10
โย ย โโโ ...
โโโ task-covertverbgeneration_bold.json
โโโ task-covertverbgeneration_events.tsv
โโโ task-fingerfootlips_bold.json
โโโ task-fingerfootlips_events.tsv
โโโ task-linebisection_bold.json
โโโ task-overtverbgeneration_bold.json
โโโ task-overtverbgeneration_events.tsv
โโโ task-overtwordrepetition_bold.json
โโโ task-overtwordrepetition_events.tsv
DataGrabber
DataGrabber is a generic data grabber module that wraps around glob to select your neuroimaging data in an intelligent way. As an example, let's assume we want to grab the anatomical and functional images of a certain subject.
First, we need to create the DataGrabber node. This node needs to have some input fields for all dynamic parameters (e.g. subject identifier, task identifier), as well as the two desired output fields anat and func.
End of explanation
dg.inputs.template_args = {'anat': [['subject_id', 'ses_name']],
'func': [['subject_id', 'ses_name', 'task_name']]}
Explanation: Second, we know that the two files we desire are the the following location:
anat = /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz
func = /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz
We see that the two files only have three dynamic parameters between subjects and task names:
subject_id: in this case 'sub-01'
task_name: in this case fingerfootlips
ses_name: test
This means that we can rewrite the paths as follows:
anat = /data/ds102/[subject_id]/ses-[ses_name]/anat/sub-[subject_id]_ses-[ses_name]_T1w.nii.gz
func = /data/ds102/[subject_id]/ses-[ses_name]/func/sub-[subject_id]_ses-[ses_name]_task-[task_name]_bold.nii.gz
Therefore, we need the parameters subject_id and ses_name for the anatomical image and the parameters subject_id, ses_name and task_name for the functional image. In the context of DataGrabber, this is specified as follows:
End of explanation
dg.inputs.field_template = {'anat': 'sub-%02d/ses-%s/anat/*_T1w.nii.gz',
'func': 'sub-%02d/ses-%s/func/*task-%s_bold.nii.gz'}
Explanation: Now, comes the most important part of DataGrabber. We need to specify the template structure to find the specific data. This can be done as follows.
End of explanation
# Using the IdentityInterface
from nipype import IdentityInterface
infosource = Node(IdentityInterface(fields=['subject_id', 'ses_name', 'task_name']),
name="infosource")
infosource.inputs.task_name = "fingerfootlips"
infosource.inputs.ses_name = "test"
subject_id_list = [1, 2]
infosource.iterables = [('subject_id', subject_id_list)]
Explanation: You'll notice that we use %s, %02d and * for placeholders in the data paths. %s is a placeholder for a string and is filled out by task_name or ses_name. %02d is a placeholder for a integer number and is filled out by subject_id. * is used as a wild card, e.g. a placeholder for any possible string combination. This is all to set up the DataGrabber node.
Now it is up to you how you want to feed the dynamic parameters into the node. You can either do this by using another node (e.g. IdentityInterface) and feed subject_id, ses_name and task_name as connections to the DataGrabber node or specify them directly as node inputs.
End of explanation
# Specifying the input fields of DataGrabber directly
dg.inputs.subject_id = 1
dg.inputs.ses_name = "test"
dg.inputs.task_name = "fingerfootlips"
Explanation: Now you only have to connect infosource with your DataGrabber and run the workflow to iterate over subjects 1 and 2.
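A minimal sketch of that wiring (assuming a Workflow object named wf; shown for illustration only and not executed here):
# hypothetical wiring of infosource into the DataGrabber node
from nipype import Workflow
wf = Workflow(name='datagrabber_example')
wf.connect(infosource, 'subject_id', dg, 'subject_id')
wf.connect(infosource, 'ses_name', dg, 'ses_name')
wf.connect(infosource, 'task_name', dg, 'task_name')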
You can also provide the inputs to the DataGrabber node directly, for one subject you can do this as follows:
End of explanation
dg.run().outputs
Explanation: Now let's run the DataGrabber node and let's look at the output:
End of explanation
from nipype import SelectFiles, Node
# String template with {}-based strings
templates = {'anat': 'sub-{subject_id}/ses-{ses_name}/anat/sub-{subject_id}_ses-{ses_name}_T1w.nii.gz',
'func': 'sub-{subject_id}/ses-{ses_name}/func/sub-{subject_id}_ses-{ses_name}_task-{task_name}_bold.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates),
name='selectfiles')
# Location of the dataset folder
sf.inputs.base_directory = '/data/ds000114'
# Feed {}-based placeholder strings with values
sf.inputs.subject_id = '01'
sf.inputs.ses_name = "test"
sf.inputs.task_name = 'fingerfootlips'
Explanation: SelectFiles
SelectFiles is a more flexible alternative to DataGrabber. It uses the {}-based string formating syntax to plug values into string templates and collect the data. These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become inputs fields on the interface, and the keys in the templates dictionary will form the output fields.
Let's focus again on the data we want to import:
anat = /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz
func = /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz
Now, we can replace those paths with the accoridng {}-based strings.
anat = /data/ds000114/sub-{subject_id}/ses-{ses_name}/anat/sub-{subject_id}_ses-{ses_name}_T1w.nii.gz
func = /data/ds000114/sub-{subject_id}/ses-{ses_name}/func/ \
sub-{subject_id}_ses-{ses_name}_task-{task_name}_bold.nii.gz
How would this look like as a SelectFiles node?
End of explanation
sf.run().outputs
Explanation: Let's check if we get what we wanted.
End of explanation
from nipype import SelectFiles, Node
from os.path import abspath as opap
# String template with {}-based strings
templates = {'anat': 'sub-0[1,2]/ses-{ses_name}/anat/sub-0[1,2]_ses-{ses_name}_T1w.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates),
name='selectfiles')
# Location of the dataset folder
sf.inputs.base_directory = '/data/ds000114'
# Feed {}-based placeholder strings with values
sf.inputs.ses_name = 'test'
# Print SelectFiles output
sf.run().outputs
Explanation: Perfect! But why is SelectFiles more flexible than DataGrabber? First, you perhaps noticed that with the {}-based string, we can reuse the same input (e.g. subject_id) multiple time in the same string, without feeding it multiple times into the template.
Additionally, you can also select multiple files without the need of an iterable node. For example, let's assume we want to select both anatomical images ('sub-01' and 'sub-02') at once. We can do this by using the following file template:
'sub-0[1,2]/anat/sub-0[1,2]_T1w.nii.gz'
Let's see how this works:
End of explanation
!datalad get -r -J4 /data/ds000114/derivatives/freesurfer/sub-01/
Explanation: As you can see, now anat contains two file paths, one for the first and one for the second subject. As a side node, you could have also gotten them same thing with the wild card *:
'sub-0*/ses-test/anat/sub-0*_ses-test_T1w.nii.gz'
FreeSurferSource
FreeSurferSource is a specific case of a file grabber that felicitates the data import of outputs from the FreeSurfer recon-all algorithm. This of course requires that you've already run recon-all on your subject.
For the tutorial dataset ds000114, recon-all was already run. So, let's make sure that you have the anatomy output of one subject on your system:
End of explanation
from nipype.interfaces.freesurfer import FSCommand
from os.path import abspath as opap
# Path to your freesurfer output folder
fs_dir = opap('/data/ds000114/derivatives/freesurfer/')
# Set SUBJECTS_DIR
FSCommand.set_default_subjects_dir(fs_dir)
Explanation: Now, before you can run FreeSurferSource, you first have to specify the path to the FreeSurfer output folder, i.e. you have to specify the SUBJECTS_DIR variable. This can be done as follows:
End of explanation
from nipype import Node
from nipype.interfaces.io import FreeSurferSource
# Create FreeSurferSource node
fssource = Node(FreeSurferSource(subjects_dir=fs_dir),
name='fssource')
Explanation: To create the FreeSurferSource node, do as follows:
End of explanation
fssource.inputs.subject_id = 'sub-01'
result = fssource.run()
Explanation: Let's now run it for a specific subject.
End of explanation
print('aparc_aseg: %s\n' % result.outputs.aparc_aseg)
print('brainmask: %s\n' % result.outputs.brainmask)
print('inflated: %s\n' % result.outputs.inflated)
Explanation: Did it work? Let's try to access multiple FreeSurfer outputs:
End of explanation
fssource.inputs.hemi = 'lh'
result = fssource.run()
Explanation: It seems to be working as it should. But as you can see, the inflated output actually contains the file location for both hemispheres. With FreeSurferSource we can also restrict the file selection to a single hemisphere. To do this, we use the hemi input filed:
End of explanation
result.outputs.inflated
Explanation: Let's take a look again at the inflated output.
End of explanation |
2,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objectives
Show pulse shaping (rect and raised-cosine) for random data
Spectra are determined based on the theoretical pulse shape as well as for the random signals when applying estimation
Import
Step1: Function for determining the impulse response of an RC filter
Step2: Parameters
Step3: Signals and their spectra
Step4: Real data-modulated Tx-signal
Step5: Plotting | Python Code:
# importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 16}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 8) )
Explanation: Content and Objectives
Show pulse shaping (rect and raised-cosine) for random data
Spectra are determined based on the theoretical pulse shape as well as for the random signals when applying estimation
Import
End of explanation
########################
# find impulse response of an RC filter
########################
def get_rc_ir(K, n_up, t_symbol, beta):
'''
Determines coefficients of an RC filter
Formula out of: K.-D. Kammeyer, Nachrichtenübertragung
At poles, l'Hospital was used
NOTE: Length of the IR has to be an odd number
IN: length of IR, upsampling factor, symbol time, roll-off factor
OUT: filter coefficients
'''
# check that IR length is odd
assert K % 2 == 1, 'Length of the impulse response should be an odd number'
# map zero r to close-to-zero
if beta == 0:
beta = 1e-32
# initialize output length and sample time
rc = np.zeros( K )
t_sample = t_symbol / n_up
# time indices and sampled time
k_steps = np.arange( -(K-1) / 2.0, (K-1) / 2.0 + 1 )
t_steps = k_steps * t_sample
for k in k_steps.astype(int):
if t_steps[k] == 0:
rc[ k ] = 1. / t_symbol
elif np.abs( t_steps[k] ) == t_symbol / ( 2.0 * beta ):
rc[ k ] = beta / ( 2.0 * t_symbol ) * np.sin( np.pi / ( 2.0 * beta ) )
else:
rc[ k ] = np.sin( np.pi * t_steps[k] / t_symbol ) / np.pi / t_steps[k] \
* np.cos( beta * np.pi * t_steps[k] / t_symbol ) \
/ ( 1.0 - ( 2.0 * beta * t_steps[k] / t_symbol )**2 )
return rc
Explanation: Function for determining the impulse response of an RC filter
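A quick sanity check of this helper (illustrative only, not part of the original notebook, using the same parameter values as in the next cell): the impulse response should be symmetric around t = 0, peak at the center sample, and vanish at non-zero integer multiples of the symbol time (Nyquist criterion).
_rc_check = get_rc_ir(K=65, n_up=8, t_symbol=1.0, beta=0.33)
print(np.allclose(_rc_check, _rc_check[::-1]))            # even symmetry
print(np.argmax(_rc_check) == (len(_rc_check) - 1) // 2)  # peak at t = 0
print(np.allclose(_rc_check[(len(_rc_check) - 1) // 2 + 8::8], 0, atol=1e-12))  # zeros at t = m*T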
End of explanation
# modulation scheme and constellation points
M = 2
constellation_points = [ 0, 1 ]
# symbol time and number of symbols
t_symb = 1.0
n_symb = 100
# parameters of the RRC filter
beta = .33
n_up = 8 # samples per symbol
syms_per_filt = 4 # symbols per filter (plus minus in both directions)
K_filt = 2 * syms_per_filt * n_up + 1 # length of the fir filter
# parameters for frequency regime
N_fft = 512
Omega = np.linspace( -np.pi, np.pi, N_fft)
f_vec = Omega / ( 2 * np.pi * t_symb / n_up )
Explanation: Parameters
End of explanation
# get RC pulse and rectangular pulse,
# both being normalized to energy 1
rc = get_rc_ir( K_filt, n_up, t_symb, beta )
rc /= np.linalg.norm( rc )
rect = np.append( np.ones( n_up ), np.zeros( len( rc ) - n_up ) )
rect /= np.linalg.norm( rect )
# get pulse spectra
RC_PSD = np.abs( np.fft.fftshift( np.fft.fft( rc, N_fft ) ) )**2
RC_PSD /= n_up
RECT_PSD = np.abs( np.fft.fftshift( np.fft.fft( rect, N_fft ) ) )**2
RECT_PSD /= n_up
Explanation: Signals and their spectra
End of explanation
# number of realizations along which to average the psd estimate
n_real = 10
# initialize two-dimensional field for collecting several realizations along which to average
S_rc = np.zeros( (n_real, N_fft ), dtype=complex )
S_rect = np.zeros( (n_real, N_fft ), dtype=complex )
# loop for multiple realizations in order to improve spectral estimation
for k in range( n_real ):
# generate random binary vector and
# modulate the specified modulation scheme
data = np.random.randint( M, size = n_symb )
s = [ constellation_points[ d ] for d in data ]
# apply RC filtering/pulse-shaping
s_up_rc = np.zeros( n_symb * n_up )
s_up_rc[ : : n_up ] = s
s_rc = np.convolve( rc, s_up_rc)
# apply RECTANGULAR filtering/pulse-shaping
s_up_rect = np.zeros( n_symb * n_up )
s_up_rect[ : : n_up ] = s
s_rect = np.convolve( rect, s_up_rect)
# get spectrum using Bartlett method
S_rc[k, :] = np.fft.fftshift( np.fft.fft( s_rc, N_fft ) )
S_rect[k, :] = np.fft.fftshift( np.fft.fft( s_rect, N_fft ) )
# average along realizations
RC_PSD_sim = np.average( np.abs( S_rc )**2, axis=0 )
RC_PSD_sim /= np.max( RC_PSD_sim )
RECT_PSD_sim = np.average( np.abs( S_rect )**2, axis=0 )
RECT_PSD_sim /= np.max( RECT_PSD_sim )
Explanation: Real data-modulated Tx-signal
End of explanation
plt.subplot(221)
plt.plot( np.arange( np.size( rc ) ) * t_symb / n_up, rc, linewidth=2.0, label='RC' )
plt.plot( np.arange( np.size( rect ) ) * t_symb / n_up, rect, linewidth=2.0, label='Rect' )
plt.ylim( (-.1, 1.1 ) )
plt.grid( True )
plt.legend( loc='upper right' )
#plt.title( '$g(t), s(t)$' )
plt.ylabel('$g(t)$')
plt.subplot(222)
np.seterr(divide='ignore') # ignore warning for logarithm of 0
plt.plot( f_vec, 10*np.log10( RC_PSD ), linewidth=2.0, label='RC theory' )
plt.plot( f_vec, 10*np.log10( RECT_PSD ), linewidth=2.0, label='Rect theory' )
np.seterr(divide='warn') # enable warning for logarithm of 0
plt.grid( True )
plt.legend( loc='upper right' )
plt.ylabel( '$|G(f)|^2$' )
plt.ylim( (-60, 10 ) )
plt.subplot(223)
plt.plot( np.arange( np.size( s_rc[:20*n_up])) * t_symb / n_up, s_rc[:20*n_up], linewidth=2.0, label='RC' )
plt.plot( np.arange( np.size( s_rect[:20*n_up])) * t_symb / n_up, s_rect[:20*n_up], linewidth=2.0, label='Rect' )
#plt.plot( np.arange( np.size( s_up_rc[:20*n_up])) * t_symb / n_up, s_up_rc[:20*n_up], 'o', linewidth=2.0, label='Syms' )
plt.ylim( (-0.1, 1.1 ) )
plt.grid(True)
plt.legend(loc='upper right')
plt.xlabel('$t/T$')
plt.ylabel('$s(t)$')
plt.subplot(224)
np.seterr(divide='ignore') # ignore warning for logarithm of 0
plt.plot( f_vec, 10*np.log10( RC_PSD_sim ), linewidth=2.0, label='RC' )
plt.plot( f_vec, 10*np.log10( RECT_PSD_sim ), linewidth=2.0, label='Rect' )
np.seterr(divide='warn') # enable warning for logarithm of 0
plt.grid(True);
plt.xlabel('$fT$');
plt.ylabel( '$|S(f)|^2$' )
plt.legend(loc='upper right')
plt.ylim( (-60, 10 ) )
plt.savefig('rect_pulse_shape.pdf',bbox_inches='tight')
Explanation: Plotting
End of explanation |
2,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unit Tests
Overview and Principles
Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test.
There are two parts to writing tests.
1. invoking the code under test so that it is exercised in a particular way;
1. evaluating the results of executing code under test to determine if it behaved as expected.
The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.
For dynamic languages such as Python, it's extremely important to have high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed.
Test cases can be of several types. Below are listed some common classifications of test cases.
- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs.
- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.
Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.
A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.
Examples of Test Cases
This section presents examples of test cases. The code under test is the calculation of entropy.
Entropy of a set of probabilities
$$
H = -\sum_i p_i \log(p_i)
$$
where $\sum_i p_i = 1$.
Step1: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!
Step2: Question
Step3: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
$$
H = -\sum_{i=1}^{n} p_i \log(p_i)
= -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n})
= n (-\frac{1}{n} \log(\frac{1}{n}) )
= -\log(\frac{1}{n})
$$
For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
Step4: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Unittest Infrastructure
There are several reasons to use a test infrastructure
Step7: Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedidogical reasons. It is NOT not something you should do in practice, except as an intermediate exploratory approach.
As expected, the first test passes, but the second test fails.
Exercise
Rewrite the above one-shot test for entropy using the unittest infrastructure.
Step8: Testing For Exceptions
Edge test cases often involve handling exceptions. One approach is to code this directly.
Step9: unittest provides help with testing exceptions.
Step10: Test Files
Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py.
The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.
Discussion
Question | Python Code:
import numpy as np
# Code Under Test
def entropy(ps):
if any([(p < 0.0) or (p > 1.0) for p in ps]):
raise ValueError("Bad input.")
if sum(ps) > 1:
raise ValueError("Bad input.")
items = ps * np.log(ps)
new_items = []
for item in items:
if np.isnan(item):
new_items.append(0)
else:
new_items.append(item)
return np.abs(-np.sum(new_items))
# Smoke test
def smoke_test(ps):
try:
entropy(ps)
return True
except:
return False
smoke_test([0.5, 0.5])
# One shot test
0.0 == entropy([1, 0, 0, 0])
# Edge tests
def edge_test(ps):
try:
entropy(ps)
except ValueError:
return True
return False
edge_test([-1, 2])
Explanation: Unit Tests
Overview and Principles
Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test.
There are two parts to writing tests.
1. invoking the code under test so that it is exercised in a particular way;
1. evaluating the results of executing code under test to determine if it behaved as expected.
The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.
For dynamic languages such as Python, it's extremely important to have high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed.
Test cases can be of several types. Below are listed some common classifications of test cases.
- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs.
- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.
Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.
A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.
Examples of Test Cases
This section presents examples of test cases. The code under test is the calculation of entropy.
Entropy of a set of probabilities
$$
H = -\sum_i p_i \log(p_i)
$$
where $\sum_i p_i = 1$.
End of explanation
# One-shot test. Need to know the correct answer.
entries = [
[0, [1]],
]
for entry in entries:
ans = entry[0]
prob = entry[1]
if not np.isclose(entropy(prob), ans):
print("Test failed!")
print ("Test completed!")
Explanation: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!
End of explanation
# Edge test. This is something that should cause an exception.
#entropy([-0.5])
Explanation: Question: What is an example of another one-shot test? (Hint: You need to know the expected result.)
One edge test of interest is to provide an input that is not a distribution in that probabilities don't sum to 1.
End of explanation
# Pattern test
def test_equal_probabilities(n):
prob = 1.0/n
ps = np.repeat(prob , n)
if np.isclose(entropy(ps), -np.log(prob)):
print("Worked!")
else:
import pdb; pdb.set_trace()
print ("Bad result.")
# Run a test
test_equal_probabilities(100000)
Explanation: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
$$
H = -\sum_{i=1}^{n} p_i \log(p_i)
= -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n})
= n (-\frac{1}{n} \log(\frac{1}{n}) )
= -\log(\frac{1}{n})
$$
For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
End of explanation
import unittest
# Define a class in which the tests will run
class UnitTests(unittest.TestCase):
# Each method in the class to execute a test
def test_success(self):
self.assertEqual(1, 2)
def test_success1(self):
self.assertTrue(1 == 1)
def test_failure(self):
self.assertLess(1, 2)
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)
# Function the handles test loading
#def test_setup(argument ?):
Explanation: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Unittest Infrastructure
There are several reasons to use a test infrastructure:
- If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code.
- The infrastructure provides a uniform way to report test results, and to handle test failures.
- A test infrastructure can tell you about coverage so you know what tests to add.
We'll be using the unittest framework. This is part of the Python standard library. Using this infrastructure requires the following:
1. import the unittest module
1. define a class that inherits from unittest.TestCase
1. write methods that run the code to be tested and check the outcomes.
The last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with "test".
Second, the "test methods" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails.
End of explanation
# Implementing a pattern test. Use functions in the test.
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_equal_probability(self):
def test(count):
            '''
            Invokes the entropy function for a number of values equal to count
            that have the same probability.
            :param int count:
            '''
raise RuntimeError ("Not implemented.")
#
test(2)
test(20)
test(200)
#test_setup(TestEntropy)
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
    '''Write the full set of tests.'''
Explanation: Code for homework or your work should use test files. In this lesson, we'll show how to write test code in a Jupyter notebook. This is done for pedagogical reasons. It is NOT something you should do in practice, except as an intermediate exploratory approach.
As expected, the first test passes, but the second test fails.
Exercise
Rewrite the above one-shot test for entropy using the unittest infrastructure.
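For reference, one possible solution sketch (reusing the entropy function and the runner pattern from above):
import unittest
class TestEntropyOneShot(unittest.TestCase):
    def test_single_outcome(self):
        # all of the probability mass on one outcome -> entropy of 0
        self.assertTrue(np.isclose(entropy([1, 0, 0, 0]), 0.0))
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropyOneShot)
_ = unittest.TextTestRunner().run(suite)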
End of explanation
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
try:
entropy([0.1, 0.5])
self.assertTrue(False)
except ValueError:
self.assertTrue(True)
#test_setup(TestEntropy)
Explanation: Testing For Exceptions
Edge test cases often involve handling exceptions. One approach is to code this directly.
End of explanation
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
with self.assertRaises(ValueError):
entropy([0.1, 0.5])
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
Explanation: unittest provides help with testing exceptions.
End of explanation
import unittest
# Define a class in which the tests will run
class TestGeomean(unittest.TestCase):
def test_oneshot(self):
self.assertEqual(geomean([1,1]), 1)
def test_oneshot2(self):
self.assertEqual(geomean([3, 3, 3]), 3)
#test_setup(TestGeomean)
#def geomean(argument?):
# return ?
Explanation: Test Files
Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py.
The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.
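A minimal skeleton for such a file might look like the following (the module and function names here are placeholders, not the actual contents of test_prime.py):
# test_foo.py
import unittest
import foo  # the module with the code under test
class TestFoo(unittest.TestCase):
    def test_something(self):
        self.assertEqual(foo.something(2), 4)
if __name__ == '__main__':
    unittest.main()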
Discussion
Question: What tests would you write for a plotting function?
Test Driven Development
Start by writing the tests. Then write the code.
We illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output.
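One implementation sketch that these tests would drive toward (assuming numpy is available as np; note that assertEqual can be too strict for floating-point results, so assertAlmostEqual is usually the safer choice):
def geomean(values):
    # geometric mean: the n-th root of the product of the n values
    return np.prod(values) ** (1.0 / len(values))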
End of explanation |
2,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flowers Image Classification with TensorFlow on Cloud ML Engine TPU
This notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API. Unlike flowers_fromscratch.ipynb, here we do it on a TPU.
Therefore, this will work only if you have quota for TPUs (not in Qwiklabs). It will cost about $3 if you want to try it out.
Step1: After doing a pip install, click on Reset Session so that the Python environment picks up the new package
Step2: Preprocess JPEG images to TF Records
While using a GPU, it is okay to read the JPEGS directly from our input_fn. However, TPUs are too fast and it will be very wasteful to have the TPUs wait on I/O. Therefore, we'll preprocess the JPEGs into TF Records.
This runs on Cloud Dataflow and will take <b> 15-20 minutes </b>
Step3: Run as a Python module
First run locally without --use_tpu -- don't be concerned if the process gets killed for using too much memory.
Step4: Then, run it on Cloud ML Engine with --use_tpu
Step5: Monitoring training with TensorBoard
Use this cell to launch tensorboard
Step6: Deploying and predicting with model
Deploy the model
Step7: To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src="http
Step8: The online prediction service expects images to be base64 encoded as described here.
Step9: Send it to the prediction service | Python Code:
%%bash
pip install apache-beam[gcp]
Explanation: Flowers Image Classification with TensorFlow on Cloud ML Engine TPU
This notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API. Unlike flowers_fromscratch.ipynb, here we do it on a TPU.
Therefore, this will work only if you have quota for TPUs (not in Qwiklabs). It will cost about $3 if you want to try it out.
End of explanation
import os
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = 'tpu'
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['MODEL_TYPE'] = MODEL_TYPE
os.environ['TFVERSION'] = '1.8' # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: After doing a pip install, click on Reset Session so that the Python environment picks up the new package
End of explanation
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
%%bash
gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l
gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l
%%bash
export PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu
gsutil -m rm -rf gs://${BUCKET}/tpu/flowers/data
python -m trainer.preprocess \
--train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \
--validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \
--labels_file /tmp/labels.txt \
--project_id $PROJECT \
--output_dir gs://${BUCKET}/tpu/flowers/data
%%bash
gsutil ls gs://${BUCKET}/tpu/flowers/data/
Explanation: Preprocess JPEG images to TF Records
While using a GPU, it is okay to read the JPEGS directly from our input_fn. However, TPUs are too fast and it will be very wasteful to have the TPUs wait on I/O. Therefore, we'll preprocess the JPEGs into TF Records.
This runs on Cloud Dataflow and will take <b> 15-20 minutes </b>
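As a rough illustration of how such records are consumed later (a sketch only, not the actual flowersmodeltpu code; the feature keys and image size are assumptions):
import tensorflow as tf
def sketch_input_fn(file_pattern, batch_size):
    def _parse(serialized):
        features = tf.parse_single_example(serialized, {
            'image/encoded': tf.FixedLenFeature([], tf.string),
            'image/class/label': tf.FixedLenFeature([], tf.int64),
        })
        image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
        image = tf.image.resize_images(image, [299, 299])
        return image, features['image/class/label']
    files = tf.gfile.Glob(file_pattern)
    # TPUs expect a fixed batch size each step; pre-parsed TF Records keep this pipeline fast
    return tf.data.TFRecordDataset(files).map(_parse).repeat().batch(batch_size)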
End of explanation
%%bash
WITHOUT_TPU="--train_batch_size=2 --train_steps=5"
OUTDIR=./flowers_trained
rm -rf $OUTDIR
export PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu
python -m flowersmodeltpu.task \
--output_dir=$OUTDIR \
--num_train_images=3300 \
--num_eval_images=370 \
$WITHOUT_TPU \
--learning_rate=0.01 \
--project=${PROJECT} \
--train_data_path=gs://${BUCKET}/tpu/flowers/data/train* \
--eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation*
Explanation: Run as a Python module
First run locally without --use_tpu -- don't be concerned if the process gets killed for using too much memory.
End of explanation
%%bash
WITH_TPU="--train_batch_size=256 --train_steps=3000 --batch_norm --use_tpu"
WITHOUT_TPU="--train_batch_size=2 --train_steps=5"
OUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}_delete
JOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=flowersmodeltpu.task \
--package-path=${PWD}/flowersmodeltpu \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--num_train_images=3300 \
--num_eval_images=370 \
$WITH_TPU \
--learning_rate=0.01 \
--project=${PROJECT} \
--train_data_path=gs://${BUCKET}/tpu/flowers/data/train-* \
--eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation-*
%%bash
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)
saved_model_cli show --dir $MODEL_LOCATION --all
Explanation: Then, run it on Cloud ML Engine with --use_tpu
End of explanation
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/flowers/trained_{}'.format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print 'Stopped TensorBoard with pid {}'.format(pid)
Explanation: Monitoring training with TensorBoard
Use this cell to launch tensorboard
End of explanation
%%bash
MODEL_NAME="flowers"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud alpha ml-engine versions create ${MODEL_VERSION} --machine-type mls1-c4-m4 --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: Deploying and predicting with model
Deploy the model:
End of explanation
%%bash
gcloud alpha ml-engine models list
Explanation: To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src="http://storage.googleapis.com/cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg" />
End of explanation
%%bash
IMAGE_URL=gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg
# Copy the image to local disk.
gsutil cp $IMAGE_URL flower.jpg
# Base64 encode and create request message in json format.
python -c 'import base64, sys, json; img = base64.b64encode(open("flower.jpg", "rb").read()).decode(); print(json.dumps({"image_bytes":{"b64": img}}))' &> request.json
Explanation: The online prediction service expects images to be base64 encoded as described here.
End of explanation
%%bash
gcloud ml-engine predict \
--model=flowers2 \
--version=${MODEL_TYPE} \
--json-instances=./request.json
Explanation: Send it to the prediction service
End of explanation |
2,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - Graph problems
Discover graphs with some not-too-complicated problems: connected components, shortest paths, and more.
Step1: A graph | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - Graph problems
Discover graphs with some not-too-complicated problems: connected components, shortest paths, and more.
End of explanation
# tutoriel_graphe
noeuds = {0: 'le', 1: 'silences', 2: 'quelques', 3: '\xe9crit', 4: 'non-dits.', 5: 'Et', 6: 'risque', 7: '\xe0', 8: "qu'elle,", 9: 'parfois', 10: 'aim\xe9', 11: 'lorsque', 12: 'que', 13: 'plus', 14: 'les', 15: 'Minelli,', 16: "n'oublierai", 17: 'je', 18: 'prises', 19: 'sa', 20: 'la', 21: 'jeune,', 22: "qu'elle,", 23: '\xe0', 24: 'ont', 25: "j'ai", 26: 'chemin', 27: '\xe9tranger', 28: 'lente', 29: 'de', 30: 'voir', 31: 'quand', 32: 'la', 33: 'recul,', 34: 'de', 35: 'trop', 36: 'ce', 37: 'Je', 38: 'Il', 39: "l'extr\xeame", 40: "J'ai", 41: 'silences,', 42: "qu'elle,", 43: 'le', 44: 'trace,', 45: 'avec', 46: 'seras', 47: 'dire,', 48: 'femme', 49: 'soit'}
arcs = {(3, 15): None, (46, 47): None, (42, 33): None, (35, 45): None, (1, 14): None, (22, 26): None, (26, 28): None, (43, 29): None, (40, 41): None, (29, 44): None, (17, 3): None, (32, 37): None, (24, 19): None, (46, 34): None, (11, 19): None, (34, 49): None, (22, 2): None, (37, 48): None, (14, 12): None, (3, 10): None, (5, 18): None, (12, 24): None, (34, 32): None, (45, 39): None, (37, 26): None, (33, 45): None, (34, 47): None, (36, 31): None, (29, 47): None, (13, 11): None, (12, 21): None, (2, 16): None, (5, 4): None, (33, 35): None, (28, 49): None, (25, 49): None, (21, 0): None, (3, 13): None, (18, 24): None, (12, 7): None, (13, 15): None, (11, 1): None, (16, 23): None, (37, 45): None, (27, 32): None, (32, 41): None, (8, 24): None, (10, 1): None, (2, 24): None, (24, 11): None, (2, 14): None, (47, 36): None, (48, 39): None, (30, 25): None, (30, 43): None, (15, 14): None, (26, 27): None, (6, 8): None, (20, 10): None, (19, 17): None, (5, 7): None, (44, 25): None, (27, 38): None, (2, 0): None, (3, 18): None, (3, 9): None, (25, 33): None, (42, 48): None, (2, 15): None, (26, 48): None, (26, 38): None, (7, 8): None, (8, 4): None}
from mlstatpy.graph.graphviz_helper import draw_graph_graphviz
draw_graph_graphviz(noeuds, arcs, "image.png")
from IPython.display import Image
Image("image.png", width=400)
Explanation: A graph
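Since the notebook's theme is connected components and shortest paths, here is a minimal sketch (not from the original notebook) that counts the connected components of the undirected version of this graph:
from collections import deque
def connected_components(nodes, edges):
    # build an undirected adjacency list from the arc keys
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        components.append(comp)
    return components
print(len(connected_components(noeuds, arcs)))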
End of explanation |
2,927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Epochs data structure
Step1:
Step2: As we saw in the tut-events-vs-annotations tutorial, we can extract an
events array from
Step3: <div class="alert alert-info"><h4>Note</h4><p>We could also have loaded the events from file, using
Step4: You'll see from the output that
Step5: Notice that the Event IDs are in quotes; since we didn't provide an event
dictionary, the
Step6: This time let's pass preload=True and provide an event dictionary; our
provided dictionary will get stored as the event_id attribute and will
make referencing events and pooling across event types easier
Step7: Notice that the output now mentions "1 bad epoch dropped". In the tutorial
section tut-reject-epochs-section we saw how you can specify channel
amplitude criteria for rejecting epochs, but here we haven't specified any
such criteria. In this case, it turns out that the last event was too close to
the end of the (cropped) raw file to accommodate our requested tmax of
0.7 seconds, so the final epoch was dropped because it was too short. Here
are the drop_log entries for the last 4 epochs (empty lists indicate
epochs that were not dropped)
Step8: <div class="alert alert-info"><h4>Note</h4><p>If you forget to provide the event dictionary to the
Step9: Notice that the individual epochs are sequentially numbered along the bottom
axis; the event ID associated with the epoch is marked on the top axis;
epochs are separated by vertical dashed lines; and a vertical solid green
line marks time=0 for each epoch (i.e., in this case, the stimulus onset
time for each trial). Epoch plots are interactive (similar to
Step10: We can also pool across conditions easily, thanks to how MNE-Python handles
the / character in epoch labels (using what is sometimes called
"tag-based indexing")
Step11: You can also pool conditions by passing multiple tags as a list. Note that
MNE-Python will not complain if you ask for tags not present in the object,
as long as it can find some match
Step12: However, if no match is found, an error is returned
Step13: Selecting epochs by index
Step14: Selecting, dropping, and reordering channels
You can use the
Step15: Changing channel name and type
You can change the name or type of a channel using
Step16: Selection in the time domain
To change the temporal extent of the
Step17: However, if you wanted to expand the time domain of an
Step18: Note that although time shifting respects the sampling frequency (the spacing
between samples), it does not enforce the assumption that there is a sample
occurring at exactly time=0.
Extracting data in other forms
The
Step19: Note that if your analysis requires repeatedly extracting single epochs from
an
Step20: See the tut-epochs-dataframe tutorial for many more examples of the
Step21: The MNE-Python naming convention for epochs files is that the file basename
(the part before the .fif or .fif.gz extension) should end with
-epo or _epo, and a warning will be issued if the filename you
provide does not adhere to that convention.
As a final note, be aware that the class of the epochs object is different
when epochs are loaded from disk rather than generated from a
Step22: In almost all cases this will not require changing anything about your code.
However, if you need to do type checking on epochs objects, you can test
against the base class that these classes are derived from
Step23: Iterating over Epochs
Iterating over an
Step24: If you want to iterate over | Python Code:
import os
import mne
Explanation: The Epochs data structure: discontinuous data
This tutorial covers the basics of creating and working with :term:epoched
<epochs> data. It introduces the :class:~mne.Epochs data structure in
detail, including how to load, query, subselect, export, and plot data from an
:class:~mne.Epochs object. For more information about visualizing
:class:~mne.Epochs objects, see tut-visualize-epochs. For info on
creating an :class:~mne.Epochs object from (possibly simulated) data in a
:class:NumPy array <numpy.ndarray>, see tut_creating_data_structures.
:depth: 2
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=60)
Explanation: :class:~mne.Epochs objects are a data structure for representing and
analyzing equal-duration chunks of the EEG/MEG signal. :class:~mne.Epochs
are most often used to represent data that is time-locked to repeated
experimental events (such as stimulus onsets or subject button presses), but
can also be used for storing sequential or overlapping frames of a continuous
signal (e.g., for analysis of resting-state activity; see
fixed-length-events). Inside an :class:~mne.Epochs object, the data
are stored in an :class:array <numpy.ndarray> of shape (n_epochs,
n_channels, n_times).
:class:~mne.Epochs objects have many similarities with :class:~mne.io.Raw
objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> through the
:meth:~mne.Epochs.get_data method or to a :class:Pandas DataFrame
<pandas.DataFrame> through the :meth:~mne.Epochs.to_data_frame method.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects support channel
selection by index or name, including :meth:~mne.Epochs.pick,
:meth:~mne.Epochs.pick_channels and :meth:~mne.Epochs.pick_types
methods.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Epochs.add_proj, :meth:~mne.Epochs.del_proj, and
:meth:~mne.Epochs.plot_projs_topomap methods.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects have
:meth:~mne.Epochs.copy, :meth:~mne.Epochs.crop,
:meth:~mne.Epochs.time_as_index, :meth:~mne.Epochs.filter, and
:meth:~mne.Epochs.resample methods.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects have
:attr:~mne.Epochs.times, :attr:~mne.Epochs.ch_names,
:attr:~mne.Epochs.proj, and :class:info <mne.Info> attributes.
Both :class:~mne.Epochs and :class:~mne.io.Raw objects have built-in
plotting methods :meth:~mne.Epochs.plot, :meth:~mne.Epochs.plot_psd,
and :meth:~mne.Epochs.plot_psd_topomap.
Creating Epoched data from a Raw object
The example dataset we've been using thus far doesn't include pre-epoched
data, so in this section we'll load the continuous data and create epochs
based on the events recorded in the :class:~mne.io.Raw object's STIM
channels. As we often do in these tutorials, we'll :meth:~mne.io.Raw.crop
the :class:~mne.io.Raw data to save memory:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
Explanation: As we saw in the tut-events-vs-annotations tutorial, we can extract an
events array from :class:~mne.io.Raw objects using :func:mne.find_events:
End of explanation
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>We could also have loaded the events from file, using
:func:`mne.read_events`::
sample_data_events_file = os.path.join(sample_data_folder,
'MEG', 'sample',
'sample_audvis_raw-eve.fif')
events_from_file = mne.read_events(sample_data_events_file)
See `tut-section-events-io` for more details.</p></div>
The :class:~mne.io.Raw object and the events array are the bare minimum
needed to create an :class:~mne.Epochs object, which we create with the
:class:mne.Epochs class constructor. However, you will almost surely want
to change some of the other default parameters. Here we'll change tmin
and tmax (the time relative to each event at which to start and end each
epoch). Note also that the :class:~mne.Epochs constructor accepts
parameters reject and flat for rejecting individual epochs based on
signal amplitude. See the tut-reject-epochs-section section for
examples.
End of explanation
print(epochs)
Explanation: You'll see from the output that:
all 320 events were used to create epochs
baseline correction was automatically applied (by default, baseline is
defined as the time span from tmin to 0, but can be customized with
the baseline parameter)
no additional metadata was provided (see tut-epochs-metadata for
details)
the projection operators present in the :class:~mne.io.Raw file were
copied over to the :class:~mne.Epochs object
If we print the :class:~mne.Epochs object, we'll also see a note that the
epochs are not copied into memory by default, and a count of the number of
epochs created for each integer Event ID.
End of explanation
print(epochs.event_id)
Explanation: Notice that the Event IDs are in quotes; since we didn't provide an event
dictionary, the :class:mne.Epochs constructor created one automatically and
used the string representation of the integer Event IDs as the dictionary
keys. This is more clear when viewing the event_id attribute:
End of explanation
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'face': 5, 'buttonpress': 32}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
print(epochs.event_id)
del raw # we're done with raw, free up some memory
Explanation: This time let's pass preload=True and provide an event dictionary; our
provided dictionary will get stored as the event_id attribute and will
make referencing events and pooling across event types easier:
End of explanation
print(epochs.drop_log[-4:])
Explanation: Notice that the output now mentions "1 bad epoch dropped". In the tutorial
section tut-reject-epochs-section we saw how you can specify channel
amplitude criteria for rejecting epochs, but here we haven't specified any
such criteria. In this case, it turns out that the last event was too close to
the end of the (cropped) raw file to accommodate our requested tmax of
0.7 seconds, so the final epoch was dropped because it was too short. Here
are the drop_log entries for the last 4 epochs (empty lists indicate
epochs that were not dropped):
End of explanation
epochs.plot(n_epochs=10)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>If you forget to provide the event dictionary to the :class:`~mne.Epochs`
constructor, you can add it later by assigning to the ``event_id``
attribute::
epochs.event_id = event_dict</p></div>
Basic visualization of Epochs objects
The :class:~mne.Epochs object can be visualized (and browsed interactively)
using its :meth:~mne.Epochs.plot method:
End of explanation
print(epochs['face'])
Explanation: Notice that the individual epochs are sequentially numbered along the bottom
axis; the event ID associated with the epoch is marked on the top axis;
epochs are separated by vertical dashed lines; and a vertical solid green
line marks time=0 for each epoch (i.e., in this case, the stimulus onset
time for each trial). Epoch plots are interactive (similar to
:meth:raw.plot() <mne.io.Raw.plot>) and have many of the same interactive
controls as :class:~mne.io.Raw plots. Horizontal and vertical scrollbars
allow browsing through epochs or channels (respectively), and pressing
:kbd:? when the plot is focused will show a help screen with all the
available controls. See tut-visualize-epochs for more details (as well
as other ways of visualizing epoched data).
Subselecting epochs
Now that we have our :class:~mne.Epochs object with our descriptive event
labels added, we can subselect epochs easily using square brackets. For
example, we can load all the "catch trials" where the stimulus was a face:
End of explanation
# pool across left + right
print(epochs['auditory'])
assert len(epochs['auditory']) == (len(epochs['auditory/left']) +
len(epochs['auditory/right']))
# pool across auditory + visual
print(epochs['left'])
assert len(epochs['left']) == (len(epochs['auditory/left']) +
len(epochs['visual/left']))
Explanation: We can also pool across conditions easily, thanks to how MNE-Python handles
the / character in epoch labels (using what is sometimes called
"tag-based indexing"):
End of explanation
print(epochs[['right', 'bottom']])
Explanation: You can also pool conditions by passing multiple tags as a list. Note that
MNE-Python will not complain if you ask for tags not present in the object,
as long as it can find some match: the below example is parsed as
(inclusive) 'right' or 'bottom', and you can see from the output
that it selects only auditory/right and visual/right.
End of explanation
try:
print(epochs[['top', 'bottom']])
except KeyError:
print('Tag-based selection with no matches raises a KeyError!')
Explanation: However, if no match is found, an error is returned:
End of explanation
print(epochs[:10]) # epochs 0-9
print(epochs[1:8:2]) # epochs 1, 3, 5, 7
print(epochs['buttonpress'][:4]) # first 4 "buttonpress" epochs
print(epochs['buttonpress'][[0, 1, 2, 3]]) # same as previous line
Explanation: Selecting epochs by index
:class:~mne.Epochs objects can also be indexed with integers, :term:slices
<slice>, or lists of integers. This method of selection ignores event
labels, so if you want the first 10 epochs of a particular type, you can
select the type first, then use integers or slices:
End of explanation
epochs_eeg = epochs.copy().pick_types(meg=False, eeg=True)
print(epochs_eeg.ch_names)
new_order = ['EEG 002', 'STI 014', 'EOG 061', 'MEG 2521']
epochs_subset = epochs.copy().reorder_channels(new_order)
print(epochs_subset.ch_names)
del epochs_eeg, epochs_subset
Explanation: Selecting, dropping, and reordering channels
You can use the :meth:~mne.Epochs.pick, :meth:~mne.Epochs.pick_channels,
:meth:~mne.Epochs.pick_types, and :meth:~mne.Epochs.drop_channels methods
to modify which channels are included in an :class:~mne.Epochs object. You
can also use :meth:~mne.Epochs.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Epochs.reorder_channels will be
dropped. Note that these channel selection methods modify the object
in-place (unlike the square-bracket indexing to select epochs seen above)
so in interactive/exploratory sessions you may want to create a
:meth:~mne.Epochs.copy first.
End of explanation
epochs.rename_channels({'EOG 061': 'BlinkChannel'})
epochs.set_channel_types({'EEG 060': 'ecg'})
print(list(zip(epochs.ch_names, epochs.get_channel_types()))[-4:])
# let's set them back to the correct values before moving on
epochs.rename_channels({'BlinkChannel': 'EOG 061'})
epochs.set_channel_types({'EEG 060': 'eeg'})
Explanation: Changing channel name and type
You can change the name or type of a channel using
:meth:~mne.Epochs.rename_channels or :meth:~mne.Epochs.set_channel_types.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
End of explanation
shorter_epochs = epochs.copy().crop(tmin=-0.1, tmax=0.1, include_tmax=True)
for name, obj in dict(Original=epochs, Cropped=shorter_epochs).items():
print('{} epochs has {} time samples'
.format(name, obj.get_data().shape[-1]))
Explanation: Selection in the time domain
To change the temporal extent of the :class:~mne.Epochs, you can use the
:meth:~mne.Epochs.crop method:
End of explanation
# shift times so that first sample of each epoch is at time zero
later_epochs = epochs.copy().shift_time(tshift=0., relative=False)
print(later_epochs.times[:3])
# shift times by a relative amount
later_epochs.shift_time(tshift=-7, relative=True)
print(later_epochs.times[:3])
del shorter_epochs, later_epochs
Explanation: However, if you wanted to expand the time domain of an :class:~mne.Epochs
object, you would need to go back to the :class:~mne.io.Raw data and
recreate the :class:~mne.Epochs with different values for tmin and/or
tmax.
It is also possible to change the "zero point" that defines the time values
in an :class:~mne.Epochs object, with the :meth:~mne.Epochs.shift_time
method. :meth:~mne.Epochs.shift_time allows shifting times relative to the
current values, or specifying a fixed time to set as the new time value of
the first sample (deriving the new time values of subsequent samples based on
the :class:~mne.Epochs object's sampling frequency).
End of explanation
eog_data = epochs.get_data(picks='EOG 061')
meg_data = epochs.get_data(picks=['mag', 'grad'])
channel_4_6_8 = epochs.get_data(picks=slice(4, 9, 2))
for name, arr in dict(EOG=eog_data, MEG=meg_data, Slice=channel_4_6_8).items():
print('{} contains {} channels'.format(name, arr.shape[1]))
Explanation: Note that although time shifting respects the sampling frequency (the spacing
between samples), it does not enforce the assumption that there is a sample
occurring at exactly time=0.
Extracting data in other forms
The :meth:~mne.Epochs.get_data method returns the epoched data as a
:class:NumPy array <numpy.ndarray>, of shape (n_epochs, n_channels,
n_times); an optional picks parameter selects a subset of channels by
index, name, or type:
End of explanation
df = epochs.to_data_frame(index=['condition', 'epoch', 'time'])
df.sort_index(inplace=True)
print(df.loc[('auditory/left', slice(0, 10), slice(100, 107)),
'EEG 056':'EEG 058'])
del df
Explanation: Note that if your analysis requires repeatedly extracting single epochs from
an :class:~mne.Epochs object, epochs.get_data(item=2) will be much
faster than epochs[2].get_data(), because it avoids the step of
subsetting the :class:~mne.Epochs object first.
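As a quick illustration, both calls should return the same values for a given epoch:
single_epoch_fast = epochs.get_data(item=2)
single_epoch_slow = epochs[2].get_data()
print((single_epoch_fast == single_epoch_slow).all())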
You can also export :class:~mne.Epochs data to :class:Pandas DataFrames
<pandas.DataFrame>. Here, the :class:~pandas.DataFrame index will be
constructed by converting the time of each sample into milliseconds and
rounding it to the nearest integer, and combining it with the event types and
epoch numbers to form a hierarchical :class:~pandas.MultiIndex. Each
channel will appear in a separate column. Then you can use any of Pandas'
tools for grouping and aggregating data; for example, here we select any
epochs numbered 10 or less from the auditory/left condition, and extract
times between 100 and 107 ms on channels EEG 056 through EEG 058
(note that slice indexing within Pandas' :obj:~pandas.DataFrame.loc is
inclusive of the endpoint):
End of explanation
epochs.save('saved-audiovisual-epo.fif', overwrite=True)
epochs_from_file = mne.read_epochs('saved-audiovisual-epo.fif', preload=False)
Explanation: See the tut-epochs-dataframe tutorial for many more examples of the
:meth:~mne.Epochs.to_data_frame method.
Loading and saving Epochs objects to disk
:class:~mne.Epochs objects can be loaded and saved in the .fif format
just like :class:~mne.io.Raw objects, using the :func:mne.read_epochs
function and the :meth:~mne.Epochs.save method. Functions are also
available for loading data that was epoched outside of MNE-Python, such as
:func:mne.read_epochs_eeglab and :func:mne.read_epochs_kit.
End of explanation
print(type(epochs))
print(type(epochs_from_file))
Explanation: The MNE-Python naming convention for epochs files is that the file basename
(the part before the .fif or .fif.gz extension) should end with
-epo or _epo, and a warning will be issued if the filename you
provide does not adhere to that convention.
As a final note, be aware that the class of the epochs object is different
when epochs are loaded from disk rather than generated from a
:class:~mne.io.Raw object:
End of explanation
print(all([isinstance(epochs, mne.BaseEpochs),
isinstance(epochs_from_file, mne.BaseEpochs)]))
Explanation: In almost all cases this will not require changing anything about your code.
However, if you need to do type checking on epochs objects, you can test
against the base class that these classes are derived from:
End of explanation
for epoch in epochs[:3]:
print(type(epoch))
Explanation: Iterating over Epochs
Iterating over an :class:~mne.Epochs object will yield :class:arrays
<numpy.ndarray> rather than single-trial :class:~mne.Epochs objects:
End of explanation
for index in range(3):
print(type(epochs[index]))
Explanation: If you want to iterate over :class:~mne.Epochs objects, you can use an
integer index as the iterator:
End of explanation |
2,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
US energy consumption
This example is based on the Sankey diagrams of US energy consumption from the Lawrence Livermore National Laboratory (thanks to John Muth for the suggestion and transcribing the data). We jump straight to the final result; for more explanation of the steps and concepts, see the tutorials.
Step1: Load the dataset
Step2: This defines the order the nodes appear in
Step3: Now define the Sankey diagram definition.
Step4: Define the colours to roughly imitate the original Sankey diagram
Step5: And here's the result! | Python Code:
from floweaver import *
Explanation: US energy consumption
This example is based on the Sankey diagrams of US energy consumption from the Lawrence Livermore National Laboratory (thanks to John Muth for the suggestion and transcribing the data). We jump straight to the final result; for more explanation of the steps and concepts, see the tutorials.
End of explanation
dataset = Dataset.from_csv('us-energy-consumption.csv',
dim_process_filename='us-energy-consumption-processes.csv')
Explanation: Load the dataset:
End of explanation
sources = ['Solar', 'Nuclear', 'Hydro', 'Wind', 'Geothermal',
'Natural_Gas', 'Coal', 'Biomass', 'Petroleum']
uses = ['Residential', 'Commercial', 'Industrial', 'Transportation']
Explanation: This defines the order the nodes appear in:
End of explanation
nodes = {
'sources': ProcessGroup('type == "source"', Partition.Simple('process', sources), title='Sources'),
'imports': ProcessGroup(['Net_Electricity_Import'], title='Net electricity imports'),
'electricity': ProcessGroup(['Electricity_Generation'], title='Electricity Generation'),
'uses': ProcessGroup('type == "use"', partition=Partition.Simple('process', uses)),
'energy_services': ProcessGroup(['Energy_Services'], title='Energy services'),
'rejected': ProcessGroup(['Rejected_Energy'], title='Rejected energy'),
'direct_use': Waypoint(Partition.Simple('source', [
# This is a hack to hide the labels of the partition, there should be a better way...
(' '*i, [k]) for i, k in enumerate(sources)
])),
}
ordering = [
[[], ['sources'], []],
[['imports'], ['electricity', 'direct_use'], []],
[[], ['uses'], []],
[[], ['rejected', 'energy_services'], []]
]
bundles = [
Bundle('sources', 'electricity'),
Bundle('sources', 'uses', waypoints=['direct_use']),
Bundle('electricity', 'uses'),
Bundle('imports', 'uses'),
Bundle('uses', 'energy_services'),
Bundle('uses', 'rejected'),
Bundle('electricity', 'rejected'),
]
Explanation: Now define the Sankey diagram definition.
End of explanation
palette = {
'Solar': 'gold',
'Nuclear': 'red',
'Hydro': 'blue',
'Wind': 'purple',
'Geothermal': 'brown',
'Natural_Gas': 'steelblue',
'Coal': 'black',
'Biomass': 'lightgreen',
'Petroleum': 'green',
'Electricity': 'orange',
'Rejected energy': 'lightgrey',
'Energy services': 'dimgrey',
}
Explanation: Define the colours to roughly imitate the original Sankey diagram:
End of explanation
sdd = SankeyDefinition(nodes, bundles, ordering,
flow_partition=dataset.partition('type'))
weave(sdd, dataset, palette=palette) \
.to_widget(width=700, height=450, margins=dict(left=100, right=120), debugging=True)
Explanation: And here's the result!
End of explanation |
2,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Photo-z Determination for SpIES High-z Candidates
Notebook that actually applies the algorithms from SpIESHighzQuasarPhotoz.ipynb to the quasar candidates.
Step1: Since we are running on separate test data, we don't need to do a train_test_split here. But we will scale the data. Need to remember to scale the test data later!
Step2: Applying to Quasars Candidates
Quasars candidates from the legacy KDE algorithm are in<br>
GTR-ADM-QSO-ir-testhighz_kdephotoz_lup_2016_quasar_candidates.dat
Quasars candidates from the Random Forest Algorithm are in<br>
GTR-ADM-QSO-ir_good_test_2016_out.fits
Quasar candidates from the RF, SVM, and/or bagging algorithms are in<br>
GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits<br>
In the case of the latter file, this includes Stripe82 only. If we run on the other files, we might want to limit to Stripe 82 to keep the computing time reasonable.
Step3: If you want to compare ZSPEC to ZPHOT, use the cells below for test set
Step4: Scale the test data
Step5: Not currently executing the next 2 cells, but putting the code here in case we want to do it later.
Step6: Instantiate Photo-z Algorithm of Choice
Here using Nadaraya-Watson and Random Forests
Step7: Apply Photo-z Algorithm(s)
Random Forest
Step8: Nadaraya-Watson
Step9: Only need this if Xtest is too big | Python Code:
## Read in the Training Data and Instantiating the Photo-z Algorithm
%matplotlib inline
from astropy.table import Table
import numpy as np
import matplotlib.pyplot as plt
#data = Table.read('GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean.fits')
#JT PATH ON TRITON to training set after classification
#data = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Training_set/GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean_with_shenlabel.fits')
data = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Training_set/GTR-ADM-QSO-Trainingset-with-McGreer-VVDS-DR12Q_splitlabel_VCVcut_best.fits')
#JT PATH HOME USE SHEN ZCUT
#data = Table.read('/home/john/Catalogs/QSO_Candidates/Training_set/GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean_with_shenlabel.fits')
#data = data.filled()
# Remove stars
qmask = (data['zspec']>0)
qdata = data[qmask]
print len(qdata)
# X is in the format need for all of the sklearn tools, it just has the colors
#Xtrain = np.vstack([ qdata['ug'], qdata['gr'], qdata['ri'], qdata['iz'], qdata['zs1'], qdata['s1s2']]).T
Xtrain = np.vstack([np.asarray(qdata[name]) for name in ['ug', 'gr', 'ri', 'iz', 'zs1', 's1s2']]).T
#y = np.array(data['labels'])
ytrain = np.array(qdata['zspec'])
Explanation: Photo-z Determination for SpIES High-z Candidates
Notebook that actually applies the algorithms from SpIESHighzQuasarPhotoz.ipynb to the quasar candidates.
End of explanation
# For algorithms that need scaled data:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(Xtrain) # Don't cheat - fit only on training data
Explanation: Since we are running on separate test data, we don't need to do a train_test_split here. But we will scale the data. Need to remember to scale the test data later!
End of explanation
#testdata = Table.read('GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits')
# TEST DATA USING 3.5<z<5 zrange ON TRITON
#testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Final_S82_candidates_full/GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits')
# TEST DATA USING 2.9<z<5.4 zrange ON HOME
#testdata = Table.read('/Users/johntimlin/Catalogs/QSO_Candidates/photoz/SpIES_SHELA_Quasar_Canidates_Shen_zrange_JTmultiproc.fits')
#testdata = Table.read('./catalogs/HZ_forphotoz.fits')
testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/New_training_candidates/Test_point_source_classifier/Final_sets/HZLZ_combined_all_wphotoz_alldata_allclassifiers.fits')
#Limit to objects that have been classified as quasars
#qsocandmask = ((testdata['ypredRFC']==0) | (testdata['ypredSVM']==0) | (testdata['ypredBAG']==0))
testdatacand = testdata#[qsocandmask]
print len(testdata),len(testdatacand)
Explanation: Applying to Quasar Candidates
Quasar candidates from the legacy KDE algorithm are in<br>
GTR-ADM-QSO-ir-testhighz_kdephotoz_lup_2016_quasar_candidates.dat
Quasar candidates from the Random Forest Algorithm are in<br>
GTR-ADM-QSO-ir_good_test_2016_out.fits
Quasar candidates from the RF, SVM, and/or bagging algorithms are in<br>
GTR-ADM-QSO-ir_good_test_2016_out_Stripe82all.fits<br>
In the case of the latter file, this includes Stripe82 only. If we run on the other files, we might want to limit to Stripe 82 to keep the computing time reasonable.
End of explanation
## Test zspec objects with zspec >=2.9 and see how well the zphot matches with zspec
#testdata = Table.read('/Users/johntimlin/Catalogs/QSO_candidates/Final_S82_candidates_full/QSOs_S82_wzspec_wcolors.fits')
#Limit to objects that have been classified as quasars
#qsocandmask = ((testdata['ypredRFC']==0) | (testdata['ypredSVM']==0) | (testdata['ypredBAG']==0))
#qsocandmask = (testdata['ZSPEC'] >= 2.9)
#testdatacand = testdata#[qsocandmask]
#print len(testdata),len(testdatacand)
Explanation: If you want to compare ZSPEC to ZPHOT, use the cells below for test set
End of explanation
#Xtest = np.vstack([ testdatacand['ug'], testdatacand['gr'], testdatacand['ri'], testdatacand['iz'], testdatacand['zs1'], testdatacand['s1s2']]).T
Xtest = np.vstack([np.asarray(testdatacand[name]) for name in ['ug', 'gr', 'ri', 'iz', 'zs1', 's1s2']]).T
XStest = scaler.transform(Xtest) # apply same transformation to test data
Explanation: Scale the test data
End of explanation
# Read in KDE candidates
dataKDE = Table.read('GTR-ADM-QSO-ir-testhighz_kdephotoz_lup_2016_quasar_candidates.dat', format='ascii')
print dataKDE.keys()
print len(dataKDE)  # XKDE is only defined on the next line, so print the table's row count instead
XKDE = np.vstack([ dataKDE['ug'], dataKDE['gr'], dataKDE['ri'], dataKDE['iz'], dataKDE['zch1'], dataKDE['ch1ch2'] ]).T
# Read in RF candidates
dataRF = Table.read('GTR-ADM-QSO-ir_good_test_2016_out.fits')
print dataRF.keys()
print len(dataRF)
# Candidates only
maskRF = (dataRF['ypred']==0)
dataRF = dataRF[maskRF]
print len(dataRF)
# X is in the format needed for all of the sklearn tools; it just has the colors
XRF = np.vstack([ dataRF['ug'], dataRF['gr'], dataRF['ri'], dataRF['iz'], dataRF['zs1'], dataRF['s1s2']]).T
Explanation: Not currently executing the next 2 cells, but putting the code here in case we want to do it later.
End of explanation
import numpy as np
from astroML.linear_model import NadarayaWatson
model = NadarayaWatson('gaussian', 0.05)
model.fit(Xtrain,ytrain)
from sklearn.ensemble import RandomForestRegressor
modelRF = RandomForestRegressor()
modelRF.fit(Xtrain,ytrain)
Explanation: Instantiate Photo-z Algorithm of Choice
Here using Nadaraya-Watson and Random Forests
End of explanation
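Before applying either estimator to the candidates, a quick hold-out comparison against the spectroscopic redshifts can be a useful sanity check. The snippet below is only a sketch and is not part of the original notebook (a more careful check would shuffle the training rows first).
# hedged sanity check: hold out the last 20% of the labelled training set
ncheck = int(0.8*len(ytrain))
modelcheck = RandomForestRegressor()
modelcheck.fit(Xtrain[:ncheck], ytrain[:ncheck])
dz = (modelcheck.predict(Xtrain[ncheck:]) - ytrain[ncheck:])/(1.0 + ytrain[ncheck:])
print 'sigma_NMAD =', 1.48*np.median(np.abs(dz - np.median(dz)))
print 'outlier fraction (|dz| > 0.15) =', np.mean(np.abs(dz) > 0.15)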
zphotRF = modelRF.predict(Xtest)
Explanation: Apply Photo-z Algorithm(s)
Random Forest
End of explanation
zphotNW = model.predict(Xtest)
Explanation: Nadaraya-Watson
End of explanation
from dask import compute, delayed
def process(Xin):
return model.predict(Xin)
# Create dask objects
dobjs = [delayed(process)(x.reshape(1,-1)) for x in Xtest]
import dask.threaded
ypred = compute(*dobjs, get=dask.threaded.get)
# The dask output needs to be reformatted.
zphotNW = np.array(ypred).reshape(1,-1)[0]
testdatacand['zphotNW'] = zphotNW
testdatacand['zphotRF'] = zphotRF
#TRITON PATH
#testdatacand.write('/Users/johntimlin/Catalogs/QSO_candidates/photoz/Candidates_photoz_S82_shenzrange.fits', format='fits')
#HOME PATH
#testdatacand.write('/home/john/Catalogs/QSO_Candidates/photoz/Candidates_photoz_S82_shenzrange.fits', format='fits')
testdatacand.write('./HZLZ_combined_all_hzclassifiers_wphotoz.fits')
from densityplot import *
from pylab import *
fig = plt.figure(figsize=(5,5))
hex_scatter(testdatacand['zphotNW'],testdatacand['ug'], min_cnt=10, levels=2, std=True, smoothing=1,
hkwargs={'gridsize': 100, 'cmap': plt.cm.Blues},
skwargs={'color': 'k'})
plt.xlabel('zphot')
plt.ylabel('u-g')
#plt.xlim([-0.1,5.5])
#plt.ylim([-0.1,5.5])
plt.show()
from astroML.plotting import hist as fancyhist
fancyhist(testdatacand['zphotRF'], bins="freedman", histtype="step")
Explanation: Only need this if Xtest is too big
End of explanation |
2,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
For ecoinvent3.4 and ecoinvent3.5, the LCIA_implementation.xls file does not include units for the emissions of pollutants anymore. These units are, however, required by ecospold2matrix. This notebook introduces these emission units based on previous LCIA_implementation.xls files (e.g., based on ecoinvent3.3).
This notebook relies on pandas to import and manipulate data from Excel and requires the user to have access to the LCIA_implementation files for ecoinvent3.4 or 3.5 that they want to prepare for ecospold2matrix, as well as an older LCIA_implementation file.
Step1: Pollutants which were already present in your old version will have their exchange unit introduced in the incomplete_LCIA. New pollutants of the recent LCIA_implementation file however, will still have NaN as their unit (i.e., no unit specified).
Step2: Export the completed version of the LCIA_implementation file wherever you want (e.g., in your ecoinvent folder with datasets and MasterData) | Python Code:
import pandas as pd
old_LCIA = pd.read_excel('put_the_path_to_your_old_LCIA_implementation_file.xls_here','CFs')
incomplete_LCIA = pd.read_excel('put_the_path_to_your_incomplete_LCIA_implementation_file.xls_here','CFs')
complete_LCIA = incomplete_LCIA.merge(old_LCIA,how='left')
# drop obsolete columns
complete_LCIA = complete_LCIA.drop([i for i in old_LCIA.columns if i not in incomplete_LCIA.columns
and i != 'exchange unit'],axis=1)
Explanation: For ecoinvent3.4 and ecoinvent3.5, the LCIA_implementation.xls file does not include units for the emissions of pollutants anymore. These units are, however, required by ecospold2matrix. This notebook introduces these emission units based on previous LCIA_implementation.xls files (e.g., based on ecoinvent3.3).
This notebook relies on pandas to import and manipulate data from Excel and requires the user to have access to the LCIA_implementation files for ecoinvent3.4 or 3.5 that they want to prepare for ecospold2matrix, as well as an older LCIA_implementation file.
End of explanation
# m2*year for land use/land occupation categories
complete_LCIA.loc[[i for i in complete_LCIA.index if 'land' in complete_LCIA.category[i]
and type(complete_LCIA.loc[i,'exchange unit']) == float],'exchange unit'] = 'm2*year'
# kBq for pollutant linked to radioactivity
complete_LCIA.loc[[i for i in complete_LCIA.index if (complete_LCIA.category[i] == 'ionising radiation'
or complete_LCIA.category[i] == 'radioactive waste to deposit')
and type(complete_LCIA.loc[i,'exchange unit']) == float],'exchange unit'] = 'kBq'
# m3 for amounts of water
complete_LCIA.loc[[i for i in complete_LCIA.index if complete_LCIA.category[i] == 'water depletion'
and type(complete_LCIA.loc[i,'exchange unit']) == float],'exchange unit'] = 'm3'
# kg for the rest
complete_LCIA.loc[[i for i in complete_LCIA.index if type(complete_LCIA.loc[i,'exchange unit']) == float],
'exchange unit'] = 'kg'
Explanation: Pollutants which were already present in your old version will have their exchange unit introduced in the incomplete_LCIA. New pollutants of the recent LCIA_implementation file however, will still have NaN as their unit (i.e., no unit specified).
End of explanation
complete_LCIA.to_excel('put_the_path_where_you_want_this_completed_version_to_be_stored.xls')
Explanation: Export the completed version of the LCIA_implementation file wherever you want (e.g., in your ecoinvent folder with datasets and MasterData)
End of explanation |
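As a final check (an addition, not part of the original notebook), it may be worth confirming that no exchange is left without a unit before handing the file to ecospold2matrix:
# count exchanges that still have no unit after the merge and the category-based defaults
missing_units = complete_LCIA['exchange unit'].isnull().sum()
print('Exchanges still missing a unit: %i' % missing_units)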
2,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Our test set here includes the 16 million molecules from the old ZINC clean set that could be successfully processed by the RDKit.
We use the Standard InChI that comes with ChEMBL and a non-standard InChI (options "/FixedH /SUU") that allows tautomers to be distinguished. Here's the sequence of psql commands used to generate that set
Step1: Big caveat here
Step2: grouping on the main layer
Step3: Look at a few of the common main layer groups
Step4: Charges
Step5: Stereo grouping
Step6: an aside
I discovered this little InChI pathology while doing this work. I spent a good half hour trying to track down the RDKit bug that made it happen before realizing that it's by design. Note that this is with FixedH InChIs.
Step7: Sucks to be you if it's important to you that those molecules be different and you're using InChI.
Note that at least ZINC12405219 and ZINC19940218 are, according to ZINC, separately available from vendors
Isotopes
Step8: No need here, this set has no labelled compounds. That's likely a property of how the ZINC clean set was constructed.
examples where the tautomerism leads to new tetrahedral symmetry
Step9: Not much interesting there. There's no simple query to find questionable tautomer motion. | Python Code:
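The cells below use the %sql magic (ipython-sql) together with RDKit's Chem and Draw helpers; the corresponding setup cell is not shown here, but it presumably looks roughly like this:
%load_ext sql
from rdkit import Chem
from rdkit.Chem import Draw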
%sql postgresql://localhost/inchi_split \
select count(*) from zinc_clean_nonstandard;
Explanation: Our test set here includes the 16 million molecules from the old ZINC clean set that could be successfully processed by the RDKit.
We use the Standard InChI that comes with ChEMBL and a non-standard InChI (options "/FixedH /SUU") that allows tautomers to be distinguished. Here's the sequence of psql commands used to generate that set:
End of explanation
d = %sql \
select formula,count(zinc_id) freq from zinc_clean_nonstandard group by formula \
order by freq desc limit 10;
d
Explanation: Big caveat here: I forgot the last commit in my loading script, so the last block of structures is missing.
Formula level grouping
End of explanation
d = %sql \
select formula,skeleton,hydrogens,count(zinc_id) freq from zinc_clean_nonstandard group by \
(formula,skeleton,hydrogens) \
order by freq desc limit 10;
Explanation: grouping on the main layer
End of explanation
d[:5]
tpl=d[0][:-1]
print(tpl)
rows = %sql \
select zinc_id,smiles from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
(formula,skeleton,hydrogens) = :tpl
cids = [x for x,y in rows][:9]
ms = [Chem.MolFromSmiles(y) for x,y in rows][:9]
Draw.MolsToGridImage(ms,legends=cids)
tpl=d[1][:-1]
print(tpl)
rows = %sql \
select zinc_id,smiles from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
(formula,skeleton,hydrogens) = :tpl
cids = [x for x,y in rows][:9]
ms = [Chem.MolFromSmiles(y) for x,y in rows][:9]
Draw.MolsToGridImage(ms,legends=cids)
tpl=d[4][:-1]
print(tpl)
rows = %sql \
select zinc_id,smiles from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
(formula,skeleton,hydrogens) = :tpl
cids = [x for x,y in rows][:9]
ms = [Chem.MolFromSmiles(y) for x,y in rows][:9]
Draw.MolsToGridImage(ms,legends=cids)
Explanation: Look at a few of the common main layer groups
End of explanation
d = %sql \
select formula,skeleton,hydrogens,charge,protonation,count(zinc_id) freq from zinc_clean_nonstandard group by \
(formula,skeleton,hydrogens,charge,protonation) \
order by freq desc limit 10;
d[:5]
tpl=d[0][:-1]
tpl = tuple(x if x is not None else '' for x in tpl)
print(tpl)
rows = %sql \
select zinc_id,smiles from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
(formula,skeleton,hydrogens,coalesce(charge,''),coalesce(protonation,'')) = :tpl
cids = [x for x,y in rows][:9]
ms = [Chem.MolFromSmiles(y) for x,y in rows][:9]
Draw.MolsToGridImage(ms,legends=cids)
Explanation: Charges
End of explanation
d = %sql \
select formula,skeleton,hydrogens,charge,protonation,stereo_bond,stereo_tet,stereo_m,stereo_s,count(zinc_id) freq \
from zinc_clean_nonstandard where stereo_bond is not null or stereo_tet is not null \
group by \
(formula,skeleton,hydrogens,charge,protonation,stereo_bond,stereo_tet,stereo_m,stereo_s) \
order by freq desc limit 10;
d[:5]
tpl=d[0][:-1]
tpl = tuple(x if x is not None else '' for x in tpl)
print(tpl)
rows = %sql \
select zinc_id,smiles from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
(formula,skeleton,hydrogens,\
coalesce(charge,''),coalesce(protonation,''),coalesce(stereo_bond,''),\
coalesce(stereo_tet,''),coalesce(stereo_m,''),coalesce(stereo_s,'')) = :tpl
cids = [x for x,y in rows]
ms = [Chem.MolFromSmiles(y) for x,y in rows]
Draw.MolsToGridImage(ms,legends=cids)
tpl=d[1][:-1]
tpl = tuple(x if x is not None else '' for x in tpl)
print(tpl)
rows = %sql \
select zinc_id,smiles from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
(formula,skeleton,hydrogens,\
coalesce(charge,''),coalesce(protonation,''),coalesce(stereo_bond,''),\
coalesce(stereo_tet,''),coalesce(stereo_m,''),coalesce(stereo_s,'')) = :tpl
cids = [x for x,y in rows]
ms = [Chem.MolFromSmiles(y) for x,y in rows]
Draw.MolsToGridImage(ms,legends=cids)
Explanation: Stereo grouping
End of explanation
td = %sql \
select t2.zinc_id,t2.nonstandard_inchi,t2.smiles from zinc_clean_nonstandard t1 join zinc_clean t2 using (zinc_id) \
where (formula,skeleton,hydrogens,charge)=\
('/C29H33N2','/c1-28(2)22-16-12-14-18-24(22)30(5)26(28)20-10-8-7-9-11-21-27-29(3,4)23-17-13-15-19-25(23)31(27)6',\
'/h7-21H,1-6H3','/q+1')
print(td)
cids = [x for x,y,z in td]
ms = [Chem.MolFromSmiles(z) for x,y,z in td]
Draw.MolsToGridImage(ms,legends=cids)
Explanation: an aside
I discovered this little InChI pathology while doing this work. I spent a good half hour trying to track down the RDKit bug that made it happen before realizing that it's by design. Note that this is with FixedH InChIs.
End of explanation
%sql \
select count(*) \
from zinc_clean_nonstandard where isotope is not null
Explanation: Sucks to be you if it's important to you that those molecules be different and you're using InChI.
Note that at least ZINC12405219 and ZINC19940218 are, according to ZINC, separately available from vendors
Isotopes
End of explanation
rows = %sql \
select zinc_id,smiles,nonstandard_inchi from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
fixedh_stereo_tet is not null and position('?' in fixedh_stereo_tet)<=0 and stereo_tet!=fixedh_stereo_tet
len(rows)
cids = [x for x,y,z in rows][:10]
ms = [Chem.MolFromSmiles(y) for x,y,z in rows][:10]
Draw.MolsToGridImage(ms,legends=cids)
Explanation: No need here, this set has no labelled compounds. That's likely a property of how the ZINC clean set was constructed.
examples where the tautomerism leads to new tetrahedral symmetry
End of explanation
rows = %sql \
select zinc_id,smiles,nonstandard_inchi from zinc_clean join zinc_clean_nonstandard using (zinc_id) where \
fixedh_stereo_bond is not null and fixedh_stereo_bond!='/b' and position('?' in fixedh_stereo_bond)<=0 and stereo_bond!=fixedh_stereo_bond
len(rows)
cids = [x for x,y,z in rows][:10]
ms = [Chem.MolFromSmiles(y) for x,y,z in rows][:10]
Draw.MolsToGridImage(ms,legends=cids)
Explanation: Not much interesting there. There's no simple query to find questionable tautomer motion. :-)
Examples where tautomerism leads to new bond stereochemistry
End of explanation |
2,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Import libraries
Step2: Load the database
Step3: Discard non-square images
Step4: Normalize
Step5: Class balance; define the train, val, and test sets
Step6: Neural network
Preliminaries
Step7: This block apparently does nothing to x_train and the related variables
Step9: Configure the network!
Step10: Training | Python Code:
from google.colab import drive
drive.mount('/content/drive')
Explanation: <a href="https://colab.research.google.com/github/kevinracso/01Tarea/blob/master/Copia_de_Copia_de_Preprocesamiento_y_Red_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Allows importing files from Google Drive
End of explanation
import numpy as np
import pandas as pd
import matplotlib
#%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from skimage import io, img_as_float
import scipy.interpolate as itp
import random
from __future__ import print_function
from time import time
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.cluster import DBSCAN
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning) # Comando para ignorar los warnings
import keras # Importando la libreria keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten # Comandos de keras que importan las capas dense, dropout y flatten
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D # Comandos de keras que importan las capas convolucionales y maxpooling2D
from keras import backend as K
from keras.layers.advanced_activations import LeakyReLU
Explanation: Import libraries
End of explanation
# Cargando base de datos Alerce.
data = pd.read_pickle('/content/drive/My Drive/Supernova/ALERCE_stamps.pkl') # Base de datos original
#data = pd.read_pickle('/content/drive/My Drive/Supernova/ALERCE_stamps_1k.pkl') # Base de datos reducida
imgs = data['images'] # Lista de 31727 imagenes
labels = data['labels'] # Lista de 31727 labels
img_0 = imgs[0] # Primera imagen de imgs, ndarray 63x63, 3 canales
label_0 = labels[0] # Primer label, entero
np.shape(imgs)
Explanation: Load the database
End of explanation
shap_def = (63, 63, 3) # Estructura correcta de imagenes, cuadradas, 3 canales
img_sq = [] # Inicializar lista vacia para almacenar imagenes que tienen
# estructura correcta
label_sq = [] # Inicializar lista vacia para almacenar labels de imagenes que
# tienen estructura correcta
for j in list(range(len(imgs))): # recorrer imagenes
#print('Revisando si la imagen '+str(j)+' es cuadrada')
if np.shape(imgs[j]) == shap_def: # si tiene estructura correcta, agregar
img_sq.append(imgs[j])
label_sq.append(labels[j])
Explanation: Discard non-square images
End of explanation
img_sq = np.asarray(img_sq)
label_sq = np.asarray(label_sq)
# Funcion auxiliar que normaliza UNA matriz
def Norm_Data_matrix(matrix):
max_data = np.nanmax(matrix)
min_data = np.nanmin(matrix)
Norm_matrix = (matrix-min_data)/(max_data-min_data)
return Norm_matrix
# Normalizacion de las imagenes
img_norm = []
for i in range(len(img_sq)):
muestra = img_sq[i]
#print('Normalizando muestra ' + str(i))
muestra[:, :, 0] = Norm_Data_matrix(muestra[:, :, 0])
muestra[:, :, 1] = Norm_Data_matrix(muestra[:, :, 1])
muestra[:, :, 2] = Norm_Data_matrix(muestra[:, :, 2])
img_norm.append(muestra)
img_norm = np.array(img_norm) # Convertir a arreglo numpy
nan_index = np.argwhere(np.isnan(img_norm))
#print('Indices de nan en arreglo normalizado', nan_index)
img_no_nan = np.nan_to_num(img_norm) # Convertir nan a cero
nonan_index = np.argwhere(np.isnan(img_no_nan))
#print('Indices de nan en arreglo al eliminar nan', nonan_index)
np.shape(img_no_nan) # Dimensiones de la base corregida
Explanation: Normalize
End of explanation
"Objetos a tratar"
" Clases = {0: 'AGN', 1:'SN', 2:'VS', 3:'asteroid', 4:'bogus'}"
"AGN: Active Galactic Nuclei"
"SN: Supernova"
"VS: Variable Star"
"bogus: Artifacts"
img_no_nan # imagenes, np array
label_sq # labels, np array
unique, counts = np.unique(label_sq, return_counts=True)
class_counter = dict(zip(unique, counts)) # muestra cuantas veces se repite cada clase
print('Numero de muestras por clase:', class_counter)
#test = np.concatenate((img_no_nan, label_sq), axis=0)
#test_1 = np.concatenate((img_no_nan, label_sq), axis=1)
s_index = np.arange(label_sq.shape[0]) # indexar arreglos
np.random.shuffle(s_index) # revolver
# label_sq.shape[0], label_sq.shape
img_no_nan = img_no_nan[s_index] # reemplazar arreglos originales por arreglos revueltos
label_sq = label_sq[s_index]
"Encontrar ubicaciones de todos los tipos de objeto"
agn_ind = np.squeeze(np.argwhere(label_sq == 0))
sn_ind = np.squeeze(np.argwhere(label_sq == 1))
vs_ind = np.squeeze(np.argwhere(label_sq == 2))
ast_ind = np.squeeze(np.argwhere(label_sq == 3))
bog_ind = np.squeeze(np.argwhere(label_sq == 4))
"Una lista para cada tipo de objeto"
img_AGN = img_no_nan[agn_ind]
img_SN = img_no_nan[sn_ind]
img_VS = img_no_nan[vs_ind]
img_AST = img_no_nan[ast_ind]
img_bog = img_no_nan[bog_ind]
# Separaciรณn de la base de datos en un conjunto de entrenamiento y validaciรณn.
train_size = int(len(img_no_nan)*.75) # Definiendo el tamaรฑo del conjunto de entrenamiento con los requisitos solicitados
train_total = int(len(img_no_nan)) # Tamaรฑo total del conjunto original.
x_train = img_no_nan[0:train_size,:] # Nuevo conjunto horizontal de entrenamiento.
x_val = img_no_nan[train_size:train_total,:] # Conjunto de validacion horizontal. (imagenes)
y_train = y_data[:train_size] # Nuevo conjunto vertical de entrenamiento. (clases)
y_val = y_data[train_size:] # Conjunto de validacion vertical (clases)
input_shape = (img_rows, img_cols, 3)  # note: y_data, img_rows and img_cols are only defined in the next cell, so that cell must run first
Explanation: Class balance; define the train, val, and test sets
End of explanation
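The class counts printed above are clearly imbalanced. One common option, sketched here as an assumption (it is not used in the original notebook), is to derive per-class weights and later pass them to model.fit(..., class_weight=...):
# inverse-frequency weights per class (hypothetical helper, not used by the training cell below)
counts = np.bincount(label_sq)
class_weight = {i: float(len(label_sq)) / (len(counts) * c) for i, c in enumerate(counts)}
print('Suggested class_weight:', class_weight)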
# Para poder procesar las clases es necesario pasarlas a one hot encoding
# Funcion para tener las clases como one hot encoding
def one_hot(a, num_classes):
return np.squeeze(np.eye(num_classes)[a.reshape(-1)])
onehot_labels = one_hot(label_sq, 5)
batch_size = 128 # Definiendo el tamaรฑo de los batches
num_classes = 5 # Numero de clases del conjunto
epochs = 12 # Cantidad de epocas
#pool_value = 0.5
# input image dimensions
img_rows, img_cols = 63, 63 # Definiendo las filas y columnas de las imagenes
y_data = onehot_labels
np.shape(img_no_nan)
Explanation: Neural network
Preliminaries
End of explanation
x_train = x_train.astype('float32')
x_val = x_val.astype('float32')
x_train /= 255
x_val /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_val.shape[0], 'test samples')
Explanation: This block apparently does nothing to x_train and the related variables
End of explanation
CONVOLUTIONAL NEURAL NETWORK PROGRAMMING!!
The following blocks configure a CNN using the architecture given in the paper.
"Convierten a one hot encoding"
y_train_1 = keras.utils.to_categorical(y_train, num_classes) # Categorizaciรณn de las muestras del conjunto vertical de entrenamiento.
y_val_1 = keras.utils.to_categorical(y_val, num_classes) # Categorizaciรณn de las muestras del conjunto vertical de prueba.
"Convierten a one hot encoding"
time_i = time()
model = Sequential() # Inicio de la configuracion del clasificador , a partir de este punto se le aรฑadiran las capas que conformaran la CNN
model.add(Conv2D(32, kernel_size=(4, 4), # Aรฑadiendo una capa convolucional al modelo con un kernel de 3x3
activation = None, # Con un funcion de activacion relu y las dimensiones configuradas en el bloque anterior
input_shape=input_shape))
model.add(Conv2D(32, (3, 3), activation = None)) # Aรฑadiendo otra capa convolucional con una funcion de activacion relu
model.add(MaxPooling2D(pool_size=(2, 2),strides = 2)) # Utilizando el mรฉtodo maxpooling en esta red.
model.add(Conv2D(64, (3, 3), activation = None))
model.add(Conv2D(64, (3, 3), activation = None))
model.add(Conv2D(64, (3, 3), activation = None))
model.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
#model.add(Dropout(0.5)) # Aรฑadiendo Dropout a la red
model.add(Flatten()) # Este comando remueve todas las dimensiones de un tensor exceptuano una.
#model.add(Dense(128, activation=LeakyReLU)) # En esta capa se implementa la funcion de activacion relu a la capa anterior de la la red neuronal.
model.add(Dense(64)) # 64 salidas
model.add(LeakyReLU()) # LeakyReLU
model.add(Dropout(0.5)) # Se agrega la tecnica de Dropout para abordar el overfitting.
model.add(Dense(64))
model.add(LeakyReLU()) # LeakyReLU
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax')) # Se aplica a la red siendo esta vez usando la funcion softmax.
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy']) # Con este coomando se configura el modelo para el entrenamiento.
Explanation: Configure the network!
End of explanation
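Optionally (not in the original notebook), the compiled architecture can be checked against the paper before training:
model.summary()  # prints the layer-by-layer architecture and parameter counts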
# Entrenamiento del modelo
model_train = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_val, y_val)) # Comando para entrenar el modelo.
model_train
score = model.evaluate(x_val, y_val, verbose=0) # Con este comando se obtiene el valor de perdida y de las metricas en el modo de prueba.
time_f = time()
time_execution = time_f - time_i # Con este comando se obtiene el valor de perdida y de las metricas en el modo de prueba.
print('Valid loss:', score[0]) # Se imprime la perdida para la prueba realizada.
print('Valid accuracy:', score[1]) # Se imprimre el accuracy para la prueba realizada.
print('Tiempo de entrenamiento ' + str(time_execution))
import numpy as np
x = np.array([0, 3, 1, 2, 4, 1, 0, 4])
# x = np.reshape(x, (4, 25))
print(x)
ind = np.squeeze(np.argwhere(x%7 == 0))
print(ind)
#y = np.squeeze(x[ind])
y = x[ind]
print(y)
z = keras.utils.to_categorical(x, 5)
print(z)
Explanation: Training
End of explanation |
2,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reproducible Experiments with pynoddy
All pynoddy experiments can be defined in a Python script, and if all settings are appropriate, then this script can be re-run to obtain a reproduction of the results. However, it is often more convenient to encapsulate all elements of an experiment within one class. We show here how this is done in the pynoddy.experiment.Experiment class and how this class can be used to define simple reproducible experiments with kinematic models.
Step1: Defining an experiment
We are considering the following scenario
Step2: For simpler visualisation in this notebook, we will analyse the following steps in a section view of the model.
We consider a section in y-direction through the model
Step3: Before we start to draw random realisations of the model, we should first store the base state of the model for later reference. This is simply possible with the freeze() method which stores the current state of the model as the "base-state"
Step4: We now initialise the random generator. We can directly assign a random seed to simplify reproducibility (note that this is not essential, as it would be for the definition in a script function
Step5: The next step is to define probability distributions to the relevant event parameters. Let's first look at the different events
Step6: Next, we define the probability distributions for the uncertain input parameters | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
# here the usual imports. If any of the imports fails,
# make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop'
# or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.experiment
reload(pynoddy.experiment)
rcParams.update({'font.size': 15})
Explanation: Reproducible Experiments with pynoddy
All pynoddy experiments can be defined in a Python script, and if all settings are appropriate, then this script can be re-run to obtain a reproduction of the results. However, it is often more convenient to encapsulate all elements of an experiment within one class. We show here how this is done in the pynoddy.experiment.Experiment class and how this class can be used to define simple reproducible experiments with kinematic models.
End of explanation
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
model_url = 'http://tectonique.net/asg/ch3/ch3_7/his/typeb.his'
ue = pynoddy.experiment.Experiment(url = model_url)
Explanation: Defining an experiment
We are considering the following scenario: we defined a kinematic model of a prospective geological unit at depth. As we know that the estimates of the (kinematic) model parameters contain a high degree of uncertainty, we would like to represent this uncertainty with the model.
Our approach is here to perform a randomised uncertainty propagation analysis with a Monte Carlo sampling method. Results should be presented in several figures (2-D slice plots and a VTK representation in 3-D).
To perform this analysis, we need to perform the following steps (see main paper for more details):
Define kinematic model parameters and construct the initial (base) model;
Assign probability distributions (and possible parameter correlations) to relevant uncertain input parameters;
Generate a set of n random realisations, repeating the following steps:
Draw a randomised input parameter set from the parameter distribution;
Generate a model with this parameter set;
Analyse the generated model and store results;
Finally: perform postprocessing, generate figures of results
It would be possible to write a Python script to perform all of these steps in one go. However, we will here take another path and use the implementation in a Pynoddy Experiment class. Initially, this requires more work and a careful definition of the experiment - but, finally, it will enable a higher level of flexibility, extensibility, and reproducibility.
Loading an example model from the Atlas of Structural Geophysics
As in the example for geophysical potential-field simulation, we will use a model from the Atlas of Structural Geophysics as an example model for this simulation. We use a model for a fold interference structure. A discretised 3-D version of this model is presented in the figure below. The model represents a fold interference pattern of "Type 1" according to the definition of Ramsey (1967).
Instead of loading the model into a history object, we are now directly creating an experiment object:
End of explanation
ue.write_history("typeb_tmp3.his")
ue.write_history("typeb_tmp2.his")
ue.change_cube_size(100)
ue.plot_section('y')
Explanation: For simpler visualisation in this notebook, we will analyse the following steps in a section view of the model.
We consider a section in y-direction through the model:
End of explanation
ue.freeze()
Explanation: Before we start to draw random realisations of the model, we should first store the base state of the model for later reference. This is simply possible with the freeze() method which stores the current state of the model as the "base-state":
End of explanation
ue.set_random_seed(12345)
Explanation: We now initialise the random generator. We can directly assign a random seed to simplify reproducibility (note that this is not essential, as it would be for the definition in a script function: the random state is preserved within the model and could be retrieved at a later stage, as well!):
End of explanation
ue.info(events_only = True)
ev2 = ue.events[2]
ev2.properties
Explanation: The next step is to define probability distributions to the relevant event parameters. Let's first look at the different events:
End of explanation
param_stats = [{'event' : 2,
'parameter': 'Amplitude',
'stdev': 100.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'Wavelength',
'stdev': 500.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 500.0,
'type': 'normal'}]
ue.set_parameter_statistics(param_stats)
resolution = 100
ue.change_cube_size(resolution)
tmp = ue.get_section('y')
prob_4 = np.zeros_like(tmp.block[:,:,:])
n_draws = 100
for i in range(n_draws):
ue.random_draw()
tmp = ue.get_section('y', resolution = resolution)
prob_4 += (tmp.block[:,:,:] == 4)
# Normalise
prob_4 = prob_4 / float(n_draws)
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(111)
ax.imshow(prob_4.transpose()[:,0,:],
origin = 'lower left',
interpolation = 'none')
plt.title("Estimated probability of unit 4")
plt.xlabel("x (E-W)")
plt.ylabel("z")
Explanation: Next, we define the probability distributions for the uncertain input parameters:
End of explanation |
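A common follow-up, sketched here as an assumption rather than taken from the original notebook, is to turn the estimated unit probability into a per-cell uncertainty measure using the binary Shannon entropy:
# binary entropy of the estimated probability of unit 4 (high values = most uncertain cells)
p = np.clip(prob_4, 1e-6, 1.0 - 1e-6)
cell_entropy = -(p*np.log2(p) + (1.0 - p)*np.log2(1.0 - p))
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(111)
ax.imshow(cell_entropy.transpose()[:,0,:], origin = 'lower', interpolation = 'none')
plt.title("Binary entropy of the unit 4 probability")
plt.xlabel("x (E-W)")
plt.ylabel("z")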
2,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2> ppm = 2.5 </h2>
There weren't enough retention-time correlation groups here, so I should just switch to linear interpolation (not loess). So this is probably kinda meh data in real life... (though comparable in AUC to other ppm settings)
Step1: <h2> Import the dataframe and remove any features that are all zero </h2>
Step2: <h2> Get mappings between sample names, file names, and sample classes </h2>
Step3: <h2> Plot the distribution of classification accuracy across multiple cross-validation splits - Kinda Dumb</h2>
Turns out doing this is kind of dumb, because you're not taking into account the prediction score your classifier assigned. Use AUCs instead. You want to penalize your classifier more when it is really confident and wrong than when it is hesitant and wrong.
Step4: <h2> pqn normalize your features </h2>
Step5: <h2>Random Forest & adaBoost with PQN-normalized data</h2>
Step6: <h2> RF & adaBoost with PQN-normalized, log-transformed data </h2>
Turns out a monotonic transformation doesn't really affect any of these things.
I guess they're already close to unit variance...?
Step7: <h2> Great, you can classify things. But make null models and do a sanity check to make
sure you aren't just classifying garbage </h2>
Step8: <h2> Let's check out some PCA plots </h2>
Step9: <h2> What about with all three classes? </h2> | Python Code:
import time
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.cross_validation import cross_val_score
#from sklearn.model_selection import StratifiedShuffleSplit
#from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.utils import shuffle
from scipy import interp
%matplotlib inline
def remove_zero_columns(X, threshold=1e-20):
# convert zeros to nan, drop all nan columns, the replace leftover nan with zeros
X_non_zero_colum = X.replace(0, np.nan).dropna(how='all', axis=1).replace(np.nan, 0)
#.dropna(how='all', axis=0).replace(np.nan,0)
return X_non_zero_colum
def zero_fill_half_min(X, threshold=1e-20):
# Fill zeros with 1/2 the minimum value of that column
# input dataframe. Add only to zero values
# Get a vector of 1/2 minimum values
half_min = X[X > threshold].min(axis=0)*0.5
# Add the half_min values to a dataframe where everything that isn't zero is NaN.
# then convert NaN's to 0
fill_vals = (X[X < threshold] + half_min).fillna(value=0)
# Add the original dataframe to the dataframe of zeros and fill-values
X_zeros_filled = X + fill_vals
return X_zeros_filled
toy = pd.DataFrame([[1,2,3,0],
[0,0,0,0],
[0.5,1,0,0]], dtype=float)
toy_no_zeros = remove_zero_columns(toy)
toy_filled_zeros = zero_fill_half_min(toy_no_zeros)
print toy
print toy_no_zeros
print toy_filled_zeros
Explanation: <h2> ppm = 2.5 </h2>
There weren't enough retention-time correlation groups here, so I should just switch to linear interpolation (not loess). So this is probably kinda meh data in real life... (though comparable in AUC to other ppm settings)
End of explanation
### Subdivide the data into a feature table
data_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/processed/MTBLS315/'\
'uhplc_pos/xcms_camera_results.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
# Make a new index of mz:rt
mz = df.loc[:,"mz"].astype('str')
rt = df.loc[:,"rt"].astype('str')
idx = mz+':'+rt
df.index = idx
df
# separate samples from xcms/camera things to make feature table
not_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax',
'npeaks', 'uhplc_pos',
'isotopes', 'adduct', 'pcgroup' ]
samples_list = df.columns.difference(not_samples)
mz_rt_df = df[not_samples]
# convert to samples x features
X_df_raw = df[samples_list].T
# Remove zero-full columns and fill zeroes with 1/2 minimum values
X_df = remove_zero_columns(X_df_raw)
X_df_zero_filled = zero_fill_half_min(X_df)
print "original shape: %s \n# zeros: %f\n" % (X_df_raw.shape, (X_df_raw < 1e-20).sum().sum())
print "zero-columns repalced? shape: %s \n# zeros: %f\n" % (X_df.shape,
(X_df < 1e-20).sum().sum())
print "zeros filled shape: %s \n#zeros: %f\n" % (X_df_zero_filled.shape,
(X_df_zero_filled < 1e-20).sum().sum())
# Convert to numpy matrix to play nicely with sklearn
X = X_df.as_matrix()
print X.shape
Explanation: <h2> Import the dataframe and remove any features that are all zero </h2>
End of explanation
# Get mapping between sample name and assay names
path_sample_name_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\
'MTBLS315/metadata/a_UPLC_POS_nmfi_and_bsi_diagnosis.txt'
# Index is the sample name
sample_df = pd.read_csv(path_sample_name_map,
sep='\t', index_col=0)
sample_df = sample_df['MS Assay Name']
sample_df.shape
print sample_df.head(10)
# get mapping between sample name and sample class
path_sample_class_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\
'MTBLS315/metadata/s_NMFI and BSI diagnosis.txt'
class_df = pd.read_csv(path_sample_class_map,
sep='\t')
# Set index as sample name
class_df.set_index('Sample Name', inplace=True)
class_df = class_df['Factor Value[patient group]']
print class_df.head(10)
# convert all non-malarial classes into a single class
# (collapse non-malarial febrile illness and bacteremia together)
class_map_df = pd.concat([sample_df, class_df], axis=1)
class_map_df.rename(columns={'Factor Value[patient group]': 'class'}, inplace=True)
class_map_df
binary_class_map = class_map_df.replace(to_replace=['non-malarial febrile illness', 'bacterial bloodstream infection' ],
value='non-malarial fever')
binary_class_map
# convert classes to numbers
le = preprocessing.LabelEncoder()
le.fit(binary_class_map['class'])
y = le.transform(binary_class_map['class'])
Explanation: <h2> Get mappings between sample names, file names, and sample classes </h2>
End of explanation
def rf_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,
n_estimators=1000):
cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size,
random_state=random_state)
clf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
scores = cross_val_score(clf, X, y, cv=cross_val_skf)
sns.violinplot(scores,inner='stick')
rf_violinplot(X,y)
# TODO - Switch to using caret for this bs..?
# Do multi-fold cross validation for adaboost classifier
def adaboost_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,
n_estimators=200):
cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf = AdaBoostClassifier(n_estimators=n_estimators, random_state=random_state)
scores = cross_val_score(clf, X, y, cv=cross_val_skf)
sns.violinplot(scores,inner='stick')
adaboost_violinplot(X,y)
# TODO PQN normalization, and log-transformation,
# and some feature selection (above certain threshold of intensity, use principal components), et
def pqn_normalize(X, integral_first=False, plot=False):
'''
Take a feature table and run PQN normalization on it
'''
# normalize by sum of intensities in each sample first. Not necessary
if integral_first:
sample_sums = np.sum(X, axis=1)
X = (X / sample_sums[:,np.newaxis])
# Get the median value of each feature across all samples
mean_intensities = np.median(X, axis=0)
# Divide each feature by the median value of each feature -
# these are the quotients for each feature
X_quotients = (X / mean_intensities[np.newaxis,:])
if plot: # plot the distribution of quotients from one sample
for i in range(1,len(X_quotients[:,1])):
print 'allquotients reshaped!\n\n',
#all_quotients = X_quotients.reshape(np.prod(X_quotients.shape))
all_quotients = X_quotients[i,:]
print all_quotients.shape
x = np.random.normal(loc=0, scale=1, size=len(all_quotients))
sns.violinplot(all_quotients)
plt.title("median val: %f\nMax val=%f" % (np.median(all_quotients), np.max(all_quotients)))
plt.plot( title="median val: ")#%f" % np.median(all_quotients))
plt.xlim([-0.5, 5])
plt.show()
# Define a quotient for each sample as the median of the feature-specific quotients
# in that sample
sample_quotients = np.median(X_quotients, axis=1)
# Quotient normalize each samples
X_pqn = X / sample_quotients[:,np.newaxis]
return X_pqn
# Make a fake sample, with 2 samples at 1x and 2x dilutions
X_toy = np.array([[1,1,1,],
[2,2,2],
[3,6,9],
[6,12,18]], dtype=float)
print X_toy
print X_toy.reshape(1, np.prod(X_toy.shape))
X_toy_pqn_int = pqn_normalize(X_toy, integral_first=True, plot=True)
print X_toy_pqn_int
print '\n\n\n'
X_toy_pqn = pqn_normalize(X_toy)
print X_toy_pqn
Explanation: <h2> Plot the distribution of classification accuracy across multiple cross-validation splits - Kinda Dumb</h2>
Turns out doing this is kind of dumb, because you're not taking into account the prediction score your classifier assigned. Use AUCs instead. You want to penalize your classifier more when it is really confident and wrong than when it is hesitant and wrong.
End of explanation
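A tiny illustration of that point (not from the original analysis): a probability-aware score such as log-loss punishes a confident wrong call much harder than a borderline one, while thresholded accuracy treats them the same.
from sklearn.metrics import log_loss
y_true_demo = [1, 0, 1, 0]
p_confident_wrong = [0.05, 0.95, 0.10, 0.90]  # wrong on every sample, very confident
p_hesitant_wrong = [0.45, 0.55, 0.40, 0.60]   # wrong on every sample, barely
print 'log-loss, confident and wrong: %.2f' % log_loss(y_true_demo, p_confident_wrong)
print 'log-loss, hesitant and wrong: %.2f' % log_loss(y_true_demo, p_hesitant_wrong)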
X_pqn = pqn_normalize(X)
Explanation: <h2> pqn normalize your features </h2>
End of explanation
rf_violinplot(X_pqn, y)
# Do multi-fold cross validation for adaboost classifier
adaboost_violinplot(X_pqn, y)
Explanation: <h2>Random Forest & adaBoost with PQN-normalized data</h2>
End of explanation
X_pqn_nlog = np.log(X_pqn)
rf_violinplot(X_pqn_nlog, y)
adaboost_violinplot(X_pqn_nlog, y)
def roc_curve_cv(X, y, clf, cross_val,
path='/home/irockafe/Desktop/roc.pdf',
save=False, plot=True):
t1 = time.time()
# collect vals for the ROC curves
tpr_list = []
mean_fpr = np.linspace(0,1,100)
auc_list = []
# Get the false-positive and true-positive rate
for i, (train, test) in enumerate(cross_val):
clf.fit(X[train], y[train])
y_pred = clf.predict_proba(X[test])[:,1]
# get fpr, tpr
fpr, tpr, thresholds = roc_curve(y[test], y_pred)
roc_auc = auc(fpr, tpr)
#print 'AUC', roc_auc
#sns.plt.plot(fpr, tpr, lw=10, alpha=0.6, label='ROC - AUC = %0.2f' % roc_auc,)
#sns.plt.show()
tpr_list.append(interp(mean_fpr, fpr, tpr))
tpr_list[-1][0] = 0.0
auc_list.append(roc_auc)
if (i % 10 == 0):
print '{perc}% done! {time}s elapsed'.format(perc=100*float(i)/cross_val.n_iter, time=(time.time() - t1))
# get mean tpr and fpr
mean_tpr = np.mean(tpr_list, axis=0)
# make sure it ends up at 1.0
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(auc_list)
if plot:
# plot mean auc
plt.plot(mean_fpr, mean_tpr, label='Mean ROC - AUC = %0.2f $\pm$ %0.2f' % (mean_auc,
std_auc),
lw=5, color='b')
# plot luck-line
plt.plot([0,1], [0,1], linestyle = '--', lw=2, color='r',
label='Luck', alpha=0.5)
# plot 1-std
std_tpr = np.std(tpr_list, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=0.2,
label=r'$\pm$ 1 stdev')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve, {iters} iterations of {cv} cross validation'.format(
iters=cross_val.n_iter, cv='{train}:{test}'.format(test=cross_val.test_size, train=(1-cross_val.test_size)))
)
plt.legend(loc="lower right")
if save:
plt.savefig(path, format='pdf')
plt.show()
return tpr_list, auc_list, mean_fpr
rf_estimators = 1000
n_iter = 3
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
rf_graph_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\
isaac_feature_tables/uhplc_pos/rf_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=rf_estimators, cv=n_iter)
print cross_val_rf.n_iter
print cross_val_rf.test_size
tpr_vals, auc_vals, mean_fpr = roc_curve_cv(X_pqn, y, clf_rf, cross_val_rf,
path=rf_graph_path, save=False)
# For adaboosted
n_iter = 3
test_size = 0.3
random_state = 1
adaboost_estimators = 200
adaboost_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\
isaac_feature_tables/uhplc_pos/adaboost_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=adaboost_estimators,
cv=n_iter)
cross_val_adaboost = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf = AdaBoostClassifier(n_estimators=adaboost_estimators, random_state=random_state)
adaboost_tpr, adaboost_auc, adaboost_fpr = roc_curve_cv(X_pqn, y, clf, cross_val_adaboost,
path=adaboost_path)
Explanation: <h2> RF & adaBoost with PQN-normalized, log-transformed data </h2>
Turns out a monotonic transformation doesn't really affect any of these things.
I guess they're already close to unit variance...? (More likely: tree-based learners such as random forests and boosted trees only depend on the ordering of feature values, so a monotonic log transform leaves their splits essentially unchanged.)
End of explanation
# Make a null model AUC curve
def make_null_model(X, y, clf, cross_val, random_state=1, num_shuffles=5, plot=True):
'''
Runs the true model, then sanity-checks by:
Shuffles class labels and then builds cross-validated ROC curves from them.
Compares true AUC vs. shuffled auc by t-test (assumes normality of AUC curve)
'''
null_aucs = []
print y.shape
print X.shape
tpr_true, auc_true, fpr_true = roc_curve_cv(X, y, clf, cross_val)
# shuffle y lots of times
for i in range(0, num_shuffles):
#Iterate through the shuffled y vals and repeat with appropriate params
# Retain the auc vals for final plotting of distribution
y_shuffle = shuffle(y)
cross_val.y = y_shuffle
cross_val.y_indices = y_shuffle
print 'Number of differences b/t original and shuffle: %s' % (y == cross_val.y).sum()
# Get auc values for number of iterations
tpr, auc, fpr = roc_curve_cv(X, y_shuffle, clf, cross_val, plot=False)
null_aucs.append(auc)
#plot the outcome
if plot:
flattened_aucs = [j for i in null_aucs for j in i]
my_dict = {'true_auc': auc_true, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy)
# Plot distribution of AUC vals
plt.title("Distribution of aucs")
#sns.plt.ylabel('count')
plt.xlabel('AUC')
#sns.plt.plot(auc_true, 0, color='red', markersize=10)
plt.show()
# Do a quick t-test to see if odds of randomly getting an AUC that good
return auc_true, null_aucs
# Make a null model AUC curve & compare it to null-model
# Random forest magic!
rf_estimators = 1000
n_iter = 50
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
true_auc, all_aucs = make_null_model(X_pqn, y, clf_rf, cross_val_rf, num_shuffles=5)
# make dataframe from true and false aucs
flattened_aucs = [j for i in all_aucs for j in i]
my_dict = {'true_auc': true_auc, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
print df_tidy.head()
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy, bw=0.7)
plt.show()
Explanation: <h2> Great, you can classify things. But make null models and do a sanity check to make
sure you aren't just classifying garbage </h2>
End of explanation
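One rough way to summarise the comparison (an added sketch, not in the original notebook) is an empirical permutation-style p-value: the fraction of label-shuffled AUCs that reach the mean true AUC.
null_flat = np.array([val for run in all_aucs for val in run])
true_mean = np.mean(true_auc)
print 'mean true AUC: %.3f' % true_mean
print 'fraction of shuffled AUCs >= mean true AUC: %.3f' % np.mean(null_flat >= true_mean)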
from sklearn.decomposition import PCA
# Check PCA of things
def PCA_plot(X, y, n_components, plot_color, class_nums, class_names, title='PCA'):
pca = PCA(n_components=n_components)
X_pca = pca.fit(X).transform(X)
print zip(plot_color, class_nums, class_names)
for color, i, target_name in zip(plot_color, class_nums, class_names):
# plot one class at a time, first plot all classes y == 0
#print color
#print y == i
xvals = X_pca[y == i, 0]
print xvals.shape
yvals = X_pca[y == i, 1]
plt.scatter(xvals, yvals, color=color, alpha=0.8, label=target_name)
plt.legend(bbox_to_anchor=(1.01,1), loc='upper left', shadow=False)#, scatterpoints=1)
plt.title('PCA of Malaria data')
plt.show()
PCA_plot(X, y, 2, ['red', 'blue'], [0,1], ['malaria', 'non-malaria fever'])
Explanation: <h2> Let's check out some PCA plots </h2>
End of explanation
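It might also help to know how much variance the two plotted components actually capture (an added check, not in the original notebook):
pca_check = PCA(n_components=2).fit(X)
print 'Explained variance ratio of the first two components:', pca_check.explained_variance_ratio_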
# convert classes to numbers
le = preprocessing.LabelEncoder()
le.fit(class_map_df['class'])
y_three_class = le.transform(class_map_df['class'])
print class_map_df.head(10)
print y_three_class
print X.shape
print y_three_class.shape
y_labels = np.sort(class_map_df['class'].unique())
print y_labels
colors = ['green', 'red', 'blue']
print np.unique(y_three_class)
PCA_plot(X, y_three_class, 2, colors, np.unique(y_three_class), y_labels)
Explanation: <h2> What about with all three classes? </h2>
End of explanation |
2,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Timothy Helton
<br>
<font color="red">
NOTE
Step1: Exercise 1
This question should be answered using the Weekly data set. This data is similar in nature to the Smarket data from earlier, except that it contains 1,089
weekly returns for 21 years, from the beginning of 1990 to the end of
2010.
Produce some numerical and graphical summaries of the Weekly
data. Do there appear to be any patterns?
Use the full data set to perform a logistic regression with
Direction as the response and the five lag variables plus Volume
as predictors. Use the summary function to print the results. Do
any of the predictors appear to be statistically significant? If so,
which ones?
Compute the confusion matrix and overall fraction of correct
predictions. Explain what the confusion matrix is telling you
about the types of mistakes made by logistic regression.
Now fit the logistic regression model using a training data period
from 1990 to 2008, with Lag2 as the only predictor. Compute the
confusion matrix and the overall fraction of correct predictions
for the held out data (that is, the data from 2009 and 2010).
Repeat (4) using LDA.
Repeat (4) using QDA.
Repeat (4) using KNN with K = 1.
Which of these methods appears to provide the best results on
this data?
Experiment with different combinations of predictors, including
possible transformations and interactions, for each of the
methods. Report the variables, method, and associated confusion
matrix that appears to provide the best results on the held
out data. Note that you should
1. Produce some numerical and graphical summaries of the Weekly
data. Do there appear to be any patterns?
Step2: FINDINGS
There do not appear to be noticeable patterns in the dataset.
All field variables except volume appear to follow a Gaussian distribution.
2. Use the full data set to perform a logistic regression with
Direction as the response and the five lag variables plus Volume
as predictors. Use the summary function to print the results. Do
any of the predictors appear to be statistically significant? If so,
which ones?
Step3: FINDINGS
The intercept and lag2 features have P-values below the 0.05 threshold and appear statistically significant.
3. Compute the confusion matrix and overall fraction of correct
Step4: FINDINGS
The model is not well suited to the data.
The Precision measures the accuracy of the Positive predictions.
$$\frac{T_p}{T_p + F_p}$$
The Recall measures the fraction of the model correctly identified.
$$\frac{T_p}{T_p + F_n}$$
The F1-score is the harmonic mean of the precision and recall.
Harmonic Mean is used when the average of rates is desired.
$$\frac{2 \times Precision \times Recall}{Precision + Recall}$$
The Support is the total number of each class.
It is the sum of each row of the confusion matrix.
4. Now fit the logistic regression model using a training data period
Step5: FINDINGS
Using 80% of the data as a training set did not improve the model's accuracy.
Step6: FINDINGS
Testing the model on the remaining 20% of the data yielded a result worse than just randomly guessing.
5. Repeat (4) using LDA.
Step7: FINDINGS
This model is extremely accurate.
6. Repeat (4) using QDA.
Step8: FINDINGS
This model is better than the logistic regression, but not as good as the LDA model.
7. Repeat (4) using KNN with K = 1.
Step9: FINDINGS
This model is better than the logistic regression, but not as good as the QDA.
8. Which of these methods appears to provide the best results on
this data?
The model accuracy in descending order is the following
Step10: 2. Explore the data graphically in order to investigate the association
between mpg01 and the other features. Which of the other
features seem most likely to be useful in predicting mpg01? Scatterplots
and boxplots may be useful tools to answer this question.
Describe your findings.
Step11: FINDINGS
The following features appear to have a direct impact on the vehicle's gas mileage.
Displacement
Cylinders are related to Displacement and will not be included.
Horsepower
Weight
Origin
3. Split the data into a training set and a test set.
Step12: 4. Perform LDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
Step13: 5. Perform QDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
Step14: 6. Perform logistic regression on the training data in order to predict
mpg01 using the variables that seemed most associated with
mpg01 in (2). What is the test error of the model obtained?
Step15: 7. Perform KNN on the training data, with several values of K, in
order to predict mpg01. Use only the variables that seemed most
associated with mpg01 in (2). What test errors do you obtain?
Which value of K seems to perform the best on this data set? | Python Code:
from k2datascience import classification
from k2datascience import plotting
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
Explanation: Classification
Timothy Helton
<br>
<font color="red">
NOTE:
<br>
This notebook uses code found in the
<a href="https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/classification.py">
<strong>k2datascience.classification</strong></a> module.
To execute all the cells do one of the following items:
<ul>
<li>Install the k2datascience package to the active Python interpreter.</li>
<li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li>
<li>Create a link to the classification.py file in the same directory as this notebook.</li>
</font>
Imports
End of explanation
weekly = classification.Weekly()
weekly.data.info()
weekly.data.describe()
weekly.data.head()
plotting.correlation_heatmap_plot(
data=weekly.data, title='Weekly Stockmarket')
plotting.correlation_pair_plot(
weekly.data, title='Weekly Stockmarket')
Explanation: Exercise 1
This question should be answered using the Weekly data set. This data is similar in nature to the Smarket data from earlier, except that it contains 1,089
weekly returns for 21 years, from the beginning of 1990 to the end of
2010.
Produce some numerical and graphical summaries of the Weekly
data. Do there appear to be any patterns?
Use the full data set to perform a logistic regression with
Direction as the response and the five lag variables plus Volume
as predictors. Use the summary function to print the results. Do
any of the predictors appear to be statistically significant? If so,
which ones?
Compute the confusion matrix and overall fraction of correct
predictions. Explain what the confusion matrix is telling you
about the types of mistakes made by logistic regression.
Now fit the logistic regression model using a training data period
from 1990 to 2008, with Lag2 as the only predictor. Compute the
confusion matrix and the overall fraction of correct predictions
for the held out data (that is, the data from 2009 and 2010).
Repeat (4) using LDA.
Repeat (4) using QDA.
Repeat (4) using KNN with K = 1.
Which of these methods appears to provide the best results on
this data?
Experiment with different combinations of predictors, including
possible transformations and interactions, for each of the
methods. Report the variables, method, and associated confusion
matrix that appears to provide the best results on the held
out data. Note that you should
1. Produce some numerical and graphical summaries of the Weekly
data. Do there appear to be any patterns?
End of explanation
weekly.logistic_regression(data=weekly.data)
weekly.logistic_model.summary()
Explanation: FINDINGS
There do not appear to be any noticeable patterns in the dataset.
All field variables except Volume appear to follow a Gaussian distribution.
2. Use the full data set to perform a logistic regression with
Direction as the response and the five lag variables plus Volume
as predictors. Use the summary function to print the results. Do
any of the predictors appear to be statistically significant? If so,
which ones?
End of explanation
weekly.confusion
print(weekly.classification)
Explanation: FINDINGS
The intercept and lag2 features have P-values below the 0.05 threshold and appear statistically significant.
3. Compute the confusion matrix and overall fraction of correct
End of explanation
weekly.logistic_regression(data=weekly.x_train)
weekly.logistic_model.summary()
weekly.confusion
print(weekly.classification)
Explanation: FINDINGS
The model is not well suited to the data.
The Precision measures the accuracy of the Positive predictions.
$$\frac{T_p}{T_p + F_p}$$
The Recall measures the fraction of the model correctly identified.
$$\frac{T_p}{T_p + F_n}$$
The F1-score is the harmonic mean of the precision and recall.
Harmonic Mean is used when the average of rates is desired.
$$\frac{2 \times Precision \times Recall}{Precision + Recall}$$
The Support is the total number of samples in each class,
i.e. the sum of each row of the confusion matrix.
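As a quick, self-contained illustration of how these quantities fall out of a confusion matrix (made-up numbers, not the Weekly results above):
import numpy as np
confusion = np.array([[50, 10],   # actual class 0: 50 correct, 10 wrongly called class 1
                      [20, 40]])  # actual class 1: 20 wrongly called class 0, 40 correct
tp, fp, fn = confusion[1, 1], confusion[0, 1], confusion[1, 0]
precision = tp / (tp + fp)                          # 40 / 50 = 0.80
recall = tp / (tp + fn)                             # 40 / 60 ~= 0.67
f1 = 2 * precision * recall / (precision + recall)  # ~= 0.73
support = confusion.sum(axis=1)                     # row sums: [60, 60]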
4. Now fit the logistic regression model using a training data period
End of explanation
weekly.categorize(weekly.x_test)
weekly.calc_prediction(weekly.y_test, weekly.prediction_nom)
weekly.confusion
print(weekly.classification)
Explanation: FINDINGS
Using 80% of the data as a training set did not improve the model's accuracy.
End of explanation
weekly.lda()
weekly.confusion
print(weekly.classification)
Explanation: FINDINGS
Testing the model on the remaining 20% of the data yields a result worse than just randomly guessing.
5. Repeat (4) using LDA.
End of explanation
weekly.qda()
weekly.confusion
print(weekly.classification)
Explanation: FINDINGS
This model is extremely accurate.
6. Repeat (4) using QDA.
End of explanation
weekly.knn()
weekly.confusion
print(weekly.classification)
Explanation: FINDINGS
This model is better than the logistic regression, but not as good as the LDA model.
7. Repeat (4) using KNN with K = 1.
End of explanation
auto = classification.Auto()
auto.data.info()
auto.data.describe()
auto.data.head()
Explanation: FINDINGS
This model is better than the logistic regression, but not as good as the QDA.
8. Which of these methods appears to provide the best results on
this data?
The model accuracy in descending order is the following:
Linear Discriminant Analysis
Quadratic Discriminant Analysis
K-Nearest Neighbors
Logistic Regression
9. Experiment with different combinations of predictors, including
possible transformations and interactions, for each of the
methods. Report the variables, method, and associated confusion
matrix that appears to provide the best results on the held
out data. Note that you should
Exercise 2
In this problem, you will develop a model to predict whether a given
car gets high or low gas mileage based on the Auto data set.
Create a binary variable, mpg01, that contains a 1 if mpg contains
a value above its median, and a 0 if mpg contains a value below
its median.
Explore the data graphically in order to investigate the association
between mpg01 and the other features. Which of the other
features seem most likely to be useful in predicting mpg01? Scatterplots
and boxplots may be useful tools to answer this question.
Describe your findings.
Split the data into a training set and a test set.
Perform LDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
Perform QDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
Perform logistic regression on the training data in order to predict
mpg01 using the variables that seemed most associated with
mpg01 in (2). What is the test error of the model obtained?
Perform KNN on the training data, with several values of K, in
order to predict mpg01. Use only the variables that seemed most
associated with mpg01 in (2). What test errors do you obtain?
Which value of K seems to perform the best on this data set?
1. Create a binary variable, mpg01, that contains a 1 if mpg contains
a value above its median, and a 0 if mpg contains a value below
its median.
End of explanation
plotting.correlation_heatmap_plot(
data=auto.data, title='Auto')
plotting.correlation_pair_plot(
data=auto.data, title='Auto')
auto.box_plots()
Explanation: 2. Explore the data graphically in order to investigate the association
between mpg01 and the other features. Which of the other
features seem most likely to be useful in predicting mpg01? Scatterplots
and boxplots may be useful tools to answer this question.
Describe your findings.
End of explanation
auto.x_train.info()
auto.y_train.head()
auto.x_test.info()
auto.y_test.head()
Explanation: FINDINGS
The following features appear to have a direct impact on the vehicle's gas mileage.
Displacement
Cylinders are related to Displacement and will not be included.
Horsepower
Weight
Origin
3. Split the data into a training set and a test set.
End of explanation
auto.classify_data(model='LDA')
auto.confusion
print(auto.classification)
Explanation: 4. Perform LDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
End of explanation
auto.classify_data(model='QDA')
auto.confusion
print(auto.classification)
Explanation: 5. Perform QDA on the training data in order to predict mpg01
using the variables that seemed most associated with mpg01 in
(2). What is the test error of the model obtained?
End of explanation
auto.classify_data(model='LR')
auto.confusion
print(auto.classification)
Explanation: 6. Perform logistic regression on the training data in order to predict
mpg01 using the variables that seemed most associated with
mpg01 in (2). What is the test error of the model obtained?
End of explanation
auto.accuracy_vs_k()
auto.classify_data(model='KNN', n=13)
auto.confusion
print(auto.classification)
Explanation: 7. Perform KNN on the training data, with several values of K, in
order to predict mpg01. Use only the variables that seemed most
associated with mpg01 in (2). What test errors do you obtain?
Which value of K seems to perform the best on this data set?
End of explanation |
2,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using data from this FiveThirtyEight post, write code to calculate the correlation of the responses from the poll.
Respond to the story in your PR. Is this a good example of data journalism? Why or why not?
Step1: Create a new df just with data for Approve of Obama
Step2: Create a new df just with data for In favor of the Iran deal
Step3: Combine the two sub df
Step4: Transpose so we can have our column names as rows
Step5: Calculate Correlation Coefficient | Python Code:
import pandas as pd
%matplotlib inline
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
df = pd.read_excel("Iran_data_3.xlsx")
df
Explanation: Using data from this FiveThirtyEight post, write code to calculate the correlation of the responses from the poll.
Respond to the story in your PR. Is this a good example of data journalism? Why or why not?
End of explanation
df_obama_approve = df[df['Sentiment'] == 'Approve']
df_obama_approve
del df_obama_approve['Sentiment']
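# Aside (not in the original notebook): deleting a column from a filtered frame can trigger
# pandas' SettingWithCopyWarning; an equivalent, warning-free sketch would be
#   df_obama_approve = df[df['Sentiment'] == 'Approve'].drop(columns='Sentiment')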
Explanation: Create a new df just with data for Approve of Obama
End of explanation
df_favor_iran_deal = df[df['Sentiment'] == 'Favor']
df_favor_iran_deal
del df_favor_iran_deal['Sentiment']
df_favor_iran_deal
Explanation: Create a new df just with data for In favor of the Iran deal
End of explanation
obama_approve_favor_deal = df_obama_approve.append(df_favor_iran_deal)
obama_approve_favor_deal
del obama_approve_favor_deal['Subject']
del obama_approve_favor_deal['Total']
Explanation: Combine the two sub df
End of explanation
obama_approve_favor_deal_transpose = obama_approve_favor_deal.transpose()
obama_approve_favor_deal_transpose.columns = ["Approve_Obama","Favor_Deal"]
obama_approve_favor_deal_transpose
plt.style.use('fivethirtyeight')
obama_approve_favor_deal_transpose.plot(kind='scatter', x= 'Approve_Obama', y='Favor_Deal')
Explanation: Transpose so we can have our column names as rows
End of explanation
obama_approve_favor_deal_transpose.corr()
lm = smf.ols(formula='Favor_Deal~Approve_Obama',data=obama_approve_favor_deal_transpose).fit()
lm.params
intercept, slope = lm.params
ax = obama_approve_favor_deal_transpose.plot(kind='scatter', x= 'Approve_Obama', y='Favor_Deal')
plt.plot(obama_approve_favor_deal_transpose["Approve_Obama"],slope*obama_approve_favor_deal_transpose["Approve_Obama"]+intercept,"-",color="red")
ax.set_title("Feelings on Obama predict feelings on Iran deal")
ax.set_ylabel('Favor Iran deal')
ax.set_xlabel("Approve of Obama")
Explanation: Calculate Correlation Coefficient
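If only the single coefficient and its p-value are needed, scipy offers a shortcut as well; a small sketch, assuming scipy is installed:
from scipy.stats import pearsonr
r, p_value = pearsonr(obama_approve_favor_deal_transpose['Approve_Obama'],
                      obama_approve_favor_deal_transpose['Favor_Deal'])
r, p_value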
End of explanation |
2,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source localization with MNE/dSPM/sLORETA/eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
minimum-norm inverse method on evoked/raw/epochs data.
Step1: Process MEG data
Step2: Compute regularized noise covariance
For more details see tut-compute-covariance.
Step3: Compute the evoked response
Let's just use the MEG channels for simplicity.
Step4: It's also a good idea to look at whitened data
Step5: Inverse modeling
Step6: Next, we make an MEG inverse operator.
Step7: Compute inverse solution
We can use this to compute the inverse solution and obtain source time
courses
Step8: Visualization
We can look at different dipole activations
Step9: Examine the original data and the residual after fitting
Step10: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex. | Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
Explanation: Source localization with MNE/dSPM/sLORETA/eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
minimum-norm inverse method on evoked/raw/epochs data.
End of explanation
data_path = sample.data_path()
raw_fname = op.join(data_path, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(raw_fname) # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('meg', 'eog'), baseline=baseline, reject=reject)
Explanation: Process MEG data
End of explanation
noise_cov = mne.compute_covariance(
epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)
fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
Explanation: Compute regularized noise covariance
For more details see tut-compute-covariance.
End of explanation
evoked = epochs.average().pick('meg')
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
time_unit='s')
Explanation: Compute the evoked response
Let's just use the MEG channels for simplicity.
End of explanation
evoked.plot_white(noise_cov, time_unit='s')
del epochs, raw # to save memory
Explanation: It's also a good idea to look at whitened data:
End of explanation
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
Explanation: Inverse modeling: MNE/dSPM on evoked and raw data
Here we first read the forward solution. You will likely need to compute
one for your own data -- see tut-forward for information on how
to do it.
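For orientation, a rough sketch of how such a forward solution could be computed with MNE itself; the subject name, the single-layer BEM conductivity and the -trans.fif coregistration file below are placeholders, not files used elsewhere in this tutorial:
subjects_dir = data_path + '/subjects'
src = mne.setup_source_space('sample', spacing='oct6', subjects_dir=subjects_dir)
bem = mne.make_bem_solution(
    mne.make_bem_model('sample', ico=4, conductivity=(0.3,), subjects_dir=subjects_dir))
fwd_own = mne.make_forward_solution(raw_fname, trans='sample-trans.fif',
                                    src=src, bem=bem, meg=True, eeg=False)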
End of explanation
inverse_operator = make_inverse_operator(
evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
del fwd
# You can write it to disk with::
#
# >>> from mne.minimum_norm import write_inverse_operator
# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',
# inverse_operator)
Explanation: Next, we make an MEG inverse operator.
End of explanation
method = "dSPM"
snr = 3.
lambda2 = 1. / snr ** 2
stc, residual = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None,
return_residual=True, verbose=True)
Explanation: Compute inverse solution
We can use this to compute the inverse solution and obtain source time
courses:
End of explanation
fig, ax = plt.subplots()
ax.plot(1e3 * stc.times, stc.data[::100, :].T)
ax.set(xlabel='time (ms)', ylabel='%s value' % method)
Explanation: Visualization
We can look at different dipole activations:
End of explanation
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
for text in list(ax.texts):
text.remove()
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
Explanation: Examine the original data and the residual after fitting:
End of explanation
vertno_max, time_max = stc.get_peak(hemi='rh')
subjects_dir = data_path + '/subjects'
surfer_kwargs = dict(
hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=10)
brain = stc.plot(**surfer_kwargs)
brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',
scale_factor=0.6, alpha=0.5)
brain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',
font_size=14)
# The documentation website's movie is generated with:
# brain.save_movie(..., tmin=0.05, tmax=0.15, interpolation='linear',
# time_dilation=20, framerate=10, time_viewer=True)
Explanation: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
End of explanation |
2,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas objects
So far, we have manipulated data which were stored in NumPy arrays. Let us consider 2D data.
Step1: We could visualize it with Matplotlib.
Step2: Raw data could look like this. Say that columns hold variables and rows hold observations (or records). We may want to label the data (set some metadata). We may also want to handle non-numerical data. Then, we want to store our data in a DataFrame, a 2D labelled data structure with columns of potentially different types.
The DataFrame object
Step3: The DataFrame object has attributes...
Step4: ... and methods, as we shall see in the following. For now, let us label our data.
Step5: Note that, alternatively, you could have done df.rename(columns={0
Step6: (This is a terrible visualization though... 3-cycle needed!)
Hands-on exercises
Create another DataFrame, df2, equal to df (with the same values for each column) by passing a dictionary to pd.DataFrame(). You can check your answer by running pd.testing.assert_frame_equal(df, df2, check_like=True).
What is the type of object df[['green']]?
What is the type of object df['green']?
The Series object
A Series is a 1D labelled data structure.
Step7: It can hold any data type.
Step8: Hands-on exercises
Create a series equal (element-wise) to the product of the 'green' variable and the 'alpha' variable. (Hint | Python Code:
import numpy as np
ar = 0.5 * np.eye(3)
ar[2, 1] = 1
ar
Explanation: Pandas objects
So far, we have manipulated data which were stored in NumPy arrays. Let us consider 2D data.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(ar, cmap=plt.cm.gray)
Explanation: We could visualize it with Matplotlib.
End of explanation
import pandas as pd
df = pd.DataFrame(ar)
df
Explanation: Raw data could look like this. Say that columns hold variables and rows hold observations (or records). We may want to label the data (set some metadata). We may also want to handle non-numerical data. Then, we want to store our data in a DataFrame, a 2D labelled data structure with columns of potentially different types.
The DataFrame object
End of explanation
df.size
df.shape
Explanation: The DataFrame object has attributes...
End of explanation
df.columns = ['red', 'green', 'blue']
Explanation: ... and methods, as we shall see in the following. For now, let us label our data.
End of explanation
df
df.plot()
Explanation: Note that, alternatively, you could have done df.rename(columns={0: 'red', 1: 'green', 2: 'blue'}, inplace=True).
End of explanation
df['green']
Explanation: (This is a terrible visualization though... 3-cycle needed!)
Hands-on exercises
Create another DataFrame, df2, equal to df (with the same values for each column) by passing a dictionary to pd.DataFrame(). You can check your answer by running pd.testing.assert_frame_equal(df, df2, check_like=True).
What is the type of object df[['green']]?
What is the type of object df['green']?
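One possible sketch for the first exercise (with the answers to the other two as comments):
df2 = pd.DataFrame({'red': [0.5, 0.0, 0.0],
                    'green': [0.0, 0.5, 1.0],
                    'blue': [0.0, 0.0, 0.5]})
pd.testing.assert_frame_equal(df, df2, check_like=True)  # passes silently if equal
type(df[['green']])  # a list of column labels selects a DataFrame
type(df['green'])    # a single label selects a Series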
The Series object
A Series is a 1D labelled data structure.
End of explanation
pd.Series(range(10))
s = pd.Series(['first', 'second', 'third'])
s
t = pd.Series([pd.Timestamp('2017-09-01'), pd.Timestamp('2017-09-02'), pd.Timestamp('2017-09-03')])
t
alpha = pd.Series(0.1 * np.arange(1, 4))
alpha.plot(kind='bar')
df['alpha'] = alpha
df
Explanation: It can hold any data type.
End of explanation
alpha.index
df.index
alpha
alpha.index = s
alpha
alpha.index
df.set_index(s)
df.set_index(s, inplace=True)
Explanation: Hands-on exercises
Create a series equal (element-wise) to the product of the 'green' variable and the 'alpha' variable. (Hint: It works like NumPy arrays.)
Label this series as 'pre_multiplied_green'. (Hint: Use tab completion to explore the list of attributes and/or scroll up to see which attribute should be set.)
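A possible sketch for these two exercises (the hinted attribute is name):
pre_multiplied_green = df['green'] * df['alpha']    # element-wise, as with NumPy arrays
pre_multiplied_green.name = 'pre_multiplied_green'
pre_multiplied_green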
The Index object
The Index object stores axis labels for Series and DataFrames.
End of explanation |
2,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Checking the Equivalence of Regular Expressions
In order to check whether two regular expressions $r_1$ and $r_2$ are equivalent, perform the
following steps
Step1: NFA-2-DFA.ipynb contains the function nfa2dfa that converts a non-deterministic
<span style="font-variant
Step2: Given two sets A and B, the function cartesian_product(A, B) computes the
<em style="color
Step3: Given to deterministic <span style="font-variant
Step4: Given a regular expression $r$ and an alphabet $\Sigma$, the function $\texttt{regexp2DFA}(r, \Sigma)$
computes a deterministic <span style="font-variant
Step5: Given a deterministic <span style="font-variant
Step6: The function regExpEquiv takes three arguments | Python Code:
%run Regexp-2-NFA.ipynb
Explanation: Checking the Equivalence of Regular Expressions
In order to check whether two regular expressions $r_1$ and $r_2$ are equivalent, perform the
following steps:
- convert $r_1$ and $r_2$ into non-deterministic <span style="font-variant:small-caps;">Fsm</span>s
$F_1$ and $F_2$ such that $L(r_1) = L(F_1)$ and $L(r_2) = L(F_2)$,
- convert $F_1$ and $F_2$ into deterministic <span style="font-variant:small-caps;">Fsm</span>s
$D_1$ and $D_2$ such that $L(D_1) = L(F_1)$ and $L(D_2) = L(F_2)$
- check whether both $L(D_1) \backslash L(D_2)$ and $L(D_2) \backslash L(D_1)$ are empty.
Regexp-2-NFA.ipynb contains the function RegExp2NFA.toNFA that can be used to compute a non-deterministic
<span style="font-variant:small-caps;">Fsm</span> that accepts the language described by a given regular expression.
End of explanation
%run NFA-2-DFA.ipynb
Explanation: NFA-2-DFA.ipynb contains the function nfa2dfa that converts a non-deterministic
<span style="font-variant:small-caps;">Fsm</span> into an equivalent deterministic
<span style="font-variant:small-caps;">Fsm</span>.
End of explanation
def cartesian_product(A, B):
return { (x, y) for x in A
for y in B
}
cartesian_product({1, 2}, {'a', 'b'})
Explanation: Given two sets A and B, the function cartesian_product(A, B) computes the
<em style="color:blue">cartesian product</em> $A \times B$ which is defined as
$$ A \times B := { (x, y) \mid x \in A \wedge y \in B }. $$
End of explanation
def fsm_complement(F1, F2):
    States1, Σ, δ1, q1, A1 = F1
    States2, _, δ2, q2, A2 = F2
    States = cartesian_product(States1, States2)
    δ = {}
    for p1, p2 in States:
        for c in Σ:
            δ[(p1, p2), c] = (δ1[p1, c], δ2[p2, c])
    return States, Σ, δ, (q1, q2), cartesian_product(A1, States2 - A2)
Explanation: Given two deterministic <span style="font-variant:small-caps;">Fsm</span>s F1 and F2,
the expression fsm_complement(F1, F2) computes a deterministic
<span style="font-variant:small-caps;">Fsm</span> that recognizes the language $L(F_1)\backslash L(F_2)$.
End of explanation
def regexp2DFA(r, Σ):
    converter = RegExp2NFA(Σ)
    nfa = converter.toNFA(r)
    return nfa2dfa(nfa)
Explanation: Given a regular expression $r$ and an alphabet $\Sigma$, the function $\texttt{regexp2DFA}(r, \Sigma)$
computes a deterministic <span style="font-variant:small-caps;">Fsm</span> that accepts
the language specified by $r$.
End of explanation
def is_empty(F):
    States, Σ, δ, q0, Accepting = F
    Reachable = { q0 }
    NewFound  = { q0 }
    while True:
        NewFound = { δ[q, c] for q in NewFound
                             for c in Σ
                   }
        if NewFound <= Reachable:
            break
        Reachable |= NewFound
    return Reachable & Accepting == set()
%run FSM-2-Dot.ipynb
Explanation: Given a deterministic <span style="font-variant:small-caps;">Fsm</span> $F$ the function
is_empty(F) checks whether the language accepted by $F$ is empty.
In this function, the variable Reachable is the set of those states that are already known to be reachable
from the start state q0. NewFound are those states that can be reached from a state in the set
Reachable. When we find no new states that are reachable, the iteration stops and we check whether
there is a state that is both reachable and acceptable because in that case the language is not empty.
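For example, with a tiny hand-built DFA over {'a', 'b'} in the same (States, Σ, δ, q0, Accepting) format used throughout this notebook (a made-up illustration):
States = {0, 1}
Σ      = {'a', 'b'}
δ      = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}
is_empty((States, Σ, δ, 0, {1}))     # False: reading 'a' reaches the accepting state 1
is_empty((States, Σ, δ, 0, set()))   # True: no accepting state exists at all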
End of explanation
def regExpEquiv(r1, r2, Σ):
    F1 = regexp2DFA(r1, Σ)
    F2 = regexp2DFA(r2, Σ)
    r1_minus_r2 = fsm_complement(F1, F2)
    r2_minus_r1 = fsm_complement(F2, F1)
    return is_empty(r1_minus_r2) and is_empty(r2_minus_r1)
Explanation: The function regExpEquiv takes three arguments:
- $r_1$ and $r_2$ are regular expressions,
- $\Sigma$ is the alphabet used in these regular expressions.
The function returns True iff $r_1 \doteq r_2$, i.e. if $r_1$ and $r_2$ are equivalent.
End of explanation |
2,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enter your settings here
Step1: Load the data
Step2: Smoothing
Smoothing in the latitudinal direction (N-S) is not affected by the projection; distances do not vary since the circles of equal longitude are all great circles on the sphere. We apply a Gaussian filter where we reflect data in the latitudinal and time directions and wrap in the longitudinal direction. Next, to filter in the longitudinal direction, we need to use a different filter width for each latitude.
Step3: Sobel filtering
The Sobel filter has the same problem as the Gaussian filter, but the solution is easier. We just correct for the magnitude of the Sobel response by multiplying the longitudinal component by the cosine of the latitude.
Step4: Determine hysteresis settings | Python Code:
data_folder = Path("/home/bathiany/Sebastian/datamining/edges/obsscan")
data_set = DataSet([data_folder / 'ice_conc_nh_ease2-250_cdr-v2p0_remapbilt200_September.nc'], 'ice_conc')
#dataset goes from 1979 - 2015
sigma_d = unit('120 km') #200
sigma_t = unit('1 year')
gamma = 1e10
scaling_factor = gamma * unit('1 km/year')
sobel_delta_t = unit('1 year')
sobel_delta_d = sobel_delta_t * scaling_factor
sobel_weights = [sobel_delta_t, sobel_delta_d, sobel_delta_d]
Explanation: Enter your settings here
End of explanation
from datetime import date, timedelta
box = data_set.box
print("({:.6~P}, {:.6~P}, {:.6~P}) per pixel".format(*box.resolution))
for t in box.time[:3]:
print(box.date(t), end=', ')
print(" ...")
dt = box.time[1:] - box.time[:-1]
print("time steps: max", dt.max(), "min", dt.min())
Explanation: Load the data
End of explanation
yearly_data_set = data_set
box = yearly_data_set.box
data = yearly_data_set.data
smooth_data = gaussian_filter(box, data, [sigma_t, sigma_d, sigma_d])
my_cmap = matplotlib.cm.get_cmap('rainbow') #YlGnBu
my_cmap.set_under('w')
plot_orthographic_np(box, yearly_data_set.data[0], cmap=my_cmap, vmin=0.1) #1979
plot_orthographic_np(box, yearly_data_set.data[28], cmap=my_cmap, vmin=0.1) #2007
plot_orthographic_np(box, yearly_data_set.data[33], cmap=my_cmap, vmin=0.1) #2012
plot_orthographic_np(box, yearly_data_set.data[36], cmap=my_cmap, vmin=0.1) #2015
Explanation: Smoothing
Smoothing in the latitudinal direction (N-S) is not affected by the projection; distances do not vary since the circles of equal longitude are all great circles on the sphere. We apply a Gaussian filter where we reflect data in the latitudinal and time directions and wrap in the longitudinal direction. Next, to filter in the longitudinal direction, we need to use a different filter width for each latitude.
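To make the latitude-dependent filter width concrete, here is a rough standalone sketch of the idea on a regular latitude/longitude grid using plain SciPy; it only illustrates the principle and is not the gaussian_filter helper called above:
from scipy.ndimage import gaussian_filter1d
def smooth_lonlat(field, lats_deg, sigma_km, pixel_km):
    # field is indexed [lat, lon]; the N-S width is constant, the E-W width
    # grows as 1/cos(latitude) because E-W pixels shrink towards the pole.
    out = gaussian_filter1d(field, sigma_km / pixel_km, axis=0, mode='reflect')
    for i, lat in enumerate(lats_deg):
        sigma_lon = sigma_km / (pixel_km * max(np.cos(np.radians(lat)), 1e-3))
        out[i] = gaussian_filter1d(out[i], sigma_lon, mode='wrap')
    return out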
End of explanation
sb = sobel_filter(box, smooth_data, weight=sobel_weights)
pixel_sb = sobel_filter(box, smooth_data, physical=False)
Explanation: Sobel filtering
The Sobel filter has the same problem as the Gaussian filter, but the solution is easier. We just correct for the magnitude of the Sobel response by multiplying the longitudinal component by the cosine of the latitude.
End of explanation
signal = 1/sb[3]
## quartiles
signalarray = np.asarray(signal).reshape(-1)
signal_no0 = np.ma.masked_equal(signalarray,0)
signal_no0=signal_no0.compressed()
#np.percentile(signal_no0, [25, 50, 75, 100])
## There is no control data for observations => must choose thresholds based on data itself.
# Here, use the 95th and 50th percentiles as the upper and lower hysteresis thresholds
upper_threshold=np.percentile(signal_no0, 95)
lower_threshold=np.percentile(signal_no0, 50)
from hyper_canny import cp_edge_thinning, cp_double_threshold
# use directions of pixel based sobel transform and magnitudes from calibrated physical sobel.
dat = pixel_sb.transpose([3,2,1,0]).astype('float32')
dat[:,:,:,3] = sb[3].transpose([2,1,0])
mask = cp_edge_thinning(dat)
thinned = mask.transpose([2, 1, 0])
dat = sb.transpose([3,2,1,0]).copy().astype('float32')
edges = cp_double_threshold(data=dat, mask=mask, a=1/upper_threshold, b=1/lower_threshold)
m = edges.transpose([2, 1, 0])
years = np.array([d.year for d in box.dates])
my_cmap = matplotlib.cm.get_cmap('rainbow') #YlGnBu
my_cmap.set_under('w')
plot_orthographic_np(box, (years[:,None,None]*m).max(axis=0), cmap=my_cmap, vmin=1979)
Explanation: Determine hysteresis settings
End of explanation |
2,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now, to create the needed tables, we need to execute our first SQL statements. It is important to call the commit() function if you want your changes to appear in the file. It is recommended not to set the AUTO INCREMENT parameter for the primary keys. In this case the ID just refers to the ROWID column of the table, which is created automatically. The AUTO INCREMENT parameter would assure you that for every new entry a higher key is created and that smaller keys from deleted entries are not reused. If you set the PRIMARY KEY, make sure that the data type is INTEGER and not INT, otherwise the field won't be filled automatically.
Step1: To fill the created tables with data we first need the data. To begin with, I manually collected the needed data for four movies and several cinemas in Bonn and Cologne.
Step2: First I fill all the tables which do not reference any other tables, namely the FILM, PERSON and CINEMA tables. Therefore I loop over the entries in our data to create insertion statements for every entry using Python's string formatting.
Step4: The SHOW table is intended to hold the data when you can see which film in which cinema. As it would be rather difficult to find out the dates and times when each of our four movies where shown in the different cinemas I decided to use random dates in the date column of the table SHOW. Therefore I borrowed the following function from StackOverflow. | Python Code:
conn.execute('''CREATE TABLE FILM
(ID INTEGER PRIMARY KEY,
TITLE CHAR(200) NOT NULL,
YEAR INTEGER NOT NULL,
GENRE CHAR(50) NOT NULL,
UNIQUE(TITLE, YEAR, GENRE));''')
conn.execute('''CREATE TABLE PERSON
(ID INTEGER PRIMARY KEY,
NAME CHAR(50) NOT NULL,
FIRSTNAME CHAR(50) NOT NULL,
P_ID INTEGER UNIQUE NOT NULL);''')
conn.execute('''CREATE TABLE CINEMA
(ID INTEGER PRIMARY KEY,
NAME CHAR(50) NOT NULL,
CITY CHAR(50) NOT NULL,
UNIQUE(NAME, CITY));''')
conn.execute('''CREATE TABLE PARTICIPATION
(FILM INTEGER NOT NULL,
PERSON INTEGER NOT NULL,
FUNCTION CHAR(20) NOT NULL,
FOREIGN KEY(FILM) REFERENCES FILM(ID),
FOREIGN KEY(PERSON) REFERENCES PERSON(ID),
UNIQUE(FILM, PERSON, FUNCTION));''')
conn.execute('''CREATE TABLE SHOW
(FILM INTEGER NOT NULL,
DATE DATETIME NOT NULL,
CINEMA CHAR(30) NOT NULL,
FOREIGN KEY(FILM) REFERENCES FILM(ID),
FOREIGN KEY(CINEMA) REFERENCES CINEMA(ID),
UNIQUE(FILM, DATE, CINEMA));''')
conn.commit()
Explanation: Now, to create the needed tables, we need to execute our first SQL statements. It is important to call the commit() function if you want your changes to appear in the file. It is recommended not to set the AUTO INCREMENT parameter for the primary keys. In this case the ID just refers to the ROWID column of the table, which is created automatically. The AUTO INCREMENT parameter would assure you that for every new entry a higher key is created and that smaller keys from deleted entries are not reused. If you set the PRIMARY KEY, make sure that the data type is INTEGER and not INT, otherwise the field won't be filled automatically.
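As a tiny illustration of the ROWID aliasing described here (a throw-away sketch on an in-memory database, separate from the movie schema):
import sqlite3
tmp = sqlite3.connect(':memory:')
tmp.execute('CREATE TABLE demo (ID INTEGER PRIMARY KEY, NAME CHAR(10));')
tmp.execute("INSERT INTO demo (NAME) VALUES ('a');")                  # ID is filled automatically
print(tmp.execute('SELECT ID, ROWID, NAME FROM demo;').fetchall())    # [(1, 1, 'a')]
tmp.close()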
End of explanation
import tmdbsimple as tmdb
tmdb.API_KEY = '7f0bbadc274ac0c100f84d8bf81ef2f6'
disc = tmdb.Discover()
total_pages = disc.movie(page = 1, vote_average_gte = 6.5,
vote_count_gte=100, language='en-US')['total_pages']
all_movies = []
for i in range(1,total_pages+1):
movies = disc.movie(page = i, vote_average_gte = 6.5,
vote_count_gte=50, language='en-US')['results']
all_movies += movies
gen = tmdb.Genres()
genres_dict = {x['id'] : x['name'] for x in gen.list()['genres']}
wrong_genre_ids =[]
for movie in all_movies:
genre_list = []
for gen_id in movie['genre_ids']:
try:
genre_list.append(genres_dict[gen_id])
except KeyError:
wrong_genre_ids.append(gen_id)
movie['genre_string'] = '/'.join(genre_list)
for movie in all_movies:
mov_id = movie['id']
actor_list = []
director_list = []
for char in tmdb.Movies(mov_id).credits()['cast']:
actor_list.append((char['name'],char['id']))
for char in tmdb.Movies(mov_id).credits()['crew']:
if char['job'] == 'Director':
director_list.append((char['name'],char['id']))
movie['actors'] = actor_list
movie['directors'] = director_list
# Cinema Data
cinemas = [('Bonner Kinemathek', 'Bonn'),
('Stern Lichtspiele', 'Bonn'),
('WOKI - Dein Kino!', 'Bonn'),
('Neue Filmbรผhne', 'Bonn'),
('Rex-Lichtspieltheater', 'Bonn'),
('Cinedom', 'Cologne'),
('Residenz', 'Cologne'),
('Metropolis Lichtspieltheater GmbH', 'Cologne'),
('Filmpalette', 'Cologne'),
('ODEON-Lichtspieltheater GmbH', 'Cologne'),
('Off Broadway Kino', 'Cologne'),
('Theater am Weiรhaus', 'Cologne'),
('Cinenova', 'Cologne')
]
Explanation: To fill the created tables with data we first need the data. To begin with, I manually collected the needed data for four movies and several cinemas in Bonn and Cologne.
End of explanation
# Insert Movies
movie_errors = []
for movie in all_movies:
try:
conn.execute('''INSERT INTO FILM (TITLE,YEAR,GENRE)
VALUES ("{}", "{}", "{}");'''.format(movie['original_title'],
movie['release_date'][:4],
movie['genre_string']))
except sqlite3.IntegrityError:
continue
except sqlite3.OperationalError:
movie_errors.append(movie['original_title'])
cur = conn.cursor()
# Insert Persons
person_errors = []
for movie in all_movies:
for actor in movie['actors']:
actor_id = actor[1]
actor_name = actor[0]
try:
conn.execute('''INSERT INTO PERSON (FIRSTNAME,NAME, P_ID)
VALUES ("{}", "{}", "{}");'''.format(' '.join(actor_name.split()[:-1]),
actor_name.split()[-1],
actor_id))
except sqlite3.IntegrityError:
continue
except sqlite3.OperationalError:
person_errors.append(actor_name)
for director in movie['directors']:
director_id = director[1]
director_name = director[0]
try:
conn.execute('''INSERT INTO PERSON (FIRSTNAME,NAME, P_ID)
VALUES ("{}", "{}", "{}");'''.format(' '.join(director_name.split()[:-1]),
director_name.split()[-1],
director_id))
except sqlite3.IntegrityError:
continue
except sqlite3.OperationalError:
person_errors.append(director_name)
# Insert Cinemas
for cinema in cinemas:
conn.execute('''INSERT INTO CINEMA (NAME, CITY)
VALUES ('{}', '{}');'''.format(*cinema))
conn.commit()
Explanation: First I fill all the tables which do not reference any other tables, namely the FILM, PERSON and CINEMA tables. Therefore I loop over the entries in our data to create insertion statements for every entry using Python's string formatting.
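String formatting is workable here because the values come from a trusted source; a sketch of the equivalent, injection-safe pattern with parameter binding would be:
conn.executemany('INSERT OR IGNORE INTO CINEMA (NAME, CITY) VALUES (?, ?);', cinemas)
conn.commit()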
End of explanation
from random import randrange
from datetime import timedelta
import datetime
def random_date(start, end):
    """This function will return a random datetime between two datetime
    objects."""
delta = end - start
int_delta = (delta.days * 24 * 60 * 60) + delta.seconds
random_second = randrange(int_delta)
return start + timedelta(seconds=random_second)
d1 = datetime.datetime(1980, 1,1)
d2 = datetime.datetime(2016, 1,1)
str(random_date(d1, d2))
import random
# Insert Participation
for movie in all_movies:
try:
for actor in movie['actors']:
actor_id = actor[1]
actor_name = actor[0]
conn.execute('''INSERT INTO PARTICIPATION (FILM, PERSON, FUNCTION)
VALUES ((SELECT ID from FILM WHERE TITLE="{}"),
(SELECT ID from PERSON WHERE FIRSTNAME="{}"
AND NAME="{}"
AND P_ID="{}"),
'actor');'''.format(movie['original_title'],
' '.join(actor_name.split()[:-1]),
actor_name.split()[-1],
actor_id))
for director in movie['directors']:
director_id = director[1]
director_name = director[0]
conn.execute('''INSERT INTO PARTICIPATION (FILM, PERSON, FUNCTION) VALUES
((SELECT ID from FILM WHERE TITLE="{}"),
(SELECT ID from PERSON WHERE FIRSTNAME="{}"
AND NAME="{}"
AND P_ID="{}"),
'director');'''.format(movie['original_title'],
' '.join(director_name.split()[:-1]),
director_name.split()[-1],
director_id))
except sqlite3.IntegrityError:
continue
except sqlite3.OperationalError:
pass
# Insert Shows with random dates.
for cinema in cinemas:
for movie in all_movies:
#Do not add all movies to all cinemas
if random.uniform(0, 1) > 0.5:
conn.execute('''INSERT INTO SHOW (FILM, DATE, CINEMA) VALUES
((SELECT ID from FILM WHERE TITLE="{}"),
"{}",
(SELECT ID from CINEMA WHERE NAME="{}" AND CITY="{}")
);'''.format(movie['original_title'],
str(random_date(d1, d2)),*cinema))
else:
continue
conn.commit()
conn.close()
Explanation: The SHOW table is intended to hold the data when you can see which film in which cinema. As it would be rather difficult to find out the dates and times when each of our four movies where shown in the different cinemas I decided to use random dates in the date column of the table SHOW. Therefore I borrowed the following function from StackOverflow.
End of explanation |
2,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Donnรฉes multidimensionnelles SQL - correction
Correction de la sรฉance sur l'utilisation du SQL depuis un notebook.
Step1: Exercice 1
Step2: Exercice 2
Step3: Que faut-il รฉcrire ici pour rรฉcupรฉrer 1% de la table ?
Step4: Exercice 3
Step6: Il faut complรฉter le programme suivant.
Step8: Un reducer ร deux entrรฉes mรชme si cela n'a pas beaucoup de sens ici
Step10: Il n'est apparemment pas possible de retourner deux rรฉsultats mais on peut utiliser une ruse qui consise ร les concatรฉner dans une chaรฎne de caracรจres. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
from pyquickhelper.helpgen import NbImage
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Donnรฉes multidimensionnelles SQL - correction
Correction de la sรฉance sur l'utilisation du SQL depuis un notebook.
End of explanation
from actuariat_python.data import table_mortalite_euro_stat
table_mortalite_euro_stat()
import pandas
df = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8", low_memory=False)
# ...
import os
if not os.path.exists('mortalite.db3'):
import sqlite3
from pandas.io import sql
cnx = sqlite3.connect('mortalite.db3')
df.to_sql(name='mortalite', con=cnx)
cnx.close()
Explanation: Exercice 1 : filtre
On veut comparer les espรฉrances de vie pour deux pays et deux annรฉes.
End of explanation
import random #loi uniforme
def echantillon(proportion):
return 1 if random.random() < proportion else 0
import sqlite3
from pandas.io import sql
cnx = sqlite3.connect('mortalite.db3')
cnx.create_function('echantillon', 1, echantillon)
Explanation: Exercice 2 : รฉchantillon alรฉatoire
End of explanation
import pandas
#example = pandas.read_sql(' ??? ', cnx)
#example
cnx.close()
Explanation: Que faut-il รฉcrire ici pour rรฉcupรฉrer 1% de la table ?
End of explanation
import sqlite3, pandas
from pandas.io import sql
cnx = sqlite3.connect('mortalite.db3')
Explanation: Exercice 3 : reducer SQL
End of explanation
class ReducerMediane:
def __init__(self):
self.indicateur = []
def step(self, value):
if value >= 0:
self.indicateur.append(value)
def finalize(self):
self.indicateur.sort()
return self.indicateur[len(self.indicateur)//2]
cnx.create_aggregate("ReducerMediane", 1, ReducerMediane)
query = SELECT annee,age,age_num, ReducerMediane(valeur) AS mediane FROM mortalite
WHERE indicateur=="LIFEXP" AND genre=="F"
GROUP BY annee,age,age_num
df = pandas.read_sql(query, cnx)
df.head()
Explanation: Il faut complรฉter le programme suivant.
End of explanation
class ReducerMediane2:
def __init__(self):
self.indicateur = []
def step(self, value, value2):
if value >= 0:
self.indicateur.append(value)
if value2 >= 0:
self.indicateur.append(value2)
def finalize(self):
self.indicateur.sort()
return self.indicateur[len(self.indicateur)//2]
cnx.create_aggregate("ReducerMediane2", 2, ReducerMediane2)
query = SELECT annee,age,age_num, ReducerMediane2(valeur, valeur+1) AS mediane2 FROM mortalite
WHERE indicateur=="LIFEXP" AND genre=="F"
GROUP BY annee,age,age_num
df = pandas.read_sql(query, cnx)
df.head()
Explanation: Un reducer ร deux entrรฉes mรชme si cela n'a pas beaucoup de sens ici :
End of explanation
class ReducerQuantile:
def __init__(self):
self.indicateur = []
def step(self, value):
if value >= 0:
self.indicateur.append(value)
def finalize(self):
self.indicateur.sort()
q1 = self.indicateur[len(self.indicateur)//4]
q2 = self.indicateur[3*len(self.indicateur)//4]
n = len(self.indicateur)
return "%f;%f;%s" % (q1,q2,n)
cnx.create_aggregate("ReducerQuantile", 1, ReducerQuantile)
query = SELECT annee,age,age_num, ReducerQuantile(valeur) AS quantiles FROM mortalite
WHERE indicateur=="LIFEXP" AND genre=="F"
GROUP BY annee,age,age_num
df = pandas.read_sql(query, cnx)
df.head()
cnx.close()
Explanation: Il n'est apparemment pas possible de retourner deux rรฉsultats mais on peut utiliser une ruse qui consise ร les concatรฉner dans une chaรฎne de caracรจres.
End of explanation |
2,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Let's work through an example of single-cell data analysis using Uncurl, using many of its features. For a much briefer example using the same dataset, see examples/zeisel_subset_example.py.
Step1: Loading data
Step2: This dataset is a subset of the data from Zeisel et al. 2015 (https
Step3: Here, X is a 2d numpy array containing read count data. It contains 753 cells and 19971 genes.
Gene subset selection
Before doing anything else, we usually select a subset of about 20% of the genes. This is done using the max_variance_genes function, which first bins the genes by their mean expression values, and for each bin, selects the highest variance genes.
Step4: Here, we divide the genes into five bins by their mean expression value, and in each bin, we select the top 20% of genes by variance. This gives us 3990 genes.
Step5: Distribution selection
Now, we can try determining which distribution fits the data best using the DistFitDataset function. This is a heuristic method that returns the fit errors for the Gaussian, Poisson, and Log-Normal distributions for each gene.
Based on our results, the Poisson distribution will be best for count (UMI) data, while the Log-Normal distribution will be a better fit for transformed (TPM, etc.) data. So it's okay to skip this step if you have a good sense of the data you're working with.
Step6: This would indicate that the best fit distribution is the Log-Normal distribution, despite the data being UMI counts.
State estimation
State estimation is the heart of UNCURL. This involves probabilistic matrix factorization and returns two matrices
Step7: Clustering
The simplest way to do clustering is simply to take W.argmax(0). This returns the most likely assigned cluster for each cell.
Step8: Visualization
It is recommended to run t-SNE on W, or M*W. The Euclidean distance isn't really the best distance metric for W; usually cosine distance, L1 distance, or symmetric KL divergence work better, with KL divergence usually the best. However, as a custom distance metric it tends to be rather slow.
M*W can be treated basically the same as the original data, and whatever visualization methods for the original data can also be used on it. However, given that M*W is low-rank, taking the log might be necessary.
Step9: Since we know the actual labels for the cell types, we can plot them here too
Step10: To use M*W for visualization, we usually do some additional processing on it (as with t-SNE on the original dataset)
Step11: Another method for visualizing data is a MDS-based approach. This is fastest, but does not produce the most clean visualizations. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import uncurl
Explanation: Tutorial
Let's work through an example of single-cell data analysis using Uncurl, using many of its features. For a much briefer example using the same dataset, see examples/zeisel_subset_example.py.
End of explanation
data = scipy.io.loadmat('data/GSE60361_dat.mat')
Explanation: Loading data
End of explanation
X = data['Dat']
X.shape
Explanation: This dataset is a subset of the data from Zeisel et al. 2015 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE60361), consisting of mouse brain cells.
End of explanation
genes = uncurl.max_variance_genes(X, nbins=5, frac=0.2)
len(genes)
Explanation: Here, X is a 2d numpy array containing read count data. It contains 753 cells and 19971 genes.
Gene subset selection
Before doing anything else, we usually select a subset of about 20% of the genes. This is done using the max_variance_genes function, which first bins the genes by their mean expression values, and for each bin, selects the highest variance genes.
End of explanation
data_subset = X[genes,:]
data_subset.shape
Explanation: Here, we divide the genes into five bins by their mean expression value, and in each bin, we select the top 20% of genes by variance. This gives us 3990 genes.
End of explanation
from uncurl import fit_dist_data
fit_errors = fit_dist_data.DistFitDataset(data_subset)
poiss_errors = fit_errors['poiss']
lognorm_errors = fit_errors['lognorm']
norm_errors = fit_errors['norm']
errors = np.vstack((poiss_errors, lognorm_errors, norm_errors))
print(sum(errors.argmax(0)==0))
print(sum(errors.argmax(0)==1))
print(sum(errors.argmax(0)==2))
Explanation: Distribution selection
Now, we can try determining which distribution fits the data best using the DistFitDataset function. This is a heuristic method that returns the fit errors for the Gaussian, Poisson, and Log-Normal distributions for each gene.
Based on our results, the Poisson distribution will be best for count (UMI) data, while the Log-Normal distribution will be a better fit for transformed (TPM, etc.) data. So it's okay to skip this step if you have a good sense of the data you're working with.
End of explanation
M1, W1, ll = uncurl.run_state_estimation(data_subset, 7, dist='Poiss', disp=False)
M2, W2, cost = uncurl.run_state_estimation(data_subset, 7, dist='LogNorm')
Explanation: This would indicate that the best fit distribution is the Log-Normal distribution, despite the data being UMI counts.
State estimation
State estimation is the heart of UNCURL. This involves probabilistic matrix factorization and returns two matrices: M, a genes by clusters matrix indicating the "archetypal" cell for each cluster, and W, a clusters by cells matrix indicating the cluster assignments of each cell. State estimation requires the number of clusters to be provided beforehand. In this case, we know that there are 7 cell types.
The run_state_estimation is an interface to different state estimation methods for different distributions.
This step should finish in less than a couple of minutes.
End of explanation
labels = W1.argmax(0)
Explanation: Clustering
The simplest way to do clustering is simply to take W.argmax(0). This returns the most likely assigned cluster for each cell.
End of explanation
from sklearn.manifold import TSNE
from uncurl.sparse_utils import symmetric_kld
tsne = TSNE(2, metric=symmetric_kld)
tsne_w = tsne.fit_transform(W1.T)
fig = visualize_dim_red(tsne_w.T, labels, title='TSNE(W) assigned labels', figsize=(10,6))
Explanation: Visualization
It is recommended to run t-SNE on W, or M*W. The Euclidean distance isn't really the best distance metric for W; usually cosine distance, L1 distance, or symmetric KL divergence work better, with KL divergence usually the best. However, as a custom distance metric it tends to be rather slow.
M*W can be treated basically the same as the original data, and whatever visualization methods for the original data can also be used on it. However, given that M*W is low-rank, taking the log might be necessary.
End of explanation
fig = visualize_dim_red(tsne_w.T, data['ActLabs'].flatten(), title='TSNE(W) actual labels', figsize=(10,6))
Explanation: Since we know the actual labels for the cell types, we can plot them here too:
End of explanation
from sklearn.decomposition import TruncatedSVD
tsvd = TruncatedSVD(50)
tsne = TSNE(2)
mw = M1.dot(W1)
mw_log = np.log1p(mw)
mw_tsvd = tsvd.fit_transform(mw_log.T)
mw_tsne = tsne.fit_transform(mw_tsvd)
fig = visualize_dim_red(mw_tsne.T, labels, title='TSNE(MW) assigned labels', figsize=(10,6))
fig = visualize_dim_red(mw_tsne.T, data['ActLabs'].flatten(), title='TSNE(MW) actual labels', figsize=(10,6))
Explanation: To use M*W for visualization, we usually do some additional processing on it (as with t-SNE on the original dataset):
End of explanation
from uncurl.vis import visualize_dim_red
vis_mds = uncurl.mds(M1, W1, 2)
fig = visualize_dim_red(vis_mds, labels, figsize=(10,6))
Explanation: Another method for visualizing data is a MDS-based approach. This is fastest, but does not produce the most clean visualizations.
End of explanation |
2,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Procrustes Analysis
Step1: PCA
Step2: Solving b vector
$$b = \Phi^T \left(x - \bar{x}\right)$$ | Python Code:
import pandas
df = pandas.read_csv('muct76_stasm-output.csv', header=None, usecols=np.arange(2,156), dtype=float)
#df = pandas.read_csv('muct76-opencv.csv', header=0, usecols=np.arange(2,154), dtype=float)
df.head()
X = df.iloc[:, ::2].values
Y = df.iloc[:, 1::2].values
d = np.hstack((X,Y))
d.shape
import sys
threshold = 1.0e-8
def center(vec):
pivot = int(vec.shape[0]/2)
meanx = np.mean(vec[:pivot])
meany = np.mean(vec[pivot:])
return(meanx, meany)
def calnorm(vec):
vsqsum = np.sum(np.square(vec))
return(vsqsum)
def scale(vec):
vcopy = vec.copy()
vmax = np.max(vec)
if vmax > 2.0:
vcopy = vcopy / vmax
vnorm = calnorm(vcopy)
return (vcopy / np.sqrt(vnorm))
def caldiff(pref, pcmp):
return np.mean(np.sum(np.square(pref - pcmp), axis=1))
def simTransform(pref, pcmp, showerror = False):
err_before = np.mean(np.sum(np.square(pref - pcmp), axis=1))
ref_mean = np.mean(pref, axis=0)
prefcentered = np.asmatrix(pref) - np.asmatrix(ref_mean)
cmp_mean = np.mean(pcmp, axis=0)
pcmpcentered = np.asmatrix(pcmp) - np.asmatrix(cmp_mean)
Sxx = np.sum(np.square(pcmpcentered[:,0]))
Syy = np.sum(np.square(pcmpcentered[:,1]))
Sxxr = prefcentered[:,0].T * pcmpcentered[:,0] #(ref_x, x)
Syyr = prefcentered[:,1].T * pcmpcentered[:,1] #(ref_y, y)
Sxyr = prefcentered[:,1].T * pcmpcentered[:,0] #(ref_y, x)
Syxr = prefcentered[:,0].T * pcmpcentered[:,1] #(ref_x, y)
a = (Sxxr + Syyr)/(Sxx + Syy) #(Sxxr + Syyr) / (Sxx + Syy)
b = (Sxyr - Syxr) / (Sxx + Syy)
a = np.asscalar(a)
b = np.asscalar(b)
Rot = np.matrix([[a, -b],[b, a]])
translation = -Rot * np.asmatrix(cmp_mean).T + np.asmatrix(ref_mean).T
outx, outy = [], []
res = Rot * np.asmatrix(pcmp).T + translation
err_after = np.mean(np.sum(np.square(pref - res.T), axis=1))
if showerror:
print("Error before: %.4f after: %.4f\n"%(err_before, err_after))
return (res.T, err_after)
def align2mean(data):
d = data.copy()
pivot = int(d.shape[1]/2)
print("pivot: ", pivot)
for i in range(d.shape[0]):
cx, cy = center(d[i,:])
d[i,:pivot] = d[i,:pivot] - cx
d[i,pivot:] = d[i,pivot:] - cy
#print(cx, cy, center(d[i,:]))
d[i,:] = scale(d[i,:])
norm = calnorm(d[i,:])
d_aligned = d.copy()
pref = np.vstack((d[0,:pivot], d[0,pivot:])).T
print("pref: ", pref.shape)
mean = pref.copy()
mean_diff = 1
while mean_diff > threshold:
err_sum = 0.0
for i in range(1, d.shape[0]):
p = np.vstack((d[i,:pivot], d[i,pivot:])).T
p_aligned, err = simTransform(mean, p)
d_aligned[i,:] = scale(p_aligned.flatten(order='F'))
err_sum += err
oldmean = mean.copy()
mean = np.mean(d_aligned, axis=0)
mean = scale(mean)
mean = np.reshape(mean, newshape=pref.shape, order='F')
d = d_aligned.copy()
mean_diff = caldiff(oldmean, mean)
sys.stdout.write("SumError: %.4f MeanDiff: %.6f\n"%(err_sum, mean_diff))
return (d_aligned, mean)
d_aligned, mean = align2mean(d)
plt.figure(figsize=(7,7))
plt.gca().set_aspect('equal')
plotFaceShapeFromStasm(mean)
alignedfaces = d_aligned
plt.figure()
plt.plot(alignedfaces[2,:76], alignedfaces[2,76:])
plt.show()
Explanation: Procrustes Analysis:
Translate each example such that its centroid is at origin, and scale them so that $|\mathbf{x}|=1$
Choose the first example as the initial estimate of the mean shape ($\bar{\mathbf{x}}$), and save a copy as a reference ($\mathbf{\bar{x}_0}$)
Align all the examples with the current estimate of the mean ($\bar{\mathbf{x}}$)
Re-estimate mean from the aligned shapes
Apply constraints on the current estiate of the mean by aligning it with reference $\mathbf{\bar{x}_0}$ and scaling so that $|\bar{\mathbf{x}}|=1$
Repeat steps 3, 4, 5 until convergence
End of explanation
d_aligned.shape
from sklearn.decomposition import PCA
pca = PCA(n_components=8)
pca.fit(d_aligned)
print(pca.explained_variance_ratio_)
cov_mat = np.cov(d_aligned.T)
print(cov_mat.shape)
eig_values, eig_vectors = np.linalg.eig(cov_mat)
print(eig_values.shape, eig_vectors.shape)
num_eigs = 8
Phi_matrix = eig_vectors[:,:num_eigs]
Phi_matrix.shape
Explanation: PCA
End of explanation
# * ()
mean_matrix = np.reshape(mean, (152,1), 'F')
d_aligned_matrix = np.matrix(d_aligned)
delta = d_aligned_matrix.T - mean_matrix
b = (np.matrix(Phi_matrix).T * delta).T
b.shape
mean.dump('models/meanshape-ocvfmt.pkl')
eig_vectors.dump('models/eigenvectors-ocvfmt.pkl')
eig_values.dump('models/eigenvalues-ocvfmt.pkl')
Phi_matrix.dump('models/phimatrix.pkl')
b.dump('models/bvector.pkl')
d_aligned.dump('models/alignedfaces.pkl')
mean_matrix.dump('models/meanvector.pkl')
Explanation: Solving b vector
$$b = \Phi^T \left(x - \bar{x}\right)$$
End of explanation |
2,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: The CmdStan installation includes a simple example program bernoulli.stan and test data bernoulli.data.json. These are in the CmdStan installation directory examples/bernoulli.
The program bernoulli.stan takes a vector y of length N containing binary outcomes and uses a bernoulli distribution to estimate theta, the chance of success.
Step5: The data file bernoulli.data.json contains 10 observations, split between 2 successes (1) and 8 failures (0).
Step6: The following code test that the CmdStanPy toolchain is properly installed by compiling the example model, fitting it to the data, and obtaining a summary of estimates of the posterior distribution of all parameters and quantities of interest. | Python Code:
# Load packages used in this notebook
import os
import json
import shutil
import urllib.request
import pandas as pd
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/CmdStanPy_Example_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Running STAN (CmdStanPy) MCMC library in Colab Example
Taken from
https://mc-stan.org/users/documentation/case-studies/jupyter_colab_notebooks_2020.html
This notebook demonstrates how to install the CmdStanPy toolchain on a Google Colab instance and verify the installation by running the Stan NUTS-HMC sampler on the example model and data which are included with CmdStan. Each code block in this notebook updates the Python environment, therefore you must step through this notebook cell by cell.
End of explanation
# Install package CmdStanPy
!pip install --upgrade cmdstanpy
Explanation: Step 1: install CmdStanPy
End of explanation
# Install pre-built CmdStan binary
# (faster than compiling from source via install_cmdstan() function)
tgz_file = "colab-cmdstan-2.23.0.tar.gz"
tgz_url = "https://github.com/stan-dev/cmdstan/releases/download/v2.23.0/colab-cmdstan-2.23.0.tar.gz"
if not os.path.exists(tgz_file):
urllib.request.urlretrieve(tgz_url, tgz_file)
shutil.unpack_archive(tgz_file)
!ls
Explanation: Step 2: download and untar the CmdStan binary for Google Colab instances.
End of explanation
# Specify CmdStan location via environment variable
os.environ["CMDSTAN"] = "./cmdstan-2.23.0"
# Check CmdStan path
from cmdstanpy import CmdStanModel, cmdstan_path
cmdstan_path()
Explanation: Step 3: Register the CmdStan install location.
End of explanation
bernoulli_stan = os.path.join(cmdstan_path(), "examples", "bernoulli", "bernoulli.stan")
with open(bernoulli_stan, "r") as fd:
print("\n".join(fd.read().splitlines()))
Explanation: The CmdStan installation includes a simple example program bernoulli.stan and test data bernoulli.data.json. These are in the CmdStan installation directory examples/bernoulli.
The program bernoulli.stan takes a vector y of length N containing binary outcomes and uses a bernoulli distribution to estimate theta, the chance of success.
End of explanation
bernoulli_data = os.path.join(cmdstan_path(), "examples", "bernoulli", "bernoulli.data.json")
with open(bernoulli_data, "r") as fd:
print("\n".join(fd.read().splitlines()))
Explanation: The data file bernoulli.data.json contains 10 observations, split between 2 successes (1) and 8 failures (0).
End of explanation
# Run CmdStanPy Hello, World! example
from cmdstanpy import cmdstan_path, CmdStanModel
# Compile example model bernoulli.stan
bernoulli_model = CmdStanModel(stan_file=bernoulli_stan)
# Condition on example data bernoulli.data.json
bern_fit = bernoulli_model.sample(data=bernoulli_data, seed=123)
# Print a summary of the posterior sample
bern_fit.summary()
Explanation: The following code tests that the CmdStanPy toolchain is properly installed by compiling the example model, fitting it to the data, and obtaining a summary of estimates of the posterior distribution of all parameters and quantities of interest.
End of explanation |
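As an optional extra check (a sketch, not part of the original guide), the shipped bernoulli example uses a uniform beta(1,1) prior, so the posterior for theta has a closed form that the sampled estimate should match. This assumes SciPy is available, which it is on Colab.
# Optional sanity check (sketch): with a beta(1,1) prior and 2 successes out of
# 10 trials, the posterior is Beta(1+2, 1+8); the mean of theta reported by
# bern_fit.summary() should be close to 0.25.
from scipy import stats as sps
posterior = sps.beta(1 + 2, 1 + 8)
print('analytic posterior mean:', posterior.mean())
print('analytic 90% interval:', posterior.ppf([0.05, 0.95]))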
2,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare CDL Data
In this notebook, we prepare data from the USDA 2016 Crop Data Layer (CDL) for Iowa for use in the crop classification notebooks. This involves taking the CDL data for Iowa, projecting, cropping, and resampling it to match the Planet scene. We perform these steps for the crop classification test and train Planet scenes.
The end results will be saved in pre-data for use in those notebooks. The initial data is too large to be managed gracefully in git.
Install Dependencies
Step1: Obtain CDL Data File
In this section, we download the 2016 Iowa CDL.
2016 CDL for Iowa is obtained through the CropScape site. On that site, ensure the layer is 'CropLand Data Layers -> 2016', then click the icon for 'Define Area of Interest by State...' (looks like an outline of the US with the US flag design). Under 'Select a State', select Iowa and click 'Submit.' Next, click the icon for 'Download Defined Area of Interest Data' (looks like a postal letter with an arrow). In the popup, ensure the 'CDL' tab is open and '2016' is selected, then click 'Select.' Another popup should appear with 'Please Wait...' then after a while, the popup should be replaced by a popup that says 'Download files from server'. Click 'Download' and after a bit the download should begin.
Unzip the downloaded folder and move the tif, CDL_2016_19.tif to the data folder.
Step2: Prepare Train CDL Dataset
The first step in preparing the train CDL dataset is downloading the train scene. We also download the metadata and udm because those are used in another notebook.
The train scene is 210879_1558814_2016-07-25_0e16.
Download Scene and Supporting Files
Step3: Crop and Project CDL
We project, resample, and crop the CDL to match the Orthotile and save in train dataset.
Step4: Prepare Test CDL Dataset
Now we prepare the test CDL dataset.
The test scene is 210863_1559015_2016-07-25_0e0f.
Download Scene and Supporting Files
Step5: Crop and Project CDL
We project, resample, and crop the CDL to match the Orthotile and save in test dataset. | Python Code:
import os
import pathlib
from subprocess import check_output, STDOUT, CalledProcessError
import tempfile
import rasterio
Explanation: Prepare CDL Data
In this notebook, we prepare data from the USDA 2016 Crop Data Layer (CDL) for Iowa for use in the crop classification notebooks. This involves taking the CDL data for Iowa, projecting, cropping, and resampling it to match the Planet scene. We perform these steps for the crop classification test and train Planet scenes.
The end results will be saved in pre-data for use in those notebooks. The initial data is too large to be managed gracefully in git.
Install Dependencies
End of explanation
# ensure the tif file is present
cdl_full = os.path.join('data', 'CDL_2016_19.tif')
print(cdl_full)
assert os.path.isfile(cdl_full)
Explanation: Obtain CDL Data File
In this section, we download the 2016 Iowa CDL.
2016 CDL for Iowa is obtained through the CropScape site. On that site, ensure the layer is 'CropLand Data Layers -> 2016', then click the icon for 'Define Area of Interest by State...' (looks like an outline of the US with the US flag design). Under 'Select a State', select Iowa and click 'Submit.' Next, click the icon for 'Download Defined Area of Interest Data' (looks like a postal letter with an arrow). In the popup, ensure the 'CDL' tab is open and '2016' is selected, then click 'Select.' Another popup should appear with 'Please Wait...' then after a while, the popup should be replaced by a popup that says 'Download files from server'. Click 'Download' and after a bit the download should begin.
Unzip the downloaded folder and move the tif, CDL_2016_19.tif to the data folder.
End of explanation
# define and, if necessary, create data directory
train_folder = os.path.join('data', 'cart', '210879_1558814_2016-07-25_0e16')
pathlib.Path(train_folder).mkdir(parents=True, exist_ok=True)
train_scene = os.path.join(train_folder, '210879_1558814_2016-07-25_0e16_BGRN_Analytic.tif')
# First test if scene file exists, if not, use the Planet commandline tool to download the image, metadata, and udm.
# This command assumes a bash shell, available in Unix-based operating systems.
!test -f $train_scene || \
planet data download \
--item-type PSOrthoTile \
--dest $train_folder \
--asset-type analytic,analytic_xml,udm \
--string-in id 210879_1558814_2016-07-25_0e16
Explanation: Prepare Train CDL Dataset
The first step in preparing the train CDL dataset is downloading the train scene. We also download the metadata and udm because those are used in another notebook.
The train scene is 210879_1558814_2016-07-25_0e16.
Download Scene and Supporting Files
End of explanation
# Utility functions: crop, resample, and project an image
# These use gdalwarp. for a description of gdalwarp command line options, see:
# http://www.gdal.org/gdalwarp.html
def gdalwarp_project_options(src_crs, dst_crs):
return ['-s_srs', src_crs.to_string(), '-t_srs', dst_crs.to_string()]
def gdalwarp_crop_options(bounds, crs):
xmin, ymin, xmax, ymax = [str(b) for b in bounds]
# -te xmin ymin xmax ymax
return ['-te', xmin, ymin, xmax, ymax]
def gdalwarp_resample_options(width, height, technique='near'):
# for technique options, see: http://www.gdal.org/gdalwarp.html
return ['-ts', width, height, '-r', technique]
def gdalwarp(input_filename, output_filename, options):
commands = _gdalwarp_commands(input_filename, output_filename, options)
# print error if one is encountered
# https://stackoverflow.com/questions/29580663/save-error-message-of-subprocess-command
try:
output = check_output(commands, stderr=STDOUT)
except CalledProcessError as exc:
print(exc.output)
def _gdalwarp_commands(input_filename, output_filename, options):
commands = ['gdalwarp'] + options + \
['-overwrite',
input_filename,
output_filename]
print(' '.join(commands))
return commands
def _test():
TEST_DST_SCENE = train_scene
TEST_SRC_SCENE = cdl_full
with rasterio.open(TEST_DST_SCENE, 'r') as dst:
with rasterio.open(TEST_SRC_SCENE, 'r') as src:
print(gdalwarp_project_options(src.crs, dst.crs))
print(gdalwarp_crop_options(dst.bounds, dst.crs))
print(gdalwarp_resample_options(dst.width, dst.height))
# _test()
# lossless compression of an image
def _compress(input_filename, output_filename):
commands = ['gdal_translate',
'-co', 'compress=LZW',
'-co', 'predictor=2',
input_filename,
output_filename]
print(' '.join(commands))
# subprocess.check_call(commands)
# print error if one is encountered
# https://stackoverflow.com/questions/29580663/save-error-message-of-subprocess-command
try:
output = check_output(commands, stderr=STDOUT)
except CalledProcessError as exc:
print(exc.output)
def prepare_cdl_image(cdl_filename, dst_filename, out_filename, compress=False, overwrite=True):
'''Project, crop, and resample cdl image to match dst_filename image.'''
with rasterio.open(cdl_filename, 'r') as src:
with rasterio.open(dst_filename, 'r') as dst:
# project
src_crs = _add_nad_datum(src.crs) # Manually add NAD83 datum
proj_options = gdalwarp_project_options(src_crs, dst.crs)
# crop
crop_options = gdalwarp_crop_options(dst.bounds, dst.crs)
# resample
width, height = dst.shape
resample_options = gdalwarp_resample_options(str(width), str(height), 'near')
options = proj_options + crop_options + resample_options
# check to see if output file exists, if it does, do not warp
if os.path.isfile(dst_filename) and not overwrite:
print('{} already exists. Aborting warp of {}'.format(dst_filename, cdl_filename))
elif compress:
with tempfile.NamedTemporaryFile(suffix='.vrt') as vrt_file:
options += ['-of', 'vrt']
gdalwarp(cdl_filename, vrt_file.name, options)
_compress(vrt_file.name, out_filename)
else:
print(options)
gdalwarp(cdl_filename, out_filename, options)
def _add_nad_datum(crs):
'''Rasterio is not reading the datum for the CDL image so add it manually'''
crs.update({'datum': 'NAD83'})
return crs
def _test(delete=True):
TEST_DST_SCENE = train_scene
TEST_SRC_SCENE = cdl_full
with tempfile.NamedTemporaryFile(suffix='.tif', delete=delete, dir='.') as out_file:
# create output
prepare_cdl_image(TEST_SRC_SCENE, TEST_DST_SCENE, out_file.name)
# check output
with rasterio.open(TEST_DST_SCENE, 'r') as dst:
with rasterio.open(out_file.name, 'r') as src:
assert dst.crs == src.crs, '{} != {}'.format(src.crs, dst.crs)
assert dst.bounds == src.bounds
assert dst.shape == src.shape
# _test()
# define and, if necessary, create pre-data directory
predata_folder = 'pre-data'
pathlib.Path(predata_folder).mkdir(parents=True, exist_ok=True)
# create train dataset gold image from CDL image
train_CDL_filename = os.path.join(predata_folder, 'CDL_2016_19_train.tif')
prepare_cdl_image(cdl_full, train_scene, train_CDL_filename, overwrite=False, compress=True)
Explanation: Crop and Project CDL
We project, resample, and crop the CDL to match the Orthotile and save in train dataset.
End of explanation
# define and, if necessary, create data directory
test_folder = os.path.join('data', 'cart', '210863_1559015_2016-07-25_0e0f')
pathlib.Path(test_folder).mkdir(parents=True, exist_ok=True)
test_scene = os.path.join(test_folder, '210863_1559015_2016-07-25_0e0f_BGRN_Analytic.tif')
# First test if scene file exists, if not, use the Planet commandline tool to download the image, metadata, and udm.
# This command assumes a bash shell, available in Unix-based operating systems.
!test -f $test_scene || \
planet data download \
--item-type PSOrthoTile \
--dest $test_folder \
--asset-type analytic,analytic_xml,udm \
--string-in id 210863_1559015_2016-07-25_0e0f
Explanation: Prepare Test CDL Dataset
Now we prepare the test CDL dataset.
The test scene is 210863_1559015_2016-07-25_0e0f.
Download Scene and Supporting Files
End of explanation
# create test dataset gold image from CDL image
test_CDL_filename = os.path.join(predata_folder, 'CDL_2016_19_test.tif')
prepare_cdl_image(cdl_full, test_scene, test_CDL_filename, overwrite=False, compress=True)
Explanation: Crop and Project CDL
We project, resample, and crop the CDL to match the Orthotile and save in test dataset.
End of explanation |
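As a final check (a sketch mirroring the _test() helper defined earlier, assuming the download and preparation steps above have run), we can confirm that the prepared test CDL shares the grid of the test Planet scene.
# Optional check (sketch): the prepared test CDL should match the test scene grid.
with rasterio.open(test_scene, 'r') as dst, rasterio.open(test_CDL_filename, 'r') as src:
    assert dst.crs == src.crs
    assert dst.bounds == src.bounds
    assert dst.shape == src.shape
print('test CDL matches the test scene grid')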
2,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 1
Step1: Read the counts table
This is bacterial mRNA-Seq with samples at 4 different temperatures with or without the addition of BCM. Each condition is sequenced in triplicate, and we are interested in 5'UTR transcription levels.
Step2: Normalize to UTR length
Step3: Notation
X
Step4: Principal Component Analysis (PCA)
Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD.
The resulting components are ranked by the amount of variance explained in the data.
Step5: Aside
Step6: Example 2
Step7: Expression matrix contains read counts in genes. Columns are worms, rows are genes.
Step8: PCA is sensitive to variable scaling. Therefore before performing the analysis we need to normalize the data. StandardScaler will transform every variable to unit space (mean 0, variance 1). Note also that sklearn expects columns to be genes (features) and rows to be worms (samples, or observations). Therefore we transpose the matrix before doing anything.
Step9: Y_sklearn is a numpy array of the shape (num_samples, n_components) where original X data is projected onto the number of extracted principal components
Plot explained variance | Python Code:
def parse_barcodes(bcfile, bc_id='BC'):
res = {}
with open(bcfile, 'r') as fi:
for line in fi:
fields = line.strip().split(',')
if fields[0].startswith(bc_id):
res[fields[0]] = fields[1]
return res
def parse_exp_config(expfile, bc_dict):
res = []
fieldnames = ['id', 'sample', 'cond', 'barcode', 'size', 'region', 'Qbit', 'conc', 'dilution']
with open(expfile) as fi:
reader = csv.DictReader(fi, fieldnames=fieldnames)
for rec in reader:
if rec['id']:
res.append({
'sample': rec['sample'],
'bc_id': rec['barcode'],
'bc_seq': bc_dict[rec['barcode']],
'temp': int(rec['cond'][:2]),
'bcm': '+' in rec['cond'],
})
return pd.DataFrame.from_records(res)
Explanation: Example 1: E.coli 5'UTRs
Utility functions to parse experiment metadata
End of explanation
bc_dict = parse_barcodes('../../data/Lexogen_Sense_RNA-Seq.csv')
exp_df = parse_exp_config('../../data/2017-03-09_NextSeq.csv', bc_dict)
agg_utr = pd.read_csv('../../data/utr.counts.csv')
agg_utr
exp_df
Explanation: Read the counts table
This is bacterial mRNA-Seq with samples at 4 different temperatures with or without the addition of BCM. Each condition is sequenced in triplicate, and we are interested in 5'UTR transcription levels.
End of explanation
def normalize(df, edf, columns=None):
'''
Prepares the UTR dataframe (`df`) for log transformation.
Adds experiment metadata from `edf`.
Adds pseudocounts to `utr_counts` and `UTR_length`.
Normalizes counts to UTR length.
'''
def pseudo_counts(x):
return x + 1 if x == 0 else x
df = df.merge(edf, how='left', on='sample')
if columns is not None:
df = df[columns]
# Add pseudocounts to allow log transform later
df['utr_counts'] = df['utr_counts'].apply(pseudo_counts)
df['UTR_length'] = df['UTR_length'].apply(pseudo_counts)
df['utr_norm'] = df['utr_counts'] / df['UTR_length']
return df
columns = ['gene', 'TSS', 'start', 'end', 'UTR_length',
'utr_counts', 'sample', 'bcm', 'temp']
utr = normalize(agg_utr, exp_df, columns)
utr
Explanation: Normalize to UTR length
End of explanation
# build expression matrix
X = pd.DataFrame()
samples = []
for sample in set(utr['sample']):
mask = (utr['sample']==sample) & (utr['bcm']==False)
if not utr[mask].empty:
X[sample] = utr[mask]['utr_norm'].values
samples.append(sample)
# Same as .fit() and then .transform()
X_std = StandardScaler().fit_transform(X.values.T)
X_std
scaler = StandardScaler()
dir(scaler)
Explanation: Notation
X : data (features)
Y : predicted value
-BCM samples
From scikit-learn docs:
Standardize features by removing the mean and scaling to unit variance
The standard score of a sample x is calculated as:
z = (x - u) / s
where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False.
End of explanation
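A quick way to see what StandardScaler did (a sketch added for illustration, assuming no zero-variance rows in the expression matrix) is to redo the standardization by hand with numpy and confirm each column now has mean 0 and standard deviation 1.
# Manual check (sketch): (x - mean) / std per column should reproduce X_std.
import numpy as np
A = X.values.T                      # samples in rows, UTRs in columns
manual = (A - A.mean(axis=0)) / A.std(axis=0)   # assumes no zero-variance columns
print(np.allclose(manual, X_std))
print(X_std.mean(axis=0).round(6)[:5], X_std.std(axis=0).round(6)[:5])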
sklearn_pca = sklearnPCA(n_components=10)
Y = sklearn_pca.fit_transform(X_std)
print(Y)
#print(sklearn_pca.explained_variance_)
#print(sklearn_pca.explained_variance_ratio_)
sklearn_pca.explained_variance_
sklearn_pca.explained_variance_ratio_
Explanation: Principal Component Analysis (PCA)
Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD.
The resulting components are ranked by the amount of variance explained in the data.
End of explanation
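The explained variance ratios can also be recovered by hand (a sketch for intuition): for centered data the squared singular values from the SVD are proportional to the variance captured by each component.
# Manual check (sketch): squared singular values normalised to 1 should match
# sklearn_pca.explained_variance_ratio_.
import numpy as np
Xc = X_std - X_std.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
print((S**2 / (S**2).sum())[:10])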
vdf = pd.DataFrame()
vdf['PC'] = [(i+1) for i,x in enumerate(sklearn_pca.explained_variance_ratio_)]
vdf['var'] = sklearn_pca.explained_variance_ratio_
(ggplot(vdf, aes(x='PC', y='var'))
+ geom_point(size=5, alpha=0.3)
+ ylab('Explained variance')
+ ggtitle('Unfiltered -BCM')
)
pca_df = pd.DataFrame()
pca_df['cond'] = ['%doC' % exp_df[exp_df['sample']==sample]['temp'] for sample in samples]
pca_df['PC1'] = Y[:,0]
pca_df['PC2'] = Y[:,1]
pca_df
(ggplot(pca_df, aes(x='PC1', y='PC2', color='cond'))
+ geom_point(size=5, alpha=0.3)
+ ggtitle('Unfiltered -BCM')
)
Explanation: Aside: ggplot terminology
from: https://beanumber.github.io/sds192/lab-ggplot2.html
Geometric Objects (geom)
Geometric objects or geoms are the actual marks we put on a plot. Examples include:
points (geom_point, for scatter plots, dot plots, etc)
lines (geom_line, for time series, trend lines, etc)
boxplot (geom_boxplot, for, well, boxplots!)
... and many more!
A plot should have at least one geom, but there is no upper limit. You can add a geom to a plot using the + operator.
Aesthetic Mapping (aes)
In ggplot2, aesthetic means "something you can see". Each aesthetic is a mapping between a visual cue and a variable. Examples include:
position (i.e., on the x and y axes)
color ("outside" color)
fill ("inside" color)
shape (of points)
line type
size
Each type of geom accepts only a subset of all aesthetics; refer to the geom help pages to see what mappings each geom accepts. Aesthetic mappings are set with the aes() function.
End of explanation
!head ../../data/CE_exp.umi.tab
!tail ../../data/CE_exp.umi.tab
Explanation: Example 2: Single worm RNA-Seq
Read in expression matrix
mRNA-Seq from 10 individual C.elegans worms. Processed with CEL-Seq-pipeline (https://github.com/eco32i/CEL-Seq-pipeline)
End of explanation
ce = pd.read_csv('../../data/CE_exp.umi.tab', sep='\t', skipfooter=5, engine='python')
ce
Explanation: Expression matrix contains read counts in genes. Columns are worms, rows are genes.
End of explanation
X_std = StandardScaler().fit_transform(ce.iloc[:,1:].values.T)
X_std
sklearn_pca = sklearnPCA(n_components=10)
Y_sklearn = sklearn_pca.fit_transform(X_std)
Y_sklearn
Explanation: PCA is sensitive to variable scaling. Therefore before performing the analysis we need to normalize the data. StandardScaler will transform every variable to unit space (mean 0, variance 1). Note also that sklearn expects columns to be genes (features) and rows to be worms (samples, or observations). Therefore we transpose the matrix before doing anything.
End of explanation
sklearn_pca.explained_variance_
sklearn_pca.explained_variance_ratio_
vdf = pd.DataFrame()
vdf['PC'] = [(i+1) for i,x in enumerate(sklearn_pca.explained_variance_ratio_)]
vdf['var'] = sklearn_pca.explained_variance_ratio_
(ggplot(vdf, aes(x='PC', y='var'))
+ geom_point(size=5, alpha=0.4)
+ ylab('Explained variance')
+ theme(figure_size=(12,10))
)
pca_df = pd.DataFrame()
pca_df['sample'] = ['CE_%i' % (x+1) for x in range(10)]
pca_df['PC1'] = Y_sklearn[:,0]
pca_df['PC2'] = Y_sklearn[:,1]
(ggplot(pca_df, aes(x='PC1', y='PC2', color='sample'))
+ geom_point(size=5, alpha=0.5)
+ theme(figure_size=(12,10))
)
pca_df = pd.DataFrame()
pca_df['sample'] = ['CE_%i' % (x+1) for x in range(10)]
pca_df['PC1'] = Y_sklearn[:,0]
pca_df['PC3'] = Y_sklearn[:,2]
(ggplot(pca_df, aes(x='PC1', y='PC3', color='sample'))
+ geom_point(size=5, alpha=0.5)
+ theme(figure_size=(12,10))
)
pca_df = pd.DataFrame()
pca_df['sample'] = ['CE_%i' % (x+1) for x in range(10)]
pca_df['PCA2'] = Y_sklearn[:,1]
pca_df['PCA4'] = Y_sklearn[:,3]
(ggplot(pca_df, aes(x='PCA2', y='PCA4', color='sample'))
+ geom_point(size=5, alpha=0.5)
+ theme(figure_size=(12,10))
)
Explanation: Y_sklearn is a numpy array of the shape (num_samples, n_components) where original X data is projected onto the number of extracted principal components
Plot explained variance
End of explanation |
2,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Evaluation
How to measure a model? How do we find out whether the model is doing well or just producing useless predictions?
This job is done by metrics. There are a bunch of metrics explained in scikit-learn documentation about model evaluation and we are going to see some of them.
Train & Test Data
<img src="images/metrics/train-test.jpg" width="500" />
Step1: 1. Classification Metrics
scikit-learn documentation
Accuracy
Accuracy is the fraction of the correct predictions
Step2: Confusion Matrix
For multiclass classification
Step3: Recall, Precision & F-Score
$$ recall = \frac{tp}{tp + fn} $$
$$ precision = \frac{tp}{tp + fp} $$
$$ F_1 = 2 \cdot \frac{precision \cdot recall}{precision + recall} $$
Step4: Cross Entropy
$$ l(y, \hat{y}) = - \frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)})\log(1 - \hat{y}^{(i)}) \right] $$
Step5: 2. Regression Metrics
scikit-learn documentation
Mean Absolute Error
$$ MAE(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m \mid y^{(i)} - \hat{y}^{(i)} \mid $$
Step6: Mean Squared Error
$$ MSE(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m (y^{(i)} - \hat{y}^{(i)})^2 $$ | Python Code:
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
print('X.shape =', X.shape)
print('y.shape =', y.shape)
print()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True)
print('X_train.shape =', X_train.shape)
print('y_train.shape =', y_train.shape)
print('X_test.shape =', X_test.shape)
print('y_test.shape =', y_test.shape)
Explanation: Model Evaluation
How to measure a model? How do we find out whether the model is doing well or just producing useless predictions?
This job is done by metrics. There are a bunch of metrics explained in scikit-learn documentation about model evaluation and we are going to see some of them.
Train & Test Data
<img src="images/metrics/train-test.jpg" width="500" />
End of explanation
import numpy as np
from sklearn.metrics import accuracy_score
y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]
accuracy_score(y_true, y_pred)
Explanation: 1. Classification Metrics
scikit-learn documentation
Accuracy
Accuracy is the fraction of the correct predictions:
$$ accuracy(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m 1(y^{(i)} = \hat{y}^{(i)}) $$
where $ 1(x) $ is the indicator function and $ m $ is the number of samples.
End of explanation
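The formula above is easy to verify by hand (a sketch added for illustration): accuracy is just the mean of the indicator that prediction equals truth.
# Same value by hand (sketch); should equal accuracy_score above.
import numpy as np
print(np.mean(np.array(y_true) == np.array(y_pred)))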
import numpy as np
from sklearn.metrics import confusion_matrix
y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
np.array([[tn, fp],
[fn, tp]])
Explanation: Confusion Matrix
For multiclass classification:
<img src="images/metrics/confusion-matrix.png" width="500" />
and for binary classification:
End of explanation
from sklearn.metrics import precision_score, recall_score, f1_score
y_pred = [0, 1, 0, 0]
y_true = [0, 1, 0, 1]
print('[[tn fn]\n [fp, tp]]')
print(confusion_matrix(y_true, y_pred))
print()
print('Recall =', recall_score(y_true, y_pred))
print('Precision =', precision_score(y_true, y_pred))
print('F1 =', f1_score(y_true, y_pred))
from sklearn.metrics import classification_report
y_true = [0, 1, 2, 2, 0]
y_pred = [0, 0, 2, 1, 0]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
Explanation: Recall, Precision & F-Score
$$ recall = \frac{tp}{tp + fn} $$
$$ precision = \frac{tp}{tp + fp} $$
$$ F_1 = 2 \cdot \frac{precision \cdot recall}{precision + recall} $$
End of explanation
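The three formulas can be computed directly from the confusion matrix (a sketch, reusing the binary example y_true = [0, 1, 0, 1], y_pred = [0, 1, 0, 0] from above).
# Same values by hand (sketch); should match the sklearn scores printed above.
tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [0, 1, 0, 0]).ravel()
recall = tp / (tp + fn)
precision = tp / (tp + fp)
f1 = 2 * precision * recall / (precision + recall)
print(recall, precision, f1)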
from sklearn.metrics import log_loss
y_true = [0, 0, 1, 1]
y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
log_loss(y_true, y_pred)
Explanation: Cross Entropy
$$ l(y, \hat{y}) = - \frac{1}{m} \sum_{i=1}^m \left[ y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)})\log(1 - \hat{y}^{(i)}) \right] $$
End of explanation
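The same number falls out of the formula directly (a sketch for illustration): average the negative log probability the model assigned to the true class, using the class-1 probabilities from y_pred above.
# Same value by hand (sketch); should match log_loss above.
import numpy as np
y_t = np.array([0, 0, 1, 1])
p1 = np.array([.1, .2, .7, .99])   # predicted probability of class 1, from y_pred above
print(-np.mean(y_t * np.log(p1) + (1 - y_t) * np.log(1 - p1)))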
from sklearn.metrics import mean_absolute_error
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_absolute_error(y_true, y_pred)
Explanation: 2. Regression Metrics
scikit-learn documentation
Mean Absolute Error
$$ MAE(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m \mid y^{(i)} - \hat{y}^{(i)} \mid $$
End of explanation
from sklearn.metrics import mean_squared_error
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_squared_error(y_true, y_pred)
Explanation: Mean Squared Error
$$ MSE(y, \hat{y}) = \frac{1}{m} \sum_{i=1}^m (y^{(i)} - \hat{y}^{(i)})^2 $$
End of explanation |
2,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Get the Data
We'll work with the Ecommerce Customers csv file from the company. It has Customer info, such as Email, Address, and their color Avatar. Then it also has numerical value columns
Step2: Check the head of customers, and check out its info() and describe() methods.
Step3: Exploratory Data Analysis
Let's explore the data!
For the rest of the exercise we'll only be using the numerical data of the csv file.
Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?
Step4: Do the same but with the Time on App column instead.
Step5: Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.
Step6: Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below. (Don't worry about the colors)
Step7: Based off this plot what looks to be the most correlated feature with Yearly Amount Spent?
Step8: Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.
Step9: Training and Testing Data
Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets.
Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column.
Step10: Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101
Step11: Training the Model
Now it's time to train our model on our training data!
Import LinearRegression from sklearn.linear_model
Step12: Create an instance of a LinearRegression() model named lm.
Step13: Train/fit lm on the training data.
Step14: Print out the coefficients of the model
Step15: Predicting Test Data
Now that we have fit our model, let's evaluate its performance by predicting off the test values!
Use lm.predict() to predict off the X_test set of the data.
Step16: Create a scatterplot of the real test values versus the predicted values.
Step17: Evaluating the Model
Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).
Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas
Step18: Residuals
You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data.
Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().
Step19: Conclusion
We still want to figure out the answer to the original question, do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Membership Time is what is really important. Let's see if we can interpret the coefficients at all to get an idea.
Recreate the dataframe below. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Linear Regression - Project Exercise
Congratulations! You just got some contract work with an Ecommerce company based in New York City that sells clothing online but they also have in-store style and clothing advice sessions. Customers come in to the store, have sessions/meetings with a personal stylist, then they can go home and order either on a mobile app or website for the clothes they want.
The company is trying to decide whether to focus their efforts on their mobile app experience or their website. They've hired you on contract to help them figure it out! Let's get started!
Just follow the steps below to analyze the customer data (it's fake, don't worry I didn't give you real credit card numbers or emails).
Imports
Import pandas, numpy, matplotlib,and seaborn. Then set %matplotlib inline
(You'll import sklearn as you need it.)
End of explanation
customers = pd.read_csv("Ecommerce Customers")
Explanation: Get the Data
We'll work with the Ecommerce Customers csv file from the company. It has Customer info, such as Email, Address, and their color Avatar. Then it also has numerical value columns:
Avg. Session Length: Average session of in-store style advice sessions.
Time on App: Average time spent on App in minutes
Time on Website: Average time spent on Website in minutes
Length of Membership: How many years the customer has been a member.
Read in the Ecommerce Customers csv file as a DataFrame called customers.
End of explanation
customers.head()
customers.describe()
customers.info()
Explanation: Check the head of customers, and check out its info() and describe() methods.
End of explanation
sns.set_palette("GnBu_d")
sns.set_style('whitegrid')
# More time on site, more money spent.
sns.jointplot(x='Time on Website',y='Yearly Amount Spent',data=customers)
Explanation: Exploratory Data Analysis
Let's explore the data!
For the rest of the exercise we'll only be using the numerical data of the csv file.
Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?
End of explanation
sns.jointplot(x='Time on App',y='Yearly Amount Spent',data=customers)
Explanation: Do the same but with the Time on App column instead.
End of explanation
sns.jointplot(x='Time on App',y='Length of Membership',kind='hex',data=customers)
Explanation: Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.
End of explanation
sns.pairplot(customers)
Explanation: Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below. (Don't worry about the colors)
End of explanation
# Length of Membership
Explanation: Based off this plot what looks to be the most correlated feature with Yearly Amount Spent?
End of explanation
sns.lmplot(x='Length of Membership',y='Yearly Amount Spent',data=customers)
Explanation: Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.
End of explanation
y = customers['Yearly Amount Spent']
X = customers[['Avg. Session Length', 'Time on App','Time on Website', 'Length of Membership']]
Explanation: Training and Testing Data
Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets.
Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
Explanation: Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101
End of explanation
from sklearn.linear_model import LinearRegression
Explanation: Training the Model
Now it's time to train our model on our training data!
Import LinearRegression from sklearn.linear_model
End of explanation
lm = LinearRegression()
Explanation: Create an instance of a LinearRegression() model named lm.
End of explanation
lm.fit(X_train,y_train)
Explanation: Train/fit lm on the training data.
End of explanation
# The coefficients
print('Coefficients: \n', lm.coef_)
Explanation: Print out the coefficients of the model
End of explanation
predictions = lm.predict( X_test)
Explanation: Predicting Test Data
Now that we have fit our model, let's evaluate its performance by predicting off the test values!
Use lm.predict() to predict off the X_test set of the data.
End of explanation
plt.scatter(y_test,predictions)
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
Explanation: Create a scatterplot of the real test values versus the predicted values.
End of explanation
# calculate these metrics by hand!
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
Explanation: Evaluating the Model
Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).
Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas
End of explanation
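The "by hand" versions hinted at in the comment above are only a few lines of numpy (a sketch; they should agree with the sklearn values just printed).
# MAE, MSE, and RMSE computed directly from the definitions (sketch).
errors = y_test - predictions
print('MAE :', np.mean(np.abs(errors)))
print('MSE :', np.mean(errors**2))
print('RMSE:', np.sqrt(np.mean(errors**2)))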
sns.distplot((y_test-predictions),bins=50);
Explanation: Residuals
You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data.
Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().
End of explanation
coefficients = pd.DataFrame(lm.coef_,X.columns)
coefficients.columns = ['Coefficient']
coefficients
Explanation: Conclusion
We still want to figure out the answer to the original question, do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Membership Time is what is really important. Let's see if we can interpret the coefficients at all to get an idea.
Recreate the dataframe below.
End of explanation |
2,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VARMAX models
This is a brief introduction notebook to VARMAX models in statsmodels. The VARMAX model is generically specified as
Step1: Model specification
The VARMAX class in statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).
Example 1
Step2: From the estimated VAR model, we can plot the impulse response functions of the endogenous variables.
Step3: Example 2
Step4: Caution | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
dta = sm.datasets.webuse('lutkepohl2', 'https://www.stata-press.com/data/r12/')
dta.index = dta.qtr
dta.index.freq = dta.index.inferred_freq
endog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]
Explanation: VARMAX models
This is a brief introduction notebook to VARMAX models in statsmodels. The VARMAX model is generically specified as:
$$
y_t = \nu + A_1 y_{t-1} + \dots + A_p y_{t-p} + B x_t + \epsilon_t +
M_1 \epsilon_{t-1} + \dots + M_q \epsilon_{t-q}
$$
where $y_t$ is a $\text{k_endog} \times 1$ vector.
End of explanation
exog = endog['dln_consump']
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='n', exog=exog)
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
Explanation: Model specification
The VARMAX class in statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).
Example 1: VAR
Below is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.
End of explanation
ax = res.impulse_responses(10, orthogonalized=True, impulse=[1, 0]).plot(figsize=(13,3))
ax.set(xlabel='t', title='Responses to a shock to `dln_inv`');
Explanation: From the estimated VAR model, we can plot the impulse response functions of the endogenous variables.
End of explanation
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
Explanation: Example 2: VMA
A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.
End of explanation
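With the VMA model fitted, one common next step is forecasting (a sketch added for illustration); since this specification has no exogenous regressors, no future exog values are needed.
# Out-of-sample forecasts for the next four quarters from the fitted VMA(2) model (sketch).
print(res.forecast(steps=4))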
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
Explanation: Caution: VARMA(p,q) specifications
Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with caution (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.
End of explanation |
2,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Statistics in Python
https
Step1: Hypothesis Testing
Step2: 1-sample t-test
Step3: 2-sample t-test
Step4: Paired tests
Step5: However this doesn't account for individual differences contributing to variance in data.
We can use a paired t-test or repeated measures test to account for these individual differences.
Step6: This is actually equivalent to doing a 1-sample t-test on the difference of the two measures.
Step7: These tests assume normality in the data. A non-parametric alternative is the Wilcoxon signed-rank test
Step8: Note
Step9: Categorical variables
Step10: Link to t-tests between different FSIQ and PIQ
To compare different types of IQ, we need to create a "long-form" table,
listing IQs, where the type of IQ is indicated by a categorical variable
Step11: Note that we now get the same results as doing the independent t-tests for each pairing.
Step12: Multiple Regression
Step13: Sepal and petal size tend to be related
Step14: Analysis of Variance (ANOVA)
1-Way Anova | Python Code:
#import pandas and use magic function
import pandas as pd
%matplotlib inline
# import our data using pandas read_csv() function where delimiter = ';', index_col = 0, na_values = '.'
data = pd.read_csv('https://www.scipy-lectures.org/_downloads/brain_size.csv',
delimiter=';', index_col=0, na_values='.')
# check out our data using pandas df.head() function
data.head()
# how many observations do we have? use pandas df.shape attribute
data.shape
# check out one column with df['column name'] or df.column_nae
data.Gender.head()
# make a groupby object on the dataframe
groupby_gender = data.groupby('Gender')
# take the mean of the groupby object across all measures using the .mean() method
groupby_gender.mean()
# take a look at our data distributions and pair-wise correlations
from pandas import plotting
plotting.scatter_matrix(data[['Weight', 'Height', 'MRI_Count']]);
plotting.scatter_matrix(data[['PIQ', 'VIQ', 'FSIQ']]);
## Distributions and
sns.kdeplot(data['FSIQ'])
sns.kdeplot(data['PIQ'])
sns.kdeplot(data['VIQ'])
sns.kdeplot(data['FSIQ'],cumulative=True)
sns.kdeplot(data['PIQ'],cumulative=True)
sns.kdeplot(data['VIQ'],cumulative=True)
Explanation: Introduction to Statistics in Python
https://www.scipy-lectures.org/packages/statistics/index.html
Importing and Exploring our Data
End of explanation
from scipy import stats
Explanation: Hypothesis Testing
End of explanation
# run a 1-sample t-test
stats.ttest_1samp(data['VIQ'], 0)
Explanation: 1-sample t-test: Testing the value of a population mean.
scipy.stats.ttest_1samp() tests if the population mean of data is likely to
be equal to a given value (technically, if observations are drawn from a Gaussian
distribution with the given population mean). It returns the T statistic, and the
p-value (see the function's help)
End of explanation
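The t statistic is simple to reproduce by hand (a sketch for intuition): the sample mean minus the hypothesised mean, divided by the standard error.
# Same statistic by hand (sketch); should match ttest_1samp above.
import numpy as np
viq = data['VIQ']
t_manual = (viq.mean() - 0) / (viq.std(ddof=1) / np.sqrt(len(viq)))
print(t_manual)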
female_viq = data[data['Gender'] == 'Female']['VIQ']
male_viq = data[data['Gender'] == 'Male']['VIQ']
stats.ttest_ind(female_viq, male_viq)
Explanation: 2-sample t-test: testing for difference across populations
We have seen above that the mean VIQ in the male and female populations were different.
To test if this is significant, we do a 2-sample t-test with scipy.stats.ttest_ind()
End of explanation
stats.ttest_ind(data['FSIQ'], data['PIQ'])
Explanation: Paired tests: repeated measurements on the same individuals
The PIQ, VIQ and FSIQ are three different measures of IQ in the same individual.
We can first look if FSIQ and PIQ are different using the 2-sample t-test.
End of explanation
stats.ttest_rel(data['FSIQ'], data['PIQ'])
Explanation: However this doesn't account for individual differences contributing to variance in data.
We can use a paired t-test or repeated measures test to account for these individual differences.
End of explanation
stats.ttest_1samp(data['FSIQ'] - data['PIQ'], 0)
Explanation: This is actually equivalent to doing a 1-sample t-test on the difference of the two measures.
End of explanation
stats.wilcoxon(data['FSIQ'], data['PIQ'])
Explanation: These tests assume normality in the data. A non-parametric alternative is the Wilcoxon signed-rank test
End of explanation
# Let's simulate some data according to the model
import numpy as np
x = np.linspace(-5, 5, 20)
np.random.seed(1)
# normal distributed noise
y = -5 + 3*x + 4 * np.random.normal(size=x.shape)
# Create a data frame containing all the relevant variables
sim_data = pd.DataFrame({'x': x, 'y': y})
# Specify an OLS model and fit it
from statsmodels.formula.api import ols
model = ols("y ~ x", sim_data).fit()
# Inspect the results of the model fit
print(model.summary())
# Retrieve the model params, note tab completion
model.params
Explanation: Note:
The corresponding test in the non-paired case is the Mann-Whitney U test, scipy.stats.mannwhitneyu().
Linear models, multiple factors, and analysis of variance
Given two sets of observations, x and y, we want to test the hypothesis that y is a linear function of x.
We will use the statsmodels module to:
Fit a linear model. We will use the simplest strategy, ordinary least squares (OLS).
Test that coef is non zero.
End of explanation
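As a cross-check on the OLS fit above (a sketch for illustration), a plain least-squares line fit with numpy should recover essentially the same slope and intercept.
# Compare with model.params from the statsmodels fit (sketch).
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)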
data.head()
# We can write a comparison between IQ of male and female using a linear model:
model = ols("VIQ ~ Gender", data).fit()
# ols automatically detects Gender as categorical
# model = ols('VIQ ~ C(Gender)', data).fit()
print(model.summary())
Explanation: Categorical variables: comparing groups or multiple categories
End of explanation
iq_melt = pd.melt(data, value_vars=['FSIQ', 'PIQ'], value_name='iq', var_name="type")
iq_melt.head()
model = ols("iq ~ type", iq_melt).fit()
print(model.summary())
Explanation: Link to t-tests between different FSIQ and PIQ
To compare different types of IQ, we need to create a "long-form" table,
listing IQs, where the type of IQ is indicated by a categorical variable:
End of explanation
stats.ttest_ind(data['VIQ'], data['PIQ'])
Explanation: Note that we now get the same results as doing the independent t-tests for each pairing.
End of explanation
iris = pd.read_csv('https://www.scipy-lectures.org/_downloads/iris.csv')
import seaborn as sns
sns.pairplot(iris, hue='name');
Explanation: Multiple Regression: including multiple factors
End of explanation
model = ols('sepal_width ~ name + petal_length', iris).fit()
print(model.summary())
Explanation: Sepal and petal size tend to be related: bigger flowers are bigger!
But is there in addition a systematic effect of species?
End of explanation
data.head()
sns.boxplot('name', "sepal_length", data=iris)
setosa = iris[iris['name'] == 'setosa']['sepal_length']
versicolor = iris[iris['name'] == 'versicolor']['sepal_length']
virginica = iris[iris['name'] == 'virginica']['sepal_length']
f_value, p_value = stats.f_oneway(setosa, versicolor, virginica)
print(f_value, p_value)
data.head()
data = data.dropna(axis=0, how='any')
data.isna().sum()
import statsmodels.api as sm
from statsmodels.formula.api import ols
mod = ols('sepal_length ~ name',
data=iris).fit();
aov_table = sm.stats.anova_lm(mod, typ=2);
print(aov_table);
Explanation: Analysis of Variance (ANOVA)
1-Way Anova
End of explanation |
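After a significant one-way ANOVA, a common follow-up (a sketch added here for illustration) is a post-hoc test such as Tukey's HSD, which indicates which pairs of species differ in sepal length.
# Pairwise Tukey HSD comparisons of sepal length across species (sketch).
from statsmodels.stats.multicomp import pairwise_tukeyhsd
print(pairwise_tukeyhsd(iris['sepal_length'], iris['name']))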
2,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Environment Preparation
Install Java 8
Run the cell on the Google Colab to install jdk 1.8.
Note
Step2: Install BigDL Orca
You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca.
Step3: Distributed TensorFlow 2 using Orca APIs
In this guide we will describe how to scale out TensorFlow 2 programs using Orca in 4 simple steps.
Step4: Step 1
Step5: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note
Step6: Step 3
Step7: Step 4
Step8: Next, fit the Estimator.
Step9: Finally, evaluate using the Estimator.
Step10: Now, the accuracy of this model has reached 98%. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
Explanation: <a href="https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/orca/colab-notebook/quickstart/tf2_keras_lenet_mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2016 The BigDL Authors.
End of explanation
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
Explanation: Environment Preparation
Install Java 8
Run the cell on the Google Colab to install jdk 1.8.
Note: if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer).
End of explanation
# Install latest pre-release version of BigDL Orca
# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-orca[ray]
Explanation: Install BigDL Orca
You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca.
End of explanation
# import necessary libraries and modules
import argparse
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca import OrcaContext
Explanation: Distributed TensorFlow 2 using Orca APIs
In this guide we will describe how to scale out TensorFlow 2 programs using Orca in 4 simple steps.
End of explanation
# recommended to set it to True when running BigDL in Jupyter notebook
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cluster_mode="local", cores=1) # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=2) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(cluster_mode="yarn-client", num_nodes=2, cores=2) # run on Hadoop YARN cluster
Explanation: Step 1: Init Orca Context
End of explanation
def model_creator(config):
import tensorflow as tf
model = tf.keras.Sequential(
[tf.keras.layers.Conv2D(20, kernel_size=(5, 5), strides=(1, 1), activation='tanh',
input_shape=(28, 28, 1), padding='valid'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'),
tf.keras.layers.Conv2D(50, kernel_size=(5, 5), strides=(1, 1), activation='tanh',
padding='valid'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(500, activation='tanh'),
tf.keras.layers.Dense(10, activation='softmax'),
]
)
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
Explanation: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster.
Step 2: Define the Model
You can then define the Keras model in the Creator Function using the standard TensorFlow 2 APIs.
End of explanation
def preprocess(x, y):
import tensorflow as tf
x = tf.cast(tf.reshape(x, (28, 28, 1)), dtype=tf.float32) / 255.0
return x, y
def train_data_creator(config, batch_size):
import tensorflow as tf
(train_feature, train_label), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices((train_feature, train_label))
dataset = dataset.repeat()
dataset = dataset.map(preprocess)
dataset = dataset.shuffle(1000)
dataset = dataset.batch(batch_size)
return dataset
def val_data_creator(config, batch_size):
import tensorflow as tf
_, (val_feature, val_label) = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices((val_feature, val_label))
dataset = dataset.repeat()
dataset = dataset.map(preprocess)
dataset = dataset.batch(batch_size)
return dataset
Explanation: Step 3: Define Train Dataset
You can define the dataset in the Creator Function using standard tf.data.Dataset APIs.
End of explanation
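Before handing the creator function to the Orca Estimator, it can be handy to call it directly and inspect one batch (a sketch; this assumes TensorFlow is available locally, and the empty dict simply stands in for the config argument).
# Local sanity check of the data creator (sketch): expect shapes (32, 28, 28, 1) and (32,).
sample_ds = train_data_creator({}, 32)
images, labels = next(iter(sample_ds))
print(images.shape, labels.shape)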
from bigdl.orca.learn.tf2 import Estimator
batch_size = 320
est = Estimator.from_keras(model_creator=model_creator, workers_per_node=1)
Explanation: Step 4: Fit with Orca Estimator
First, create an Estimator.
End of explanation
max_epoch=1
stats = est.fit(train_data_creator,
epochs=max_epoch,
batch_size=batch_size,
steps_per_epoch=60000 // batch_size,
validation_data=val_data_creator,
validation_steps=10000 // batch_size)
est.save("/tmp/mnist_keras.ckpt")
Explanation: Next, fit the Estimator.
End of explanation
stats = est.evaluate(val_data_creator, num_steps=10000 // batch_size)
est.shutdown()
print(stats)
Explanation: Finally, evaluate using the Estimator.
End of explanation
# Stop orca context when your program finishes
stop_orca_context()
Explanation: Now, the accuracy of this model has reached 98%.
End of explanation |
2,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 7
Step1: Today's lab reviews Maximum Likelihood Estimation, and introduces interactive plotting in the jupyter notebook.
Part 1
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step5: Part 2
Step6: What's the log-likelihood? As before, don't just use np.log(btype_likelihood).
Step7: Question 6
Step8: Now, complete the plot_btype_likelihood_3d function.
Step9: Question 7
Step10: We also can make some 2d color plots, to get a better view of exactly where our values are maximized. As in the 3d plots, redder colors refer to higher likelihoods.
Step11: As with the binomial, the likelihood has a "sharper" distribution than the log-likelihood. So, plotting the likelihood, we can see our maximal point with greater clarity.
Step12: Question 8
Step13: Submitting your assignment
If you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab. | Python Code:
# Run this cell to set up the notebook.
import numpy as np
import pandas as pd
import seaborn as sns
import scipy as sci
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import patches, cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from mpl_toolkits.mplot3d import Axes3D
from client.api.notebook import Notebook
ok = Notebook('lab07.ok')
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
Explanation: Lab 7: Maximum Likelihood Estimation
End of explanation
from scipy.special import factorial  # so you don't have to look it up (scipy.misc.factorial was removed in newer SciPy)
def likelihood(n, p, x):
...
Explanation: Today's lab reviews Maximum Likelihood Estimation, and introduces interactive plotting in the jupyter notebook.
Part 1: Likelihood of the Binomial Distribution
Recall that the binomial distribution describes the chance of $x$ successes out of $n$ trials, where the trials are independent and each has a probability $p$ of success. For instance, the number of sixes rolled in ten rolls of a die is distributed $Binomial(10, \frac{1}{6})$.
Given $n$ draws from a $Binomial(n, p)$ distribution, which resulted in $x$ successes, we wish to find the chance $p$ of success via maximum likelihood estimation.
Question 1: Likelihood of the Binomial
What is the likelihood function for the binomial, L(p)? Remember, this is equal to the probability of the data occurring given some chance of success $p$.
As an aid, we provide a factorial(x) function below.
End of explanation
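One way to sanity-check your answer (a sketch; use it to check your function, not to implement it) is scipy's binomial pmf, which evaluates L(p) numerically for given n, x, and p.
# Example check (sketch): your likelihood(10, 0.25, 3) should agree with this value.
from scipy import stats
print(stats.binom.pmf(3, 10, 0.25))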
def log_likelihood(n, p, x):
...
Explanation: Question 2: Log-likelihood of the Binomial
What is the log of the likelihood function for the binomial, $log(L(p)) = lik(p)$? Don't just use np.log(likelihood) - determine the value as a new function of n, x, and p.
End of explanation
def highest_likelihood(n, x):
...
Explanation: Question 3: Maximum Likelihood Estimate of the Binomial
Given $n$ samples from a binomial distribution $Bin(n, p)$, $x$ of which were successes, what is the value $p$ which maximizes the log-likelihood function?
Hint: Find $\frac{d}{dp}lik(p)$, set it equal to 0, and solve for p in terms of x and n.
End of explanation
n_widget = widgets.FloatSlider(min=1, max=20, step=1, value=20)
x_widget = widgets.FloatSlider(min=0, max=20, step=1, value=5)
# We want to make sure x <= n, otherwise we get into trouble
def update_x_range(*args):
x_widget.max = n_widget.value
n_widget.observe(update_x_range, 'value')
def plot_likelihood(n, x, plot_log=False):
# values of p are on the x-axis.
# We plot every value from 0.01 to 0.99
pvals = np.arange(1, 100)/100
# values of either Likelihood(p) or log(Likelihood(p))
# are on the y-axis, depending on the method
if plot_log:
yvals = ...
else:
yvals = ...
plt.plot(pvals, yvals)
# Put a line where L(p) is maximized and print the value p*
p_star = highest_likelihood(n, x)
plt.axvline(p_star, lw=1.5, color='r', ls='dashed')
plt.text(p_star + 0.01, min(yvals), 'p*=%.3f' % (p_star))
plt.xlabel('p')
if plot_log:
plt.ylabel('lik(p)')
plt.title("log(Likelihood(p)), if X ~ bin(n, p) = k")
else:
plt.ylabel('L(p)')
plt.title("Likelihood of p, if X ~ bin(n, p) = k")
plt.show()
interact(plot_likelihood, n=n_widget, x=x_widget, plot_log=False);
Explanation: Question 4: Interactive Plotting
Using the interact jupyter notebook extension, we can create interactive plots. In this case, we create an interactive plot of likelihood as a function of $p$ - interactive in the sense that we can plug in our own values of $n$ and $x$ and see how the plot changes. We can also choose our method of plotting - likelihood or log(likelihood).
We've provided code that creates sliders for n and x, and a checkbox to determine whether to plot the likelihood or the log-likelihood. Finish our code by defining the variable yvals, and then run it and play around a bit with the output.
End of explanation
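One hedged way to fill in yvals, assuming likelihood and log_likelihood broadcast over a numpy array of p values (which plain numpy arithmetic gives for free); the helper name below is made up and simply mirrors the two branches inside plot_likelihood:
```python
# Sketch only -- one possible way to compute the y values
def yvals_sketch(n, x, pvals, plot_log):
    return log_likelihood(n, pvals, x) if plot_log else likelihood(n, pvals, x)
```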
def btype_likelihood(pa, pb, po, O, A, B, AB):
...
Explanation: Part 2: Likelihood of the Blood Types
Here's a more complex example, involving several variables. Recall the blood types experiment from lecture. We assume a model where a person's blood type is determined by two genes, each of which is independently and identically distributed over three alleles. We call the alleles $a$, $b$, and $o$. For each person, the two specific allele variants are random and independent of one another.
We know that, if a person has alleles $a$ and $b$, they have blood type $AB$. If they have alleles $a$ and $a$, or $a$ and $o$, they have blood type $A$. Similarly, if they have alleles $b$ and $b$, or $b$ and $o$, they have blood type $B$. Finally, if they have alleles $o$ and $o$, they have blood type $O$.
We measure the blood types of a group of people, and get counts of each type $A$, $B$, $AB$, and $O$. Using these counts, we wish to determine the frequency of alleles $a$, $b$, and $o$. We know that, under the assumption of Hardy-Weinberg equilibrium:
The frequency of type $O$ is $p_o^2$.
The frequency of type $A$ is $p_a^2 + 2p_op_a$.
The frequency of type $B$ is $p_b^2 + 2p_op_b$.
And the frequency of type $AB$ is $2p_ap_b$.
Question 5: blood type likelihood formulas
What's the likelihood of allele probabilities $p_a$, $p_b$, $p_o$, given sample counts O, A, B, AB?
Hint: Think about how the binomial formula can be extended. Don't worry about the $n$ choose $k$ bit - we're only concerned with the specific values O, A, B, and AB that we observed, so that term will be the same regardless of $p_a, p_b, p_o$, and it can be ignored.
End of explanation
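A hedged sketch of the kind of expression being asked for (the multinomial coefficient is dropped, as the hint allows; btype_likelihood_sketch is a made-up name, not the official solution):
```python
def btype_likelihood_sketch(pa, pb, po, O, A, B, AB):
    # Each phenotype frequency raised to its observed count, multiplied together
    return ((po ** 2) ** O *
            (pa ** 2 + 2 * po * pa) ** A *
            (pb ** 2 + 2 * po * pb) ** B *
            (2 * pa * pb) ** AB)
```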
def btype_log_likelihood(pa, pb, po, O, A, B, AB):
...
Explanation: What's the log-likelihood? As before, don't just use np.log(btype_likelihood).
End of explanation
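And a corresponding hedged sketch of the log version (illustrative only), obtained by taking logs of each factor and weighting by the observed counts:
```python
def btype_log_likelihood_sketch(pa, pb, po, O, A, B, AB):
    return (O * np.log(po ** 2) +
            A * np.log(pa ** 2 + 2 * po * pa) +
            B * np.log(pb ** 2 + 2 * po * pb) +
            AB * np.log(2 * pa * pb))
```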
def plot_surface_3d(X, Y, Z, orient_x = 45, orient_y = 45):
highest_Z = max(Z.reshape(-1,1))
lowest_Z = min(Z.reshape(-1,1))
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z,
cmap=cm.coolwarm,
linewidth=0,
antialiased=False,
rstride=5, cstride=5)
ax.zaxis.set_major_locator(LinearLocator(5))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.view_init(orient_y, orient_x)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.title("log(Likelihood(p_a, p_b))")
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.show()
Explanation: Question 6: Interactive 3D Plots of Allele Distribution Likelihood
Fill in the function plot_btype_likelihood_3d, which plots the log-likelihood as $p_a$ and $p_b$ vary (since $p_o$ is a simple function of $p_a$ and $p_b$, this covers all possible triplets of values). You'll need to define four methods of interact input - we recommend sticking with FloatSlider. Allow for samples of up to 1000 people, with anywhere from 0 to 100% of the population having each phenotype $A$, $B$, $AB$, $O$.
First, run this cell to define a function for plotting 3D graphs:
End of explanation
O = ...
A = ...
B = ...
AB = ...
def plot_btype_likelihood_3d(O, A, B, AB):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all pairs
po = ...
likelihoods = ...
plot_surface_3d(pa, pb, likelihoods)
interact(plot_btype_likelihood_3d, O=O, A=A, B=B, AB=AB);
Explanation: Now, complete the plot_btype_likelihood_3d function.
End of explanation
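The completed heatmap cells further down use exactly this pattern, so a hedged sketch of the missing pieces might look like the following (the slider ranges and default counts here are illustrative assumptions):
```python
# Sketch only -- possible widget definitions
O = widgets.FloatSlider(min=1, max=1000, step=1, value=120)
A = widgets.FloatSlider(min=1, max=1000, step=1, value=100)
B = widgets.FloatSlider(min=1, max=1000, step=1, value=30)
AB = widgets.FloatSlider(min=1, max=1000, step=1, value=5)
# ...and inside plot_btype_likelihood_3d:
#     po = 1 - pa - pb
#     likelihoods = btype_log_likelihood(pa, pb, po, O, A, B, AB)
```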
O2 = ...
A2 = ...
B2 = ...
AB2 = ...
X = ...
Y = ...
def plot_btype_likelihood_3d_oriented(O, A, B, AB, X, Y):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all pairs
po = ...
likelihoods = ...
plot_surface_3d(pa, pb, likelihoods, orient_x=X, orient_y=Y)
interact(plot_btype_likelihood_3d_oriented, O=O2, A=A2, B=B2, AB=AB2, X=X, Y=Y);
Explanation: Question 7: Rotating 3D Plots of Allele Distribution Likelihood
We can also rotate this 3D graphic by passing values orient_x and orient_y to the plot_surface_3d function. Add two new sliders, and fill in the plot_btype_likelihood_3d_oriented function. You may want to set the step size on the new sliders to a value greater than one, and make sure the max value is large enough such that they can rotate all the way around. You should be able to copy-paste a good deal of code from above.
End of explanation
O3 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A3 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B3 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB3 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def plot_btype_log_likelihood_heatmap(O, A, B, AB):
pa = np.arange(1, 50)/100
pb = np.arange(1, 50)/100
pa, pb = np.meshgrid(pa, pb) # get all possible pairs
po = 1 - pa - pb
likelihoods = btype_log_likelihood(pa, pb, po, O, A, B, AB)
plt.pcolor(pa, pb, likelihoods, cmap=cm.coolwarm)
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.title("log(Likelihood(p_a, p_b))")
plt.show()
interact(plot_btype_log_likelihood_heatmap, O=O3, A=A3, B=B3, AB=AB3);
Explanation: We also can make some 2d color plots, to get a better view of exactly where our values are maximized. As in the 3d plots, redder colors refer to higher likelihoods.
End of explanation
O4 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A4 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B4 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB4 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def plot_btype_likelihood_heatmap(O, A, B, AB):
pa = np.arange(1, 100)/100
pb = np.arange(1, 100)/100
pa, pb = np.meshgrid(pa, pb) # get all possible pairs
po = 1 - pa - pb
likelihoods = btype_likelihood(pa, pb, po, O, A, B, AB)
likelihoods[(pa + pb) > 1] = 0 # Don't plot impossible probability pairs
plt.pcolor(pa, pb, likelihoods, cmap=cm.coolwarm)
plt.xlabel("p_a")
plt.ylabel("p_b")
plt.title("Likelihood(p_a, p_b)")
plt.show()
interact(plot_btype_likelihood_heatmap, O=O4, A=A4, B=B4, AB=AB4);
Explanation: As with the binomial, the likelihood has a "sharper" distribution than the log-likelihood. So, plotting the likelihood, we can see our maximal point with greater clarity.
End of explanation
O5 = widgets.FloatSlider(min=1, max=200, step=1, value=120)
A5 = widgets.FloatSlider(min=1, max=200, step=1, value=100)
B5 = widgets.FloatSlider(min=1, max=200, step=1, value=30)
AB5 = widgets.FloatSlider(min=1, max=200, step=1, value=5)
def maximize_btype_likelihood(O, A, B, AB):
def flipped_btype_fixed_params(params):
# "params" is a list containing p_a, p_b, p_o
pa, pb, po = params
# We wish to return a value which is minimized when the log-likelihood is maximized...
# What function would accomplish this?
...
# We need to provide an initial guess at the solution
initial_guess = [1/3, 1/3, 1/3]
# Each variable is bounded between zero and one
# sci.optimize.minimize seems to dislike exact zero bounds, though, so we use 10^-6
bnds = ((1e-6, 1), (1e-6, 1), (1e-6, 1))
# An additional constraint on our parameters - they must sum to one
# The minimizer will only check params where constraint_fn(params) = 0
def constraint_fn(params):
# "params" is a list containing p_a, p_b, p_o
return sum(params) - 1
constraint = ({'type': 'eq', 'fun': constraint_fn},)
pa, pb, po = sci.optimize.minimize(flipped_btype_fixed_params,
x0=initial_guess,
bounds=bnds,
constraints=constraint).x
return "pa* = %.3f, pb* = %.2f, po* = %.3f" % (pa, pb, po)
interact(maximize_btype_likelihood, O=O5, A=A5, B=B5, AB=AB5);
Explanation: Question 8: Getting the MLE for the blood-type question
Finally, we want to get our actual estimates for $p_a, p_b, p_o$! However, unlike in the Binomial example, we don't want to calculate our MLE by hand. So instead, we use function-minimizers to calculate the highest likelihood.
scipy's optimize.minimize function allows us to find the tuple of arguments that minimizes a function of $n$ variables, subject to desired constraints. Given any set of observed phenotype counts $O, A, B, AB$, we can thus find the specific values $p_a, p_b, p_o$ that maximize the log-likelihood function. Finish the nested function flipped_btype_fixed_params in order to do just that.
End of explanation
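Since the minimizer looks for a minimum, a quantity that is minimized exactly when the log-likelihood is maximized is its negation. A hedged sketch of what the nested function could compute (illustrative, not the official solution; the standalone name and explicit count arguments are assumptions):
```python
# Sketch only -- one possible body for the nested helper
def flipped_btype_fixed_params_sketch(params, O, A, B, AB):
    pa, pb, po = params
    return -btype_log_likelihood(pa, pb, po, O, A, B, AB)
```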
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
Explanation: Submitting your assignment
If you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab.
End of explanation |
2,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigQuery Query Run
Run query on a project.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter BigQuery Query Run Recipe Parameters
Specify a single query and choose legacy or standard mode.
For PLX use
Step3: 4. Execute BigQuery Query Run
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: BigQuery Query Run
Run query on a project.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes; this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'query':'', # SQL with newlines and all.
'legacy':True, # Query type must match table and query format.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter BigQuery Query Run Recipe Parameters
Specify a single query and choose legacy or standard mode.
For PLX use: SELECT * FROM [plx.google:FULL_TABLE_NAME.all] WHERE...
For non-legacy (standard SQL) use: SELECT * FROM project.dataset.table WHERE...
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
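For example, the FIELDS dictionary above might be filled in like this (the project, dataset, table and query shown here are purely illustrative):
```python
FIELDS = {
  'auth_write': 'service',  # Credentials used for writing data.
  'query': 'SELECT name, value FROM `my-project.my_dataset.my_table` WHERE value > 0',
  'legacy': False,          # Standard SQL, so legacy mode is off.
}
```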
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'run':{
'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':2,'default':True,'description':'Query type must match table and query format.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute BigQuery Query Run
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
2,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Form Parsing using Google Cloud Document AI
This notebook shows how to use Google Cloud Document AI to parse a campaign disclosure form.
It accompanies this Medium article
Step2: Document
As an example, let's take this US election campaign disclosure form.
Step3: Note
Step4: Enable Document AI
First enable Document AI in your project by visiting
https
Step5: Create a service account authorization by visiting
https
Step6: Note
Step7: Option 1
Step8: We know that "Cash on Hand" is on Page 2.
Step9: Cool, we are at the right part of the document! Let's get the next block, which should be the actual amount.
Step10: Option 2 | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/imported/formparsing.ipynb
from IPython.display import Markdown as md
### change to reflect your notebook
_nb_repo = 'training-data-analyst'
_nb_loc = "blogs/form_parser/formparsing.ipynb"
_nb_title = "Form Parsing Using Google Cloud Document AI"
### no need to change any of this
_nb_safeloc = _nb_loc.replace('/', '%2F')
_nb_safetitle = _nb_title.replace(' ', '+')
md(
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name={1}&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2F{3}%2Fblob%2Fmaster%2F{2}&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2F{3}%2Fraw%2Fmaster%2F{2}">
<img src="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png"/> Run in AI Platform Notebook</a>
</td>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/{3}/blob/master/{0}">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/GoogleCloudPlatform/{3}/blob/master/{0}">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://raw.githubusercontent.com/GoogleCloudPlatform/{3}/master/{0}">
<img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
.format(_nb_loc, _nb_safetitle, _nb_safeloc, _nb_repo))
Explanation: Form Parsing using Google Cloud Document AI
This notebook shows how to use Google Cloud Document AI to parse a campaign disclosure form.
It accompanies this Medium article:
https://medium.com/@lakshmanok/how-to-parse-forms-using-google-cloud-document-ai-68ad47e1c0ed
End of explanation
%%bash
if [ ! -f scott_walker.pdf ]; then
curl -O https://storage.googleapis.com/practical-ml-vision-book/images/scott_walker.pdf
fi
!ls *.pdf
from IPython.display import IFrame
IFrame("./scott_walker.pdf", width=600, height=300)
Explanation: Document
As an example, let's take this US election campaign disclosure form.
End of explanation
BUCKET="ai-analytics-solutions-kfpdemo" # CHANGE to a bucket that you own
!gsutil cp scott_walker.pdf gs://{BUCKET}/formparsing/scott_walker.pdf
!gsutil ls gs://{BUCKET}/formparsing/scott_walker.pdf
Explanation: Note: If the file is not visible, simply open the PDF file by double-clicking on it in the left hand menu.
Upload to Cloud Storage
Document AI works with documents on Cloud Storage, so let's upload the doc.
End of explanation
!gcloud auth list
Explanation: Enable Document AI
First enable Document AI in your project by visiting
https://console.developers.google.com/apis/api/documentai.googleapis.com/overview
Find out who you are running as:
End of explanation
%%bash
PDF="gs://ai-analytics-solutions-kfpdemo/formparsing/scott_walker.pdf" # CHANGE to your PDF file
REGION="us" # change to EU if the bucket is in the EU
cat <<EOM > request.json
{
"inputConfig":{
"gcsSource":{
"uri":"${PDF}"
},
"mimeType":"application/pdf"
},
"documentType":"general",
"formExtractionParams":{
"enabled":true
}
}
EOM
# Send request to Document AI.
PROJECT=$(gcloud config get-value project)
echo "Sending the following request to Document AI in ${PROJECT} ($REGION region), saving to response.json"
cat request.json
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://${REGION}-documentai.googleapis.com/v1beta2/projects/${PROJECT}/locations/us/documents:process \
> response.json
!tail response.json
Explanation: Create a service account authorization by visiting
https://console.cloud.google.com/iam-admin/serviceaccounts/create
Give this service account Document AI Core Service Account authorization
Give the above ACTIVE ACCOUNT the ability to use the service account you just created.
Call Document AI
End of explanation
import json
ifp = open('response.json')
response = json.load(ifp)
allText = response['text']
print(allText[:100])
Explanation: Note: If you get a 403 PERMISSION DENIED error, please re-run all the cells from the top.
Parse the response
Let's use Python to parse the response and pull out specific fields.
End of explanation
print(allText.index("CASH ON HAND"))
Explanation: Option 1: Parsing blocks of text
As an example, let's try to get the "Cash on Hand". This is on Page 2 and the answer is $75,931.36.
All the data in the document is in the allText field; we just need to find the right starting and ending index for the part we want to extract.
End of explanation
response['pages'][1]['blocks'][5]
response['pages'][1]['blocks'][5]['layout']['textAnchor']['textSegments'][0]
startIndex = int(response['pages'][1]['blocks'][5]['layout']['textAnchor']['textSegments'][0]['startIndex'])
endIndex = int(response['pages'][1]['blocks'][5]['layout']['textAnchor']['textSegments'][0]['endIndex'])
allText[startIndex:endIndex]
Explanation: We know that "Cash on Hand" is on Page 2.
End of explanation
def extractText(allText, elem):
startIndex = int(elem['textAnchor']['textSegments'][0]['startIndex'])
endIndex = int(elem['textAnchor']['textSegments'][0]['endIndex'])
return allText[startIndex:endIndex].strip()
amount = float(extractText(allText, response['pages'][1]['blocks'][6]['layout']))
print(amount)
Explanation: Cool, we are at the right part of the document! Let's get the next block, which should be the actual amount.
End of explanation
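If you wanted to make this label-then-value pattern reusable, a hedged sketch (the helper below is an assumption for illustration, not part of the original notebook) could scan a page's blocks for a label and return the text of the block that follows it:
```python
def value_after_label(page, label):
    # Find the block whose text contains the label, then return the next block's text
    blocks = page['blocks']
    for i, block in enumerate(blocks[:-1]):
        if label in extractText(allText, block['layout']):
            return extractText(allText, blocks[i + 1]['layout'])
    return None
# e.g. value_after_label(response['pages'][1], 'CASH ON HAND')
```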
response['pages'][1].keys()
response['pages'][1]['formFields'][2]
fieldName = extractText(allText, response['pages'][1]['formFields'][2]['fieldName'])
fieldValue = extractText(allText, response['pages'][1]['formFields'][2]['fieldValue'])
print('key={}\nvalue={}'.format(fieldName, fieldValue))
Explanation: Option 2: Parsing form fields
What we did with blocks of text was quite low-level. Document AI understands that forms tend to have key-value pairs, and part of the JSON response includes these extracted key-value pairs as well.
Besides FormField, Document AI also supports extracting Paragraph and Table elements from the document.
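For instance, a hedged sketch (not part of the original notebook) that collects every detected key-value pair on a page into a dictionary could look like:
```python
def page_form_fields(page):
    # Map each detected fieldName to its fieldValue on one page
    return {extractText(allText, f['fieldName']): extractText(allText, f['fieldValue'])
            for f in page['formFields']}
# e.g. page_form_fields(response['pages'][1])
```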
End of explanation |
2,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this
Step2: 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
Warm-up exercise
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step13: Expected Output
Step15: We have already implemented a 3-layer neural network. You will train it with
Step16: You will now run this 3 layer neural network with each of the 3 optimization methods.
5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
Step17: 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
Step18: 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this:
<img src="images/cost.jpg" style="width:650px;height:300px;">
<caption><center> <u> Figure 1 </u>: Minimizing the cost is like finding the lowest point in a hilly landscape<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>
Notations: As usual, $\frac{\partial J}{\partial a } = $ da for any variable a.
To get started, run the following code to import the libraries you will need.
End of explanation
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
Warm-up exercise: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$
where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.
End of explanation
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:,k * mini_batch_size:(k + 1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:,k * mini_batch_size:(k + 1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
end = m - mini_batch_size * math.floor(m / mini_batch_size)
mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:]
mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
Explanation: Expected Output:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74604067]
[-0.75184921]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88020257]
[ 0.02561572]
[ 0.57539477]] </td>
</tr>
</table>
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
(Batch) Gradient Descent:
``` python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
# Forward propagation
a, caches = forward_propagation(X, parameters)
# Compute cost.
cost = compute_cost(a, Y)
# Backward propagation.
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
Stochastic Gradient Descent:
```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    for j in range(0, m):
        # Forward propagation
        a, caches = forward_propagation(X[:,j], parameters)
        # Compute cost
        cost = compute_cost(a, Y[:,j])
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)
```
In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:
<img src="images/kiank_sgd.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : SGD vs GD<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>
Note also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)
In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.
<img src="images/kiank_minibatch.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 2 </u>: <font color='purple'> SGD vs Mini-Batch GD<br> "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>
<font color='blue'>
What you should remember:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).
2 - Mini-Batch Gradient descent
Let's learn how to build mini-batches from the training set (X, Y).
There are two steps:
- Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
<img src="images/kiank_shuffle.png" style="width:550px;height:300px;">
Partition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full mini_batch_size, it will look like this:
<img src="images/kiank_partition.png" style="width:550px;height:300px;">
Exercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:
```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
```
Note that the last mini-batch might end up smaller than mini_batch_size=64. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is math.floor(s) in Python). If the total number of examples is not a multiple of mini_batch_size=64 then there will be $\lfloor \frac{m}{mini_batch_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m - mini_batch_size \times \lfloor \frac{m}{mini_batch_size}\rfloor$).
End of explanation
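As a concrete check of that last-batch arithmetic, using the sizes implied by the expected mini-batch shapes (12288, 64), (12288, 64) and (12288, 20) in the test case, i.e. $m = 148$:
```python
# Quick arithmetic check: 64 + 64 + 20 = 148 examples in total
m, mini_batch_size = 148, 64
num_full = m // mini_batch_size                 # 2 full mini-batches of 64
last_size = m - mini_batch_size * num_full      # 20 examples in the final mini-batch
print(num_full, last_size)
```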
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l + 1)] = np.zeros_like(parameters["W" + str(l+1)])
v["db" + str(l + 1)] = np.zeros_like(parameters["b" + str(l+1)])
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td > **shape of the 1st mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_X** </td>
<td > (12288, 20) </td>
</tr>
<tr>
<td > **shape of the 1st mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_Y** </td>
<td > (1, 20) </td>
</tr>
<tr>
<td > **mini batch sanity check** </td>
<td > [ 0.90085595 -0.7612069 0.2344157 ] </td>
</tr>
</table>
<font color='blue'>
What you should remember:
- Shuffling and Partitioning are the two steps required to build mini-batches
- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.
3 - Momentum
Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
<img src="images/opt_momentum.png" style="width:400px;height:250px;">
<caption><center> <u><font color='purple'>Figure 3</u><font color='purple'>: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>
Exercise: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the grads dictionary, that is:
for $l =1,...,L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
Note that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the for loop.
End of explanation
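To see why $v$ behaves like a velocity, here is a tiny standalone illustration (not part of the assignment) of an exponentially weighted average of a constant gradient of 1.0 with $\beta = 0.9$, where the velocity builds up toward the gradient value over the first few steps:
```python
beta, v_toy = 0.9, 0.0
for step in range(5):
    v_toy = beta * v_toy + (1 - beta) * 1.0
    print(step + 1, round(v_toy, 3))   # 0.1, 0.19, 0.271, 0.344, 0.41
```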
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l + 1)] = beta * v["dW" + str(l + 1)] + (1 - beta) * grads['dW' + str(l + 1)]
v["db" + str(l + 1)] = beta * v["db" + str(l + 1)] + (1 - beta) * grads['db' + str(l + 1)]
# update parameters
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v["db" + str(l + 1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
Exercise: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$:
$$ \begin{cases}
v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\
W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}
\end{cases}\tag{3}$$
$$\begin{cases}
v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\
b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}}
\end{cases}\tag{4}$$
where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift l to l+1 when coding.
End of explanation
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l + 1)] = np.zeros_like(parameters["W" + str(l + 1)])
v["db" + str(l + 1)] = np.zeros_like(parameters["b" + str(l + 1)])
s["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l + 1)])
s["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l + 1)])
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td > **W1** </td>
<td > [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74493465]
[-0.76027113]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.87809283]
[ 0.04055394]
[ 0.58207317]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]</td>
</tr>
</table>
Note that:
- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.
How do you choose $\beta$?
The larger the momentum $\beta$ is, the smoother the update, because it takes more of the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much.
Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$.
<font color='blue'>
What you should remember:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.
4 - Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
How does Adam work?
1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".
The update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$
where:
- t counts the number of steps taken of Adam
- L is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero
As usual, we will store all parameters in the parameters dictionary
Exercise: Initialize the Adam variables $v, s$ which keep track of the past information.
Instruction: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for grads, that is:
for $l = 1, ..., L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
End of explanation
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.01,
beta1=0.9, beta2=0.999, epsilon=1e-8):
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l + 1)] = beta1 * v["dW" + str(l + 1)] + (1 - beta1) * grads['dW' + str(l + 1)]
v["db" + str(l + 1)] = beta1 * v["db" + str(l + 1)] + (1 - beta1) * grads['db' + str(l + 1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l + 1)] = v["dW" + str(l + 1)] / (1 - np.power(beta1, t))
v_corrected["db" + str(l + 1)] = v["db" + str(l + 1)] / (1 - np.power(beta1, t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l + 1)] = beta2 * s["dW" + str(l + 1)] + (1 - beta2) * np.power(grads['dW' + str(l + 1)], 2)
s["db" + str(l + 1)] = beta2 * s["db" + str(l + 1)] + (1 - beta2) * np.power(grads['db' + str(l + 1)], 2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l + 1)] = s["dW" + str(l + 1)] / (1 - np.power(beta2, t))
s_corrected["db" + str(l + 1)] = s["db" + str(l + 1)] / (1 - np.power(beta2, t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v_corrected["dW" + str(l + 1)] / np.sqrt(s["dW" + str(l + 1)] + epsilon)
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v_corrected["db" + str(l + 1)] / np.sqrt(s["db" + str(l + 1)] + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
Exercise: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\
v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\
s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\
s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}
\end{cases}$$
Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.
End of explanation
train_X, train_Y = load_dataset()
Explanation: Expected Output:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.75225313]
[-0.75376553]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09245036 -0.16498684]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88529978]
[ 0.03477238]
[ 0.57537385]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 1.51020075e-05]
[ 8.75664434e-04]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]] </td>
</tr>
</table>
You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.
5 - Model with different optimization algorithms
Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
End of explanation
def model(X, Y, layers_dims, optimizer, learning_rate=0.0007, mini_batch_size=64, beta=0.9,
beta1=0.9, beta2=0.999, epsilon=1e-8, num_epochs=10000, print_cost=True):
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print("Cost after epoch %i: %f" % (i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
Explanation: We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch Gradient Descent: it will call your function:
- update_parameters_with_gd()
- Mini-batch Momentum: it will call your functions:
- initialize_velocity() and update_parameters_with_momentum()
- Mini-batch Adam: it will call your functions:
- initialize_adam() and update_parameters_with_adam()
End of explanation
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer="gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: You will now run this 3 layer neural network with each of the 3 optimization methods.
5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
End of explanation
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta=0.9, optimizer="momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
End of explanation
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer="adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam.
End of explanation |
2,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Characterwise Double-Stacked LSTM as Author
Step1: Defining the Model
Actually, it's a single layer of GRU for now... (rather than a double-stacked LSTM)
Step2: That's the underlying network defined - now need to create the infrastructure to iteratively improve it
Step3: The Model can now be shown as a Compute Graph
(But this is time consuming, and the image will be huge...)
Step4: Define the Training Loop
Step5: Run (or continue) the Training
Step6: This is to sample the learned relationships | Python Code:
import numpy
import theano
from theano import tensor
from blocks.bricks import Tanh
from blocks.bricks.recurrent import GatedRecurrent
from blocks.bricks.sequence_generators import (SequenceGenerator, Readout, SoftmaxEmitter, LookupFeedback)
from blocks.graph import ComputationGraph
import blocks.algorithms
from blocks.algorithms import GradientDescent
from blocks.initialization import Orthogonal, IsotropicGaussian, Constant
from blocks.model import Model
from blocks.monitoring import aggregation
from blocks.extensions import FinishAfter, Printing
from blocks.extensions.saveload import Checkpoint
from blocks.extensions.monitoring import TrainingDataMonitoring
from blocks.main_loop import MainLoop
import blocks.serialization
from blocks.select import Selector
import logging
import pprint
logger = logging.getLogger(__name__)
theano.config.floatX='float32'
print(theano.config.device)
# Dictionaries
import string
all_chars = [ a for a in string.printable]+['<UNK>']
code2char = dict(enumerate(all_chars))
char2code = {v: k for k, v in code2char.items()}
if False:
data_file = 'Shakespeare.poetry.txt'
dim = 32
hidden_state_dim = 32
feedback_dim = 32
else:
data_file = 'Shakespeare.plays.txt'
dim = 64
hidden_state_dim = 64
feedback_dim = 64
seq_len = 256 # The input file is learned in chunks of text this large
# Network parameters
num_states=len(char2code) # This is the size of the one-hot input and SoftMax output layers
batch_size = 100 # This is for mini-batches : Helps optimize GPU workload
num_epochs = 100 # Number of reads-through of corpus to do a training
data_path = '../data/' + data_file
save_path = '../models/' + data_file + '.model'
#from fuel.datasets import Dataset
from fuel.streams import DataStream
from fuel.schemes import ConstantScheme
from fuel.datasets import Dataset
#from fuel.datasets import TextFile
#dataset = TextFile([data_file], bos_token=None, eos_token=None, level="character", dictionary=char2code)
#data_stream = DataStream(dataset, iteration_scheme=ConstantScheme(batch_size))
class CharacterTextFile(Dataset):
provides_sources = ("data",)
def __init__(self, fname, chunk_len, dictionary, **kwargs):
self.fname = fname
self.chunk_len = chunk_len
self.dictionary = dictionary
super(CharacterTextFile, self).__init__(**kwargs)
def open(self):
return open(self.fname,'r')
def get_data(self, state, request):
assert isinstance(request, int)
x = numpy.zeros((self.chunk_len, request), dtype='int64')
for i in range(request):
txt=state.read(self.chunk_len)
if len(txt)<self.chunk_len: raise StopIteration
#print(">%s<\n" % (txt,))
x[:, i] = [ self.dictionary[c] for c in txt ]
return (x,)
def close(self, state):
state.close()
dataset = CharacterTextFile(data_path, chunk_len=seq_len, dictionary=char2code)
data_stream = DataStream(dataset, iteration_scheme=ConstantScheme(batch_size))
a=data_stream.get_data(10)
#[ code2char[v] for v in [94, 27, 21, 94, 16, 14, 54, 23, 14, 12] ] # Horizontally
#[ code2char[v] for v in [94, 94,95,36,94,47,50,57,40,53,68,54,94,38] ] # Vertically
''.join([ code2char[v] for v in a[0][:,0] ])
Explanation: Characterwise Double-Stacked LSTM as Author
End of explanation
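# A quick round trip through the char2code / code2char dictionaries built above
# (illustrative helpers only; characters outside string.printable fall back to '<UNK>'):
def encode_text(txt):
    return [char2code.get(c, char2code['<UNK>']) for c in txt]
def decode_codes(codes):
    return ''.join(code2char[c] for c in codes)
print(decode_codes(encode_text("To be, or not to be")))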
transition = GatedRecurrent(name="transition", dim=hidden_state_dim, activation=Tanh())
generator = SequenceGenerator(
Readout(readout_dim=num_states, source_names=["states"],
emitter=SoftmaxEmitter(name="emitter"),
feedback_brick=LookupFeedback(
num_states, feedback_dim, name='feedback'),
name="readout"),
transition,
weights_init=IsotropicGaussian(0.01), biases_init=Constant(0),
name="generator"
)
generator.push_initialization_config()
transition.weights_init = Orthogonal()
generator.initialize()
#dir(generator.readout.emitter)
#print(generator.readout.emitter.get_unique_path())
#print(generator.readout.emitter.name)
print(generator.readout.emitter.readout_dim)
Explanation: Defining the Model
Actually, it's a single layer of GRU for now... (rather than a double-stacked LSTM)
End of explanation
# Give an idea of what's going on.
logger.info("Parameters:\n" + pprint.pformat(
[(key, value.get_value().shape) for key, value in Selector(generator).get_params().items()],
width=120))
#logger.info("Markov chain entropy: {}".format(MarkovChainDataset.entropy))
#logger.info("Expected min error: {}".format( -MarkovChainDataset.entropy * seq_len))
# Build the cost computation graph.
x = tensor.lmatrix('data')
cost = aggregation.mean(generator.cost_matrix(x[:, :]).sum(), x.shape[1])
cost.name = "sequence_log_likelihood"
model=Model(cost)
algorithm = GradientDescent(
cost=cost, params=list(Selector(generator).get_params().values()),
step_rule=blocks.algorithms.CompositeRule([blocks.algorithms.StepClipping(10.0), blocks.algorithms.Scale(0.01)]) )
# tried: blocks.algorithms.Scale(0.001), blocks.algorithms.RMSProp(), blocks.algorithms.AdaGrad()
Explanation: That's the underlying network defined - now need to create the infrastructure to iteratively improve it :
End of explanation
# from IPython.display import SVG
# SVG(theano.printing.pydotprint(cost, return_image=True, format='svg'))
#from IPython.display import Image
#Image(theano.printing.pydotprint(cost, return_image=True, format='png'))
Explanation: The Model can now be shown as a Compute Graph
(But this is time consuming, and the image will be huge...)
End of explanation
main_loop = MainLoop(
algorithm=algorithm,
data_stream=data_stream,
model=model,
extensions=[
FinishAfter(after_n_epochs=num_epochs),
TrainingDataMonitoring([cost], prefix="this_step", after_batch=True),
TrainingDataMonitoring([cost], prefix="average", every_n_batches=100),
Checkpoint(save_path, every_n_batches=1000),
Printing(every_n_batches=500)
]
)
Explanation: Define the Training Loop
End of explanation
main_loop.run()
## continuing models : (new method is not cPickle) :
# https://groups.google.com/forum/#!topic/blocks-users/jns-KKWTtko
# http://blocks.readthedocs.org/en/latest/serialization.html?highlight=load
## To inspect contents of saved/Checkpoint-ed file :
# unzip -t models/Shakespeare.poetry.txt.model
#from six.moves import cPickle
#main_loop = cPickle.load(open(save_path, "rb"))
#blocks.serialization.load(save_path)
#def author(input):
# pass
#model=Model(cost)
# Read back in from disk
if False:
model.set_param_values(blocks.serialization.load_parameter_values(save_path))
# Includes generator(?)
#generator = main_loop.model
Explanation: Run (or continue) the Training
End of explanation
output_length = 1000 # in characters
sampler = ComputationGraph(
generator.generate(n_steps=output_length, batch_size=1, iterate=True)
)
#print("Sampler variables : ", sampler.variables)
sample = sampler.get_theano_function()
states, outputs, costs = [data[:, 0] for data in sample()]
numpy.set_printoptions(precision=3, suppress=True)
print("Generation cost:\n{}".format(costs.sum()))
#freqs = numpy.bincount(outputs).astype(floatX)
#freqs /= freqs.sum()
#print("Frequencies:\n {} vs {}".format(freqs, MarkovChainDataset.equilibrium))
#trans_freqs = numpy.zeros((num_states, num_states), dtype=floatX)
#for a, b in zip(outputs, outputs[1:]):
# trans_freqs[a, b] += 1
#trans_freqs /= trans_freqs.sum(axis=1)[:, None]
#print("Transition frequencies:\n{}\nvs\n{}".format(
# trans_freqs, MarkovChainDataset.trans_prob))
#print(numpy.shape(states))
#print(numpy.shape(outputs))
#print(outputs[:])
print(''.join([ code2char[c] for c in outputs]))
#from blocks.serialization import continue_training
#blocks.serialization.continue_training(save_path)
Explanation: This is to sample the learned relationships
End of explanation |
2,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Decision Tree of Observable Operators
Part 2
Step1: ... and emitting all of the items from all of the Observables, one Observable at a time
Step2: ... by combining the items from two or more Observables sequentially to come up with new items to emit
... whenever each of the Observables has emitted a new item zip / zip_list
Step3: ... whenever any of the Observables has emitted a new item combine_latest
Step4: ... whenever the first of the Observables has emitted a new item with_latest_from
Step5: ... whenever an item is emitted by one Observable in a window defined by an item emitted by another join
The join operator takes four parameters
Step6: ... or, alternatively, group_join
The groupJoin operator takes four parameters
Step7: ... by means of Pattern and Plan intermediaries And/Then/When, and / then / when
The combination of the And, Then, and When operators behave much like the Zip operator, but they do so by means of intermediate data structures. And accepts two or more Observables and combines the emissions from each, one set at a time, into Pattern objects. Then operates on such Pattern objects, transforming them in a Plan. When in turn transforms these various Plan objects into emissions from an Observable.
details
The And/Then/When trio has more overloads that enable you to group an even greater number of sequences. They also allow you to provide more than one 'plan' (the output of the Then method). This gives you the Merge feature but on the collection of 'plans'. I would suggest playing around with them if this functionality is of interest to you. The verbosity of enumerating all of the combinations of these methods would be of low value. You will get far more value out of using them and discovering for yourself.
As we delve deeper into the depths of what the Rx libraries provide us, we can see more practical usages for it. Composing sequences with Rx allows us to easily make sense of the multiple data sources a problem domain is exposed to. We can concatenate values or sequences together sequentially with StartWith, Concat and Repeat. We can process multiple sequences concurrently with Merge, or process a single sequence at a time with Amb and Switch. Pairing values with CombineLatest, Zip and the And/Then/When operators can simplify otherwise fiddly operations like our drag-and-drop examples and monitoring system status.
Step8: ... and emitting the items from only the most-recently emitted of those Observables switch_latest | Python Code:
reset_start_time(O.merge)
l = []
def excepting_f(obs):
for i in range(10):
l.append(1)
obs.on_next(1 / (3 - len(l)))
stream1 = O.from_(('a', 'b', 'c'))
stream2 = O.create(excepting_f)
# merged stream stops in any case at first exception!
# No guarantee of order of those immediately created streams !
d = subs(stream1.merge(stream2))
l = []
d = subs(O.merge(new_thread_scheduler, [stream1, stream2]))
rst(O.merge_all, title='merge_all')
meta = O.repeat(O.from_((1, 2, 3)), 3)
# no guarantee of order, immediately created:
d = subs(meta.merge_all())
# Introducing delta ts:
d = subs(O.repeat(O.timer(10, 10)\
.take(3), 3)\
.merge_all(),
name='streams with time delays between events')
Explanation: A Decision Tree of Observable Operators
Part 2: Combining Observables
source: http://reactivex.io/documentation/operators.html#tree.
(transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, axiros)
This tree can help you find the ReactiveX Observable operator you're looking for.
See Part 1 for Usage and Output Instructions.
We also require acquaintance with the marble diagrams feature of RxPy.
This is a helpful accompanying read.
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
I want to create an Observable by combining other Observables
... and emitting all of the items from all of the Observables in whatever order they are received: merge / merge_all
End of explanation
rst(O.concat)
s1 = O.from_((1, 2))
s2 = O.from_((3, 4))
# while normal subscriptions work as expected...
d1, d2 = subs(s1), subs(s2)
# ... another one can have the order reversed
d = subs(O.concat([s2, s1]))
rst()
# See the marbles notebook:
s1 = O.from_marbles('1--2---3|').to_blocking()
s2 = O.from_marbles('--a-b-c|' ).to_blocking()
d = (subs(s1, name='A'),
subs(s2, name='B'))
rst(title="Concatenating in reverse order", sleep=1)
d = subs(O.concat([s2, s1]), name='C')
Explanation: ... and emitting all of the items from all of the Observables, one Observable at a time: concat
End of explanation
rst(O.zip)
s1 = O.range(0, 5)
d = subs(O.zip(s1, s1.skip(1), s1.skip(2), lambda s1, s2, s3: '%s : %s : %s' % (s1, s2, s3)))
rst(O.zip_list) # alias: zip_array
s1 = O.range(0, 5)
d = subs(O.zip_list(s1, s1.skip(1), s1.skip(2)))
Explanation: ... by combining the items from two or more Observables sequentially to come up with new items to emit
... whenever each of the Observables has emitted a new item zip / zip_list
End of explanation
rst(O.combine_latest, title='combine_latest')
s1 = O.interval(100).map(lambda i: 'First : %s' % i)
s2 = O.interval(150).map(lambda i: 'Second: %s' % i)
# the start is interesting, both must have emitted, so it starts at 150ms with 0/0:
d = subs(s1.combine_latest(s2, lambda s1, s2: '%s, %s' % (s1, s2)).take(6))
rst(title='For comparison: merge', sleep=1)
d = subs(s1.merge(s2).take(6))
Explanation: ... whenever any of the Observables has emitted a new item combine_latest
End of explanation
rst(O.with_latest_from, title='with_latest_from')
s1 = O.interval(140).map(lambda i: 'First : %s' % i)
s2 = O.interval(50) .map(lambda i: 'Second: %s' % i)
d = subs(s1.with_latest_from(s2, lambda s1, s2: '%s, %s' % (s1, s2)).take(6))
Explanation: ... whenever the first of the Observables has emitted a new item with_latest_from
End of explanation
rst(O.join)
# this one is pretty timing critical and output seems swallowed with 2 threads (over)writing.
# better try this with timer(0) on the console. Also the scheduler of the timers is critical,
# try other O.timer schedulers...
xs = O.interval(100).map(lambda i: 'First : %s' % i)
ys = O.interval(101).map(lambda i: 'Second: %s' % i)
d = subs(xs.join(ys, lambda _: O.timer(10), lambda _: O.timer(0), lambda x, y: '%s %s' % (x, y)).take(5))
Explanation: ... whenever an item is emitted by one Observable in a window defined by an item emitted by another join
The join operator takes four parameters:
the second Observable to combine with the source Observable
a function that accepts an item from the source Observable and returns an Observable whose lifespan governs the duration during which that item will combine with items from the second Observable
a function that accepts an item from the second Observable and returns an Observable whose lifespan governs the duration during which that item will combine with items from the first Observable
a function that accepts an item from the first Observable and an item from the second Observable and returns an item to be emitted by the Observable returned from join
End of explanation
rst(O.group_join, title='group_join')
xs = O.interval(100).map(lambda i: 'First : %s' % i)
ys = O.interval(100).map(lambda i: 'Second: %s' % i)
d = subs(xs.group_join(ys,
lambda _: O.timer(0),
lambda _: O.timer(0),
lambda x, yy: yy.select(lambda y: '%s %s' % (x, y))).merge_all().take(5))
Explanation: ... or, alternatively, group_join
The groupJoin operator takes four parameters:
the second Observable to combine with the source Observable
a function that accepts an item from the source Observable and returns an Observable whose lifespan governs the duration during which that item will combine with items from the second Observable
a function that accepts an item from the second Observable and returns an Observable whose lifespan governs the duration during which that item will combine with items from the first Observable
a function that accepts an item from the first Observable and an Observable that emits items from the second Observable and returns an item to be emitted by the Observable returned from groupJoin
End of explanation
rst()
# see the similarity to zip.
ts = time.time()
def _dt():
# giving us info when an element was created:
return 'from time: %.2f' % (time.time() - ts)
one = O.interval(1000) .map(lambda i: 'Seconds : %s %s' % (i, _dt())).take(5)
two = O.interval(500) .map(lambda i: 'HalfSecs: %s %s' % (i, _dt())).take(5)
three = O.interval(100).map(lambda i: '10thS : %s %s' % (i, _dt())).take(5)
z = O.when(
one \
.and_(two) \
.and_(three)\
.then_do(lambda a, b, c: '\n'.join(('', '', a, b, c))))
# from the output you see that the result stream consists of elements built at each interval
# (which is in the past for 'two' and 'three'),
# buffered until the 1 second sequence 'one' advances a step.
d = subs(z)
Explanation: ... by means of Pattern and Plan intermediaries And/Then/When, and / then / when
The combination of the And, Then, and When operators behave much like the Zip operator, but they do so by means of intermediate data structures. And accepts two or more Observables and combines the emissions from each, one set at a time, into Pattern objects. Then operates on such Pattern objects, transforming them in a Plan. When in turn transforms these various Plan objects into emissions from an Observable.
details
The And/Then/When trio has more overloads that enable you to group an even greater number of sequences. They also allow you to provide more than one 'plan' (the output of the Then method). This gives you the Merge feature but on the collection of 'plans'. I would suggest playing around with them if this functionality is of interest to you. The verbosity of enumerating all of the combinations of these methods would be of low value. You will get far more value out of using them and discovering for yourself.
As we delve deeper into the depths of what the Rx libraries provide us, we can see more practical usages for it. Composing sequences with Rx allows us to easily make sense of the multiple data sources a problem domain is exposed to. We can concatenate values or sequences together sequentially with StartWith, Concat and Repeat. We can process multiple sequences concurrently with Merge, or process a single sequence at a time with Amb and Switch. Pairing values with CombineLatest, Zip and the And/Then/When operators can simplify otherwise fiddly operations like our drag-and-drop examples and monitoring system status.
End of explanation
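# The cell above uses a single 'plan'; as noted, when() can also be given several plans,
# which behaves like a merge over the plans' outputs. A small sketch of that idea
# (the multi-plan call signature for O.when is assumed here; rst/subs/O come from the
# notebook's Part 1 helpers):
rst(O.when, title='when() with two plans')
xs = O.interval(100).map(lambda i: 'X%s' % i).take(3)
ys = O.interval(100).map(lambda i: 'Y%s' % i).take(3)
zs = O.interval(100).map(lambda i: 'Z%s' % i).take(3)
plan_a = xs.and_(ys).then_do(lambda x, y: '%s+%s' % (x, y))
plan_b = xs.and_(zs).then_do(lambda x, z: '%s+%s' % (x, z))
d = subs(O.when(plan_a, plan_b))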
rst(O.switch_latest)
s = O.range(0, 3).select(lambda x: O.range(x, 3)\
# showing from which stream our current value comes:
.map(lambda v: '%s (from stream nr %s)' % (v, x)))\
.switch_latest()
d = subs(s)
Explanation: ... and emitting the items from only the most-recently emitted of those Observables switch_latest
End of explanation |
2,959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bubble Breaker in Python / Javascript
The key 'board' data structure is a numpy array, which is (for efficiency) stored on its side (with the bottom-right phone cell being the board[0,0] cell)
Step1: Load Game Mechanics / UI code
Step2: Create a sample (numeric) board
Step3: Key trick for this notebook is the ability for Jupyter to go 'round-trip' from Python back-end to Javascript in the browser, and back again. There's a block of helper javascript in the (Python) file crush-ui
Step5: Having imported that base code, we can now create UI elements for javascript to manipulate
Step6: And, now initialise a board and display it
Step8: But - because of the Python-javascript-Python roundtripping - you can now play the game (click on linked cells)!
Once you run out of moves to do, the game is over. You can restart it by refreshing the board generation cell above.
Smaller Board (for training NOW)
Step9: Now, there's quite a lot of machinery required to do Q() learning. So we'll take it one step at a time.
Convert a Board to Features
Step10: Build the Model to Evaluate the Board's Features
Step11: Quick tests of functionality
Step12: Logic to run 1 game
And a 'test()' function that can evaluate the current network, by running a set of 10 fixed games deterministically.
Step13: Ready to Train the Network...
Step14: Draw Graph of Training Process
Step16: Using the model - let's see how it plays
Step17: Save model parameters
Step18: Now for a Full-Sized Version
The GPU speed-up factor isn't very large here (2-3x), since a lot of the training time is spent in the Python game-play and feature generation code. Moreover, the network isn't very large, so the GPU speed is dominated by PCI transfer times.
The Theano/Lasagne version was trained in ~5 hours using a Titan X GPU.
The TensorFlow/Keras version took about 12 hours using a decent i7 CPU to the same point
The most up-to-date TensorFlow model was run for a few days on the Titan X (1 million games)
Loading the pre-trained model
Step20: Running the Pre-Trained model | Python Code:
import os
import numpy as np
import shutil, requests
import pickle
Explanation: Bubble Breaker in Python / Javascript
The key 'board' data structure is a numpy array, which is (for efficiency) stored on its side (with the bottom-right phone cell being the board[0,0] cell):
End of explanation
models_dir = './models/game'
rcl_base_dir = 'http://redcatlabs.com/downloads/deep-learning-workshop/notebooks/models/game/'
if not os.path.exists(models_dir):
os.makedirs(models_dir)
for py in ['__init__.py', 'crush.py', 'crush_ui.py']:
py_path = os.path.join(models_dir, py)
if not os.path.isfile( py_path ):
print("Downloading %s" % (py_path,))
response = requests.get(rcl_base_dir+py, stream=True)
with open(py_path, 'wb') as out_file:
shutil.copyfileobj(response.raw, out_file)
print("Model mechanics code available locally")
from models.game import crush
from models.game import crush_ui
# These are pure numpy (no fancy code - just the game mechanics/UI)
import importlib
importlib.reload(crush)
importlib.reload(crush_ui);
Explanation: Load Game Mechanics / UI code
End of explanation
crush.new_board(10, 14, n_colours=5)
Explanation: Create a sample (numeric) board
End of explanation
from IPython.display import HTML
#HTML(crush_ui.javascript_test)
HTML(crush_ui.javascript_base+'<code>Javascript Loaded</code>')
Explanation: Key trick for this notebook is the ability for Jupyter to go 'round-trip' from Python back-end to Javascript in the browser, and back again. There's a block of helper javascript in the (Python) file crush-ui:
End of explanation
javascript =
<div id="board_10_14_playme"></div>
<script type="text/Javascript">create_board("#board_10_14_playme",10,14,5,'board_playme');</script>
HTML(javascript)
Explanation: Having imported that base code, we can now create UI elements for javascript to manipulate :
End of explanation
board_playme = crush.new_board(10, 14, n_colours=5)
HTML(crush_ui.display_via_javascript_script("#board_10_14_playme", board_playme))
Explanation: And, now initialise a board and display it:
End of explanation
class board_param_set:
def __init__(self, width, height, n_colours):
self.width, self.height, self.n_colours = width, height, n_colours
p_small = board_param_set(5,8,4)
javascript =
<div id="board_small_playme"></div>
<script type="text/Javascript">create_board("#board_small_playme",%d,%d,%d, 'board_small');</script>
% (p_small.width, p_small.height, p_small.n_colours)
HTML(javascript)
board_small = crush.new_board(p_small.width, p_small.height, p_small.n_colours)
HTML(crush_ui.display_via_javascript_script("#board_small_playme", board_small))
Explanation: But - because of the Python-javascript-Python roundtripping - you can now play the game (click on linked cells)!
Once you run out of moves to do, the game is over. You can restart it by refreshing the board generation cell above.
Smaller Board (for training NOW)
End of explanation
def make_features_in_layers(board):
feature_layers = [] # These are effectively 'colours' for the CNN
mask = np.greater( board[:, :], 0 )*1.
feature_layers.append( mask.astype('float32') )
# This works out whether each cell is the same as the cell 'above it'
for shift_down in [1,2,3,4,5,]:
sameness = np.zeros_like(board, dtype='float32')
sameness[:,:-shift_down] = np.equal( board[:, :-shift_down], board[:, shift_down:] )*1.
feature_layers.append( sameness )
# This works out whether each cell is the same as the cell in to columns 'to the left of it'
for shift_right in [1,2,3,]:
sameness = np.zeros_like(board, dtype='float32')
sameness[:-shift_right,:] = np.equal( board[:-shift_right, :], board[shift_right:, :] )*1.
feature_layers.append( sameness )
stacked = np.dstack( feature_layers )
# Return a single feature map stack ~ (input channels, input rows, input columns)
# == Theano order (need to cope with this ordering in TensorFlow, since it is 'unnatural' there)
return np.rollaxis( stacked, 2, 0 )
features_shape_small = make_features_in_layers(board_small).shape
print("('feature layers', width, height) : %s" % (features_shape_small, ))
Explanation: Now, there's quite a lot of machinery required to do Q() learning. So we'll take it one step at a time.
Convert a Board to Features
End of explanation
# Execute for Theano / Lasagne version
raise Exception('Use the TensorFlow version!')
import theano
import lasagne
def build_cnn_theano_lasagne(input_var, features_shape):
# Create a CNN of two convolution layers and
# a fully-connected hidden layer in front of the output 'action score' layer
lasagne.random.set_rng( np.random )
# input layer
network = lasagne.layers.InputLayer(
shape=(None, features_shape[0], features_shape[1], features_shape[2]),
input_var=input_var)
# Two convolutional layers (no dropout, no pooling)
network = lasagne.layers.Conv2DLayer(
network, num_filters=32, filter_size=(3,3),
nonlinearity=lasagne.nonlinearities.rectify,
W=lasagne.init.GlorotUniform(),
)
network = lasagne.layers.Conv2DLayer(
network, num_filters=16, filter_size=(3,3),
nonlinearity=lasagne.nonlinearities.rectify,
)
# Two fully-connected layers - leading to ONE output value : the Q(features(board))
network = lasagne.layers.DenseLayer(
network, num_units=32,
nonlinearity=lasagne.nonlinearities.rectify,
)
network = lasagne.layers.DenseLayer(
network, num_units=1,
nonlinearity=lasagne.nonlinearities.linear,
)
return network
class rl_model_theano_lasagne:
is_tensorflow=False
def __init__(self, features_shape):
board_input = theano.tensor.tensor4('inputs')
board_score = theano.tensor.vector('targets')
np.random.seed(0) # This is for the initialisation inside the CNN
cnn = build_cnn_theano_lasagne(board_input, features_shape)
self.network = cnn
# This is for running the model (training, etc)
estimate_q_value = lasagne.layers.get_output(cnn) # 'running'
self.evaluate_features = theano.function([board_input], estimate_q_value)
# This is for repeatedly testing the model (deterministic)
predict_q_value = lasagne.layers.get_output(cnn, deterministic=True)
self.evaluate_features_deterministic = theano.function([board_input], predict_q_value)
# This is used for training
mse = lasagne.objectives.squared_error( estimate_q_value.reshape( (-1,) ), board_score).mean()
params = lasagne.layers.get_all_params(cnn, trainable=True)
#updates = lasagne.updates.nesterov_momentum( mse, params, learning_rate=0.01, momentum=0.9 )
updates = lasagne.updates.adam( mse, params )
#updates = lasagne.updates.rmsprop( mse, params )
self.train = theano.function([board_input, board_score], mse, updates=updates)
def explain(self):
res, params=[], 0
for i, l in enumerate( lasagne.layers.get_all_layers(self.network) ):
params, params_before = lasagne.layers.count_params(l), params
res.append( "Layer %2d : %6d params in %s" % (i, params-params_before, type(l),) )
return res
def get_param_values(self):
return lasagne.layers.get_all_param_values(self.network)
def set_param_values(self, params):
lasagne.layers.set_all_param_values(self.network, params)
rl_model = rl_model_theano_lasagne
# Execute for Tensorflow / Keras version
#raise Exception('Use the Theano/Lasagne version!')
import tensorflow as tf
import tensorflow.contrib.keras as keras
from tensorflow.contrib.keras.api.keras import backend as K
from tensorflow.contrib.keras.api.keras.layers import Input, Permute, Conv2D, Flatten, Dense
from tensorflow.contrib.keras.api.keras.models import Model
def build_cnn_tf_keras(input_var): #, features_shape
# Create a CNN of two convolution layers and
# a fully-connected hidden layer in front of the output 'action score' layer
# Fix up the board_features being created in Theano ordering...
#K.set_image_data_format('channels_first')
x = Permute( (2,3,1) )(input_var) # Probably more efficient to fix up instead
# Two convolutional layers (no dropout, no pooling, getting smaller)
x = Conv2D(32, (3,3), strides=(1, 1), padding='valid', use_bias=True, activation=K.relu)(x)
x = Conv2D(16, (3,3), strides=(1, 1), padding='valid', use_bias=True, activation=K.relu)(x)
x = Flatten()(x)
# Two fully-connected layers - leading to ONE output value : the Q(features(board))
x = Dense(32, activation=K.relu)(x)
x = Dense(1)(x) # , activation=K.linear
return x
class rl_model_tf_keras:
is_tensorflow=True
def __init__(self, features_shape):
board_input = Input(shape=(features_shape[0], features_shape[1], features_shape[2]),
name="board_features")
np.random.seed(0) # This is for the initialisation inside the CNN
board_score = build_cnn_tf_keras(board_input)
# https://keras.io/getting-started/faq/
# #how-can-i-obtain-the-output-of-an-intermediate-layer
self.get_q_value = K.function([board_input, K.learning_phase()], [board_score])
# This is used for training
model = Model(inputs=board_input, outputs=board_score)
#opt = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
#opt = keras.optimizers.Adam()
opt = keras.optimizers.RMSprop()
model.compile(opt,'mean_squared_error')
self.model = model
def evaluate_features(self, board_features_batch): # run the model (during training, etc)
# This would be helpful :: https://github.com/fchollet/keras/issues/3072
return self.get_q_value([board_features_batch, 1])[0] # training-mode=1
def evaluate_features_deterministic(self, board_features_batch):
# testing the model (deterministic)
return self.model.predict(x=board_features_batch)
def train(self, boards, targets):
hist = self.model.fit( x=np.array(boards), y=np.array(targets),
batch_size=len(boards), verbose=0)
#print(hist.history.keys())
#print(hist.history['loss'])
return hist.history['loss'][0] # want MSE as a scalar
def explain(self): return self.model.summary()
def get_param_values(self):
return [ layer.get_weights() for layer in self.model.layers ]
def set_param_values(self, params):
for i, layer in enumerate(self.model.layers):
layer.set_weights(params[i])
rl_model = rl_model_tf_keras
Explanation: Build the Model to Evaluate the Board's Features
End of explanation
model_small = rl_model(features_shape_small)
model_small.explain()
b = crush.new_board(p_small.width, p_small.height, p_small.n_colours)
f = make_features_in_layers(b)
f_arr = np.array( [f,10*f,f] )
#print(model_small.evaluate_features_deterministic( f_arr ))
print(model_small.evaluate_features( f_arr ))
#model_small.model.fit( x=np.array([f]), y=np.array([5.]), batch_size=1)
model_small.train( [f], [5.] )
Explanation: Quick tests of functionality
End of explanation
def play_game(game_id, model, board_params,
per_step_discount_factor=0.95, prob_exploration=0.1, capture_game=None):
training_data = dict( board=[], target=[] )
np.random.seed(game_id)
board = crush.new_board(board_params.width, board_params.height, board_params.n_colours)
score_total, new_cols_total, moves_total, game_step = 0,0,0,0
while True:
if capture_game:
capture_game(board, score_total)
moves = crush.potential_moves(board)
moves_total += len(moves)
if len(moves)==0:
# Need to add a training example : This is a zero-score outcome
training_data['board'].append( make_features_in_layers(board) )
training_data['target'].append( 0. )
break
# Let's find the highest-scoring of those moves: First, get all the features
next_step_features = []
next_step_target = []
for (h,v) in moves: # [0:2]
b, score, n_cols = crush.after_move(board, h,v, -1) # Added columns are unknown
next_step_features.append( make_features_in_layers(b) )
#next_step_target.append( score )
next_step_target.append( n_cols )
# Now evaluate the Q() values of the resulting postion for each possible move in one go
all_features = np.array(next_step_features) # , dtype='float32'
#print("all_features.shape", all_features.shape )
remember_training, next_mv = False, -1
if prob_exploration<0: # This is testing only - just need to pick the best move
next_step_q = model.evaluate_features_deterministic( all_features )
else:
if np.random.uniform(0.0, 1.0)<prob_exploration:
## Choose a random move, and do it
next_mv = np.random.randint( len(moves) )
else:
next_step_q = model.evaluate_features( all_features )
remember_training=True
if next_mv<0:
next_step_aggregate = ( np.array( next_step_target, dtype='float32') +
per_step_discount_factor * next_step_q.flatten() )
next_mv = np.argmax( next_step_aggregate )
(h,v) = moves[next_mv]
#print("Move : (%2d,%2d)" % (h,v))
#crush.show_board(board, highlight=(h,v))
if remember_training: # Only collect training data if not testing
training_data['board'].append( make_features_in_layers(board) )
# This value includes a Q() that looks at the 'blank cols', rather than the actuals
training_data['target'].append( next_step_aggregate[next_mv] )
board, score, new_cols = crush.after_move(board, h,v, board_params.n_colours) # Now we do the move 'for real'
score_total += score
new_cols_total += new_cols
#print("Move[%2d]=(%2d,%2d) -> Score : %3d, new_cols=%1d" % (i, h,v, score,new_cols))
#crush.show_board(board, highlight=(0,0))
game_step += 1
stats=dict(
steps=game_step, av_potential_moves=float(moves_total) / game_step,
score=score_total, new_cols=new_cols_total
)
return stats, training_data
def stats_aggregates(log, prefix, last=None):
stats_cols = "steps av_potential_moves new_cols score model_err".split()
if last:
stats_overall = np.array([ [s[c] for c in stats_cols] for s in log[-last:] ])
else:
stats_overall = np.array([ [s[c] for c in stats_cols] for s in log ])
print()
print(prefix + " #steps #moves_av new_cols score model_err")
print(" Min : ", ["%6.1f" % (v,) for v in np.min(stats_overall, axis=0).tolist()] )
print(" Max : ", ["%6.1f" % (v,) for v in np.max(stats_overall, axis=0).tolist()] )
print(" Mean : ", ["%6.1f" % (v,) for v in np.mean(stats_overall, axis=0).tolist()] )
print()
def run_test(i, model, board_params):
# Run a test set of 10 games (not in training examples)
stats_test_log=[]
for j in range(0,10):
stats_test, _ = play_game(1000*1000*1000+j, model, board_params, prob_exploration=-1.0)
stats_test['model_err'] = -999.
stats_test_log.append( stats_test )
stats_aggregates(stats_test_log, "=Test[%5d]" % (i,))
return stats_test_log
# Initial run, testing the score of an untrained SMALL network
_ = run_test(0, model_small, p_small)
Explanation: Logic to run 1 game
And a 'test()' function that can evaluate the current network, by running a set of 10 fixed games deterministically.
End of explanation
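# The value stored as a training target above is a one-step Q-learning target:
#     target(board, move) = new_cols(move) + per_step_discount_factor * Q(board_after_move)
# A tiny worked example with made-up numbers for three candidate moves:
demo_new_cols = np.array([0., 1., 0.], dtype='float32')     # immediate reward per move
demo_q_next   = np.array([2.1, 0.4, 1.9], dtype='float32')  # network's Q() of each resulting board
demo_targets  = demo_new_cols + 0.95 * demo_q_next          # 0.95 ~ per_step_discount_factor
print(demo_targets, '-> chosen move:', np.argmax(demo_targets))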
model, board_params = model_small, p_small
#model, board_params = model_trained, p_trained
import datetime
t0,i0 = datetime.datetime.now(),0
t_start=t0
stats_log=[]
training_data=dict( board=[], target=[])
stats_log_test = run_test(0, model, board_params)
#n_games, test_every = 20*1000, 1000
n_games, test_every = 1*1000, 100
batchsize=512
for i in range(0, n_games):
# PLAY the game with seed==i
stats, training_data_new = play_game(i, model, board_params)
if False:
print("game[%d]" % (i,))
print(" steps = %d" % (stats['steps'],))
print(" average moves = %5.1f" % (stats['av_potential_moves'], ) )
print(" new_cols = %d" % (stats['new_cols'],))
print(" score_total = %d" % (stats['score'],))
training_data['board'] += training_data_new['board']
training_data['target'] += training_data_new['target']
# This keeps the window from growing too big
if len(training_data['target'])>batchsize*2:
training_data['board'] = training_data['board'][-batchsize:]
training_data['target'] = training_data['target'][-batchsize:]
for iter in range(0,1):
err = model.train( training_data['board' ][-batchsize:],
training_data['target'][-batchsize:] )
stats['model_err'] = err
stats_log.append( stats )
if ((i+1) % (test_every/5))==0:
t_now = datetime.datetime.now()
t_elapsed = (t_now - t0).total_seconds()
t_end_projected = t0 + datetime.timedelta( seconds=(n_games-i0) * (t_elapsed/(i-i0)) )
print((" Time(1000 games)~%.0fsec. "+
"Will end in ~%.0f sec (~ %s), stored_data.length=%d") %
(1000.*t_elapsed/(i-i0), ((n_games-i))*t_elapsed/(i-i0),
t_end_projected.strftime("%H:%M"), len(training_data['target']), ))
t0, i0 = datetime.datetime.now(), i
#if ((i+1) % test_every)==0:
# stats_aggregates(stats_log, "Train[%5d]" % (i,), last=1000)
if ((i+1) % test_every)==0:
stats_log_test.extend( run_test(i, model, board_params) )
stats_aggregates(stats_log, "FINAL[%5d]" % (n_games,) )
Explanation: Ready to Train the Network...
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
for c in ['new_cols', 'score']: # "steps av_potential_moves new_cols score model_err"
plt.figure(figsize=(6, 4))
for log in [ stats_log_test ]: #[stats_log, stats_log_test]:
t = np.arange(0.0, len(stats_log), len(stats_log) / len(log))
v = np.log( [ l[c]+1. for l in log ] )
plt.ylabel('log( '+c+' )', fontsize=16)
plt.plot(t, v)
plt.plot(t, np.poly1d( np.polyfit(t, v, 1) )(t))
plt.grid(b=True, which='major', color='b', axis='y', linestyle='-')
plt.show()
Explanation: Draw Graph of Training Process
End of explanation
javascript =
<div id="board_small_watch"></div>
<script type="text/Javascript">create_board("#board_small_watch",%d,%d,%d,'board_small');</script>
% (p_small.width, p_small.height, p_small.n_colours)
HTML(javascript)
seed = 10
board = crush.new_board(p_small.width, p_small.height, p_small.n_colours)
boards_for_game, scores_for_game=[],[]
def capture_game(b,s): boards_for_game.append(b);scores_for_game.append(s)
stats, _ = play_game(seed, model_small, p_small, capture_game=capture_game)
HTML( crush_ui.display_gameplay("#board_small_watch", boards_for_game, scores_for_game, 0.1) )
Explanation: Using the model - let's see how it plays
End of explanation
model_save, p_save = model_small, p_small
#model_save, p_save = model_trained, p_trained
if False:
param_dictionary = dict(
param_values = model_save.get_param_values(),
width=p_save.width, height=p_save.height, n_colours=p_save.n_colours,
batchsize=batchsize,
i=i, )
with open('./data/game/crush/rl_%dx%dx%d_%s.%06d.pkl' % (
p_save.width, p_save.height, p_save.n_colours,
t_start.strftime("%Y-%m-%d_%H-%M"), i,), 'wb') as f:
pickle.dump(param_dictionary, f)
Explanation: Save model parameters
End of explanation
param_dictionary=dict(width=10, height=14, n_colours=5)
if rl_model.is_tensorflow:
print("Loading a TensorFlow/Keras Model")
#with open('./data/game/crush/rl_10x14x5_2017-05-17_16-05.019999.pkl', 'rb') as f:
#with open('./data/game/crush/rl_10x14x5_2017-05-17_18-23.039999.pkl', 'rb') as f:
#with open('./data/game/crush/rl_10x14x5_2017-05-17_18-23.229999.pkl', 'rb') as f:
#with open('./data/game/crush/rl_10x14x5_2017-05-17_18-23.389999.pkl', 'rb') as f:
#with open('./data/game/crush/rl_10x14x5_2017-05-17_18-23.989999.pkl', 'rb') as f:
with open('./data/game/crush/rl_10x14x5_2017-05-17_18-23.999999.pkl', 'rb') as f:
param_dictionary=pickle.load(f, encoding='iso-8859-1')
else:
print("Loading a Theano/Lasagne Model")
with open('./data/game/crush/rl_10x14x5_2016-06-21_03-27.049999.pkl', 'rb') as f:
param_dictionary=pickle.load(f, encoding='iso-8859-1')
width, height, n_colours = ( param_dictionary[k] for k in 'width height n_colours'.split() )
p_trained = board_param_set( width, height, n_colours )
board_trained = crush.new_board(p_trained.width,
p_trained.height,
n_colours=p_trained.n_colours)
features_shape_trained = make_features_in_layers(board_trained).shape
print("('feature layers', width, height) : %s" %(features_shape_trained, ))
model_trained = rl_model( features_shape_trained )
model_trained.explain()
model_trained.set_param_values( param_dictionary['param_values'] )
Explanation: Now for a Full-Sized Version
The GPU speed-up factor isn't very large here (2-3x), since a lot of the training time is spent in the Python game-play and feature generation code. Moreover, the network isn't very large, so the GPU speed is dominated by PCI transfer times.
The Theano/Lasagne version was trained in ~5 hours using a Titan X GPU.
The TensorFlow/Keras version took about 12 hours using a decent i7 CPU to the same point
The most up-to-date TensorFlow model was run for a few days on the Titan X (1 million games)
Loading the pre-trained model
End of explanation
javascript =
<div id="board_10_14_trained"></div>
<script type="text/Javascript">
create_board("#board_10_14_trained",%d,%d,%d, 'board_trained');
</script>
% (p_trained.width, p_trained.height, p_trained.n_colours)
HTML(javascript)
seed = 1000
board_trained = crush.new_board(width, height, n_colours=n_colours)
boards_for_game, scores_for_game=[],[]
def capture_game(b,s): boards_for_game.append(b);scores_for_game.append(s)
stats, _ = play_game(seed, model_trained, p_trained, capture_game=capture_game)
print(stats)
HTML(crush_ui.display_gameplay("#board_10_14_trained", boards_for_game, scores_for_game, 0.1) )
Explanation: Running the Pre-Trained model
End of explanation |
2,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning on Large Volumes of Text
Mario Graff ([email protected], [email protected])
Sabino Miranda ([email protected])
Daniela Moctezuma ([email protected])
Eric S. Tellez ([email protected])
CONACYT, INFOTEC and CentroGEO
https://github.com/ingeotec
Step1: An oscillatory effect whose period corresponds to the days of the week.
TGIF (Thank God, it's Friday).
To remove this phenomenon
Subtract the median per day of the week
Step2: Polarity analysis for the United States, Argentina, Mexico and Spain
United States
Mexico
Argentina
Spain.
March 19
June 19 is important for all of these nations except Spain
July 20, when Friend's Day is celebrated in Argentina.
Step3: Descriptive Analysis of the Tweets in Spanish
Starting from the stored tweets, some basic statistics are presented that describe interesting characteristics of the data, such as the number of users per country and the mobility of the users.
Number of users per country
The following figure shows the users per country. It can be seen that the United States has the largest number of users, followed by Argentina, with Mexico in third place. Somewhat surprisingly, Brazil is fourth and Spain is in fifth place.
Step4: Mobility of (Spanish-speaking) Twitter users
The following figure shows which countries are visited most frequently by the users of a particular country. For example, most users from the United States who travel to another country travel to Mexico, in second place to Puerto Rico, and so on; users from Argentina travel to Brazil in the first place;
users from Mexico travel to the United States; and users from Spain also travel to the United States. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import gzip
import json
import numpy as np
def read_data(fname):
with gzip.open(fname) as fpt:
d = json.loads(str(fpt.read(), encoding='utf-8'))
return d
%matplotlib inline
plt.figure(figsize=(20, 10))
mx_pos = read_data('spanish/polarity_by_country/MX.json.gz')
ticks = [str(x[0])[2:] for x in mx_pos]
mu = [x[1] for x in mx_pos]
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=90)
plt.grid()
Explanation: Machine Learning on Large Volumes of Text
Mario Graff ([email protected], [email protected])
Sabino Miranda ([email protected])
Daniela Moctezuma ([email protected])
Eric S. Tellez ([email protected])
CONACYT, INFOTEC and CentroGEO
https://github.com/ingeotec
Objective
The student will be able to build multilingual text models applicable to large volumes of information. On top of these models, the student will be able to apply supervised learning algorithms to different application domains, such as polarity classifiers, determining authorship from text, or determining the topic of a text, among others.
Topics
Introduction
Motivation (sentiment analysis, predator detection, spam, gender, age, authorship in general, marketing, reputation, etc.)
State of the art (shared-task competitions)
Tools used: $\mu$TC, Python, numpy, nltk, sklearn
Vector representation of text
Normalization
Tokenization (n-words, q-grams, skip-grams)
Term weighting (TFIDF)
Similarity measures
Supervised learning
General learning model; training, test, score (accuracy, recall, precision, f1)
Support vector machines (SVM)
Genetic programming (EvoDAG)
Distant supervision
$\mu$TC
Pipeline of transformations
Parameter optimization
Classifiers
Using $\mu$TC
Applications
Sentiment analysis
Authorship attribution
News classification
Spam
Gender and age
Conclusions
Polarity Analysis of Geo-Referenced Tweets
Analysis of the polarity of geo-referenced tweets
Collected from December 16, 2015 to November 25, 2016
All the tweets are written in Spanish and are geo-located.
Polarity Analysis Web Service (SWAP)
Methodology
Tweets whose declared country of origin is Mexico (MX tag) were selected
Approximately 37,198,787 tweets
Generated by 695,345 users
Analyzed with SWAP
The value is the positivity of the tweet.
To remove the bias that the most active users could introduce
The average positivity is measured per user, per day.
The daily positivity is the mean, over users, of each user's average positivity for that day.
Positivity Analysis for Mexico
The x-axis shows the different days
The y-axis shows the positivity value
The maximum value is 1 and the minimum value is 0.
The 10 most significant peaks, as well as the most pronounced valley, are marked on the x-axis.
December 24 and 25, 2015
December 31 and January 1, 2016
February 14
March 8 (International Women's Day)
April 30
May 10
May 15
June 19 (Father's Day)
November 9, 2016 (United States presidential election)
End of explanation
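# A small illustration (with made-up rows) of the per-user averaging described above:
# each user's tweets are first averaged per day, and the day's positivity is the mean of
# those per-user averages. The raw per-tweet data is not included in this notebook, so the
# frame below is purely hypothetical.
import pandas as pd
demo = pd.DataFrame([('u1', '2016-02-14', 0.9), ('u1', '2016-02-14', 0.7),
                     ('u2', '2016-02-14', 0.2), ('u2', '2016-02-15', 0.5)],
                    columns=['user', 'day', 'positivity'])
per_user_day = demo.groupby(['day', 'user'])['positivity'].mean()
print(per_user_day.groupby(level='day').mean())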
def remove_median(pos):
median = np.array(mx_pos[: -int((len(pos) % 7))])[:, 1]
median.shape = (int(median.shape[0] / 7), 7)
median = np.median(median, axis=0)
median = np.concatenate((np.concatenate([median for x in range(int(len(pos) / median.shape[0]))], axis=0),
median[:int(len(mx_pos) % 7)]), axis=0)
return [(x[0], x[1] - y) for x, y in zip(pos, median)]
plt.figure(figsize=(20, 10))
nmx_pos = remove_median(mx_pos)
mu = np.array([x[1] for x in nmx_pos])
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=90)
plt.grid()
plt.figure(figsize=(20, 20))
for k, D in enumerate([mx_pos, remove_median(mx_pos)]):
ticks = [str(x[0])[2:] for x in D]
mu = [x[1] for x in D]
plt.subplot(4, 1, k+1)
plt.plot(mu)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=45)
plt.grid()
Explanation: An oscillatory effect whose period corresponds to the days of the week.
TGIF (Thank God, it's Friday).
To remove this phenomenon
Subtract the median per day of the week
End of explanation
pos = [read_data('spanish/polarity_by_country/%s.json.gz' % x) for x in ['US', 'AR', 'ES']]
us_pos, ar_pos, es_pos = pos
plt.figure(figsize=(20, 10))
for code, D, k in zip(['US', 'MX', 'AR', 'ES'], [us_pos, mx_pos, ar_pos, es_pos],
range(4)):
D = remove_median(D)
ticks = [str(x[0])[2:] for x in D]
mu = [x[1] for x in D]
plt.subplot(4, 1, k+1)
plt.plot(mu)
plt.title(code)
index = np.argsort(mu)
index = np.concatenate((index[:1], index[-10:]))
_ = plt.xticks(index, [ticks[x] for x in index], rotation=45)
plt.grid()
plt.ylim(-0.20, 0.20)
Explanation: Polarity analysis for the United States, Argentina, Mexico and Spain
United States
Mexico
Argentina
Spain.
March 19
June 19 is important for all of these nations except Spain
July 20, when Friend's Day is celebrated in Argentina.
End of explanation
%matplotlib inline
from glob import glob
from multiprocessing import Pool
from tqdm import tqdm
from collections import Counter
def number_users(fname):
return fname, len(read_data(fname))
fnames = [i for i in glob('spanish/users_by_country/*.json.gz') if len(i.split('.')[0].split('/')[1]) == 2]
p = Pool(8)
res = [x for x in p.imap_unordered(number_users, fnames)]
p.close()
country_code = Counter()
for name, value in res:
code = name.split('.')[0].split('/')[1]
country_code[code] = value
mc = country_code.most_common()
size = 19
first = mc[:size]
extra = ('REST', sum([x[1] for x in mc[size:]]))
first.append(extra)
plt.figure(figsize=(10, 10))
_ = plt.pie([x[1] for x in first], labels=[x[0] for x in first])
Explanation: Descriptive Analysis of the Tweets in Spanish
Starting from the stored tweets, some basic statistics are presented that describe interesting characteristics of the data, such as the number of users per country and the mobility of the users.
Number of users per country
The following figure shows the users per country. It can be seen that the United States has the largest number of users, followed by Argentina, with Mexico in third place. Somewhat surprisingly, Brazil is fourth and Spain is in fifth place.
End of explanation
def migration(country_code='MX'):
fname = 'spanish/users_by_country/%s.json.gz' % country_code
d = read_data(fname)
other = Counter()
for x in d.values():
if len(x) == 1:
continue
c = Counter(x)
for xx in c.most_common()[1:]:
if xx[0] == country_code:
continue
other[xx[0]] += 1
return other
plt.figure(figsize=(10, 10))
for k, c in enumerate(['US', 'AR', 'MX', 'ES']):
other = migration(c)
mc = other.most_common()
first = mc[:size]
extra = ('REST', sum([x[1] for x in mc[size:]]))
first.append(extra)
plt.subplot(2, 2, k+1)
_ = plt.pie([x[1] for x in first], labels=[x[0] for x in first])
plt.title(c)
Explanation: Mobility of (Spanish-speaking) Twitter users
The following figure shows which countries are visited most frequently by the users of a particular country. For example, most users from the United States who travel to another country travel to Mexico, in second place to Puerto Rico, and so on; users from Argentina travel to Brazil in the first place;
users from Mexico travel to the United States; and users from Spain also travel to the United States.
End of explanation |
2,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Back to the main Index
Inline OMEX and COMBINE archives
Tellurium provides a way to easily edit the contents of COMBINE archives in a human-readable format called inline OMEX. To create a COMBINE archive, simply create a string containing all models (in Antimony format) and all simulations (in PhraSEDML format). Tellurium will transparently convert the Antimony to SBML and PhraSEDML to SED-ML, then execute the resulting SED-ML. The following example will work in either Jupyter or the Tellurium notebook viewer. The Tellurium notebook viewer allows you to create specialized cells for inline OMEX, which contain correct syntax-highlighting for the format.
Step1: Forcing Functions
A common task in modeling is to represent the influence of an external, time-varying input on the system. In SED-ML, this can be accomplished using a repeated task to run a simulation for a short amount of time and update the forcing function between simulations. In the example, the forcing function is a pulse represented with a piecewise directive, but it can be any arbitrarily complex time-varying function, as shown in the second example.
Step2: 1d Parameter Scan
This example shows how to perform a one-dimensional parameter scan using Antimony/PhraSEDML and convert the study to a COMBINE archive. The example uses a PhraSEDML repeated task task1 to run a timecourse simulation task0 on a model for different values of the parameter J0_v0.
Step3: 2d Parameter Scan
There are multiple ways to specify the set of values that should be swept over. This example uses two repeated tasks instead of one. It sweeps through a discrete set of values for the parameter J1_KK2, and then sweeps through a uniform range for another parameter J4_KK5.
Step5: Stochastic Simulation and RNG Seeding
It is possible to programmatically set the RNG seed of a stochastic simulation in PhraSEDML using the <simulation-name>.algorithm.seed = <value> directive. Simulations run with the same seed are identical. If the seed is not specified, a different value is used each time, leading to different results.
Step8: Resetting Models
This example is another parameter scan which shows the effect of resetting the model or not after each simulation. When using the repeated task directive in PhraSEDML, you can pass the reset=true argument to reset the model to its initial conditions after each repeated simulation. Leaving this argument off causes the model to retain its current state between simulations. In this case, the time value is not reset.
Step9: 3d Plotting
This example shows how to use PhraSEDML to perform 3d plotting. The syntax is plot <x> vs <y> vs <z>, where <x>, <y>, and <z> are references to model state variables used in specific tasks. | Python Code:
import tellurium as te, tempfile, os
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
antimony_str = '''
model myModel
S1 -> S2; k1*S1
S1 = 10; S2 = 0
k1 = 1
end
'''
phrasedml_str = '''
model1 = model "myModel"
sim1 = simulate uniform(0, 5, 100)
task1 = run sim1 on model1
plot "Figure 1" time vs S1, S2
'''
# create an inline OMEX (inline representation of a COMBINE archive)
# from the antimony and phrasedml strings
inline_omex = '\n'.join([antimony_str, phrasedml_str])
# execute the inline OMEX
te.executeInlineOmex(inline_omex)
# export to a COMBINE archive
workingDir = tempfile.mkdtemp(suffix="_omex")
te.exportInlineOmex(inline_omex, os.path.join(workingDir, 'archive.omex'))
Explanation: Back to the main Index
Inline OMEX and COMBINE archives
Tellurium provides a way to easily edit the contents of COMBINE archives in a human-readable format called inline OMEX. To create a COMBINE archive, simply create a string containing all models (in Antimony format) and all simulations (in PhraSEDML format). Tellurium will transparently convert the Antimony to SBML and PhraSEDML to SED-ML, then execute the resulting SED-ML. The following example will work in either Jupyter or the Tellurium notebook viewer. The Tellurium notebook viewer allows you to create specialized cells for inline OMEX, which contain correct syntax-highlighting for the format.
End of explanation
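# The conversion that executeInlineOmex performs can also be inspected directly; for
# example the Antimony half of the archive (te.antimonyToSBML is assumed to be available
# among tellurium's utility functions -- check your tellurium version). The PhraSEDML half
# is converted to SED-ML internally before execution.
sbml_str = te.antimonyToSBML(antimony_str)
print(sbml_str.splitlines()[0])  # the generated document starts with the usual SBML header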
import tellurium as te
antimony_str = '''
// Created by libAntimony v2.9
model *oneStep()
// Compartments and Species:
compartment compartment_;
species S1 in compartment_, S2 in compartment_, $X0 in compartment_, $X1 in compartment_;
species $X2 in compartment_;
// Reactions:
J0: $X0 => S1; J0_v0;
J1: S1 => $X1; J1_k3*S1;
J2: S1 => S2; (J2_k1*S1 - J2_k_1*S2)*(1 + J2_c*S2^J2_q);
J3: S2 => $X2; J3_k2*S2;
// Species initializations:
S1 = 0;
S2 = 1;
X0 = 1;
X1 = 0;
X2 = 0;
// Compartment initializations:
compartment_ = 1;
// Variable initializations:
J0_v0 = 8;
J1_k3 = 0;
J2_k1 = 1;
J2_k_1 = 0;
J2_c = 1;
J2_q = 3;
J3_k2 = 5;
// Other declarations:
const compartment_, J0_v0, J1_k3, J2_k1, J2_k_1, J2_c, J2_q, J3_k2;
end
'''
phrasedml_str = '''
model1 = model "oneStep"
stepper = simulate onestep(0.1)
task0 = run stepper on model1
task1 = repeat task0 for local.x in uniform(0, 10, 100), J0_v0 = piecewise(8, x<4, 0.1, 4<=x<6, 8)
task2 = repeat task0 for local.index in uniform(0, 10, 1000), local.current = index -> abs(sin(1 / (0.1 * index + 0.1))), model1.J0_v0 = current : current
plot "Forcing Function (Pulse)" task1.time vs task1.S1, task1.S2, task1.J0_v0
plot "Forcing Function (Custom)" task2.time vs task2.S1, task2.S2, task2.J0_v0
'''
# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])
# export to a COMBINE archive
workingDir = tempfile.mkdtemp(suffix="_omex")
archive_name = os.path.join(workingDir, 'archive.omex')
te.exportInlineOmex(inline_omex, archive_name)
# convert the COMBINE archive back into an
# inline OMEX (transparently) and execute it
te.convertAndExecuteCombineArchive(archive_name)
Explanation: Forcing Functions
A common task in modeling is to represent the influence of an external, time-varying input on the system. In SED-ML, this can be accomplished using a repeated task to run a simulation for a short amount of time and update the forcing function between simulations. In the example, the forcing function is a pulse represented with a piecewise directive, but it can be any arbitrarily complex time-varying function, as shown in the second example.
End of explanation
import tellurium as te
antimony_str = '''
// Created by libAntimony v2.9
model *parameterScan1D()
// Compartments and Species:
compartment compartment_;
species S1 in compartment_, S2 in compartment_, $X0 in compartment_, $X1 in compartment_;
species $X2 in compartment_;
// Reactions:
J0: $X0 => S1; J0_v0;
J1: S1 => $X1; J1_k3*S1;
J2: S1 => S2; (J2_k1*S1 - J2_k_1*S2)*(1 + J2_c*S2^J2_q);
J3: S2 => $X2; J3_k2*S2;
// Species initializations:
S1 = 0;
S2 = 1;
X0 = 1;
X1 = 0;
X2 = 0;
// Compartment initializations:
compartment_ = 1;
// Variable initializations:
J0_v0 = 8;
J1_k3 = 0;
J2_k1 = 1;
J2_k_1 = 0;
J2_c = 1;
J2_q = 3;
J3_k2 = 5;
// Other declarations:
const compartment_, J0_v0, J1_k3, J2_k1, J2_k_1, J2_c, J2_q, J3_k2;
end
'''
phrasedml_str = '''
model1 = model "parameterScan1D"
timecourse1 = simulate uniform(0, 20, 1000)
task0 = run timecourse1 on model1
task1 = repeat task0 for J0_v0 in [8, 4, 0.4], reset=true
plot task1.time vs task1.S1, task1.S2
'''
# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])
# execute the inline OMEX
te.executeInlineOmex(inline_omex)
Explanation: 1d Parameter Scan
This example shows how to perform a one-dimensional parameter scan using Antimony/PhraSEDML and convert the study to a COMBINE archive. The example uses a PhraSEDML repeated task task1 to run a timecourse simulation task0 on a model for different values of the parameter J0_v0.
End of explanation
import tellurium as te
antimony_str = '''
// Created by libAntimony v2.9
model *parameterScan2D()
// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;
// Reactions:
J0: MKKK => MKKK_P; (J0_V1*MKKK)/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; (J1_V2*MKKK_P)/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; (J2_k3*MKKK_P*MKK)/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; (J3_k4*MKKK_P*MKK_P)/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; (J4_V5*MKK_PP)/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; (J5_V6*MKK_P)/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; (J6_k7*MKK_PP*MAPK)/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; (J7_k8*MKK_PP*MAPK_P)/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; (J8_V9*MAPK_PP)/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; (J9_V10*MAPK_P)/(J9_KK10 + MAPK_P);
// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;
// Compartment initializations:
compartment_ = 1;
// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;
// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
'''
phrasedml_str = '''
model_3 = model "parameterScan2D"
sim_repeat = simulate uniform(0,3000,100)
task_1 = run sim_repeat on model_3
repeatedtask_1 = repeat task_1 for J1_KK2 in [1, 5, 10, 50, 60, 70, 80, 90, 100], reset=true
repeatedtask_2 = repeat repeatedtask_1 for J4_KK5 in uniform(1, 40, 10), reset=true
plot repeatedtask_2.J4_KK5 vs repeatedtask_2.J1_KK2
plot repeatedtask_2.time vs repeatedtask_2.MKK, repeatedtask_2.MKK_P
'''
# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])
# execute the inline OMEX
te.executeInlineOmex(inline_omex)
Explanation: 2d Parameter Scan
There are multiple ways to specify the set of values that should be swept over. This example uses two repeated tasks instead of one. It sweeps through a discrete set of values for the parameter J1_KK2, and then sweeps through a uniform range for another parameter J4_KK5.
End of explanation
# -*- coding: utf-8 -*-
"""phrasedml repeated stochastic test"""
import tellurium as te
antimony_str = '''
// Created by libAntimony v2.9
model *repeatedStochastic()
// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;
// Reactions:
J0: MKKK => MKKK_P; (J0_V1*MKKK)/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; (J1_V2*MKKK_P)/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; (J2_k3*MKKK_P*MKK)/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; (J3_k4*MKKK_P*MKK_P)/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; (J4_V5*MKK_PP)/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; (J5_V6*MKK_P)/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; (J6_k7*MKK_PP*MAPK)/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; (J7_k8*MKK_PP*MAPK_P)/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; (J8_V9*MAPK_PP)/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; (J9_V10*MAPK_P)/(J9_KK10 + MAPK_P);
// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;
// Compartment initializations:
compartment_ = 1;
// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;
// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
'''
phrasedml_str = '''
model1 = model "repeatedStochastic"
timecourse1 = simulate uniform_stochastic(0, 4000, 1000)
timecourse1.algorithm.seed = 1003
timecourse2 = simulate uniform_stochastic(0, 4000, 1000)
task1 = run timecourse1 on model1
task2 = run timecourse2 on model1
repeat1 = repeat task1 for local.x in uniform(0, 10, 10), reset=true
repeat2 = repeat task2 for local.x in uniform(0, 10, 10), reset=true
plot "Repeats with same seed" repeat1.time vs repeat1.MAPK, repeat1.MAPK_P, repeat1.MAPK_PP, repeat1.MKK, repeat1.MKK_P, repeat1.MKKK, repeat1.MKKK_P
plot "Repeats without seeding" repeat2.time vs repeat2.MAPK, repeat2.MAPK_P, repeat2.MAPK_PP, repeat2.MKK, repeat2.MKK_P, repeat2.MKKK, repeat2.MKKK_P
'''
# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])
# execute the inline OMEX
te.executeInlineOmex(inline_omex)
Explanation: Stochastic Simulation and RNG Seeding
It is possible to programmatically set the RNG seed of a stochastic simulation in PhraSEDML using the <simulation-name>.algorithm.seed = <value> directive. Simulations run with the same seed are identical. If the seed is not specified, a different value is used each time, leading to different results.
End of explanation
import tellurium as te
antimony_str = '''
model case_02
J0: S1 -> S2; k1*S1;
S1 = 10.0; S2=0.0;
k1 = 0.1;
end
'''
phrasedml_str = '''
model0 = model "case_02"
model1 = model model0 with S1=5.0
sim0 = simulate uniform(0, 6, 100)
task0 = run sim0 on model1
# reset the model after each simulation
task1 = repeat task0 for k1 in uniform(0.0, 5.0, 5), reset = true
# show the effect of not resetting for comparison
task2 = repeat task0 for k1 in uniform(0.0, 5.0, 5)
plot "Repeated task with reset" task1.time vs task1.S1, task1.S2
plot "Repeated task without reset" task2.time vs task2.S1, task2.S2
'''
# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])
# execute the inline OMEX
te.executeInlineOmex(inline_omex)
Explanation: Resetting Models
This example is another parameter scan which shows the effect of resetting the model or not after each simulation. When using the repeated task directive in PhraSEDML, you can pass the reset=true argument to reset the model to its initial conditions after each repeated simulation. Leaving this argument off causes the model to retain its current state between simulations. In this case, the time value is not reset.
End of explanation
import tellurium as te
antimony_str = '''
// Created by libAntimony v2.9
model *case_09()
// Compartments and Species:
compartment compartment_;
species MKKK in compartment_, MKKK_P in compartment_, MKK in compartment_;
species MKK_P in compartment_, MKK_PP in compartment_, MAPK in compartment_;
species MAPK_P in compartment_, MAPK_PP in compartment_;
// Reactions:
J0: MKKK => MKKK_P; (J0_V1*MKKK)/((1 + (MAPK_PP/J0_Ki)^J0_n)*(J0_K1 + MKKK));
J1: MKKK_P => MKKK; (J1_V2*MKKK_P)/(J1_KK2 + MKKK_P);
J2: MKK => MKK_P; (J2_k3*MKKK_P*MKK)/(J2_KK3 + MKK);
J3: MKK_P => MKK_PP; (J3_k4*MKKK_P*MKK_P)/(J3_KK4 + MKK_P);
J4: MKK_PP => MKK_P; (J4_V5*MKK_PP)/(J4_KK5 + MKK_PP);
J5: MKK_P => MKK; (J5_V6*MKK_P)/(J5_KK6 + MKK_P);
J6: MAPK => MAPK_P; (J6_k7*MKK_PP*MAPK)/(J6_KK7 + MAPK);
J7: MAPK_P => MAPK_PP; (J7_k8*MKK_PP*MAPK_P)/(J7_KK8 + MAPK_P);
J8: MAPK_PP => MAPK_P; (J8_V9*MAPK_PP)/(J8_KK9 + MAPK_PP);
J9: MAPK_P => MAPK; (J9_V10*MAPK_P)/(J9_KK10 + MAPK_P);
// Species initializations:
MKKK = 90;
MKKK_P = 10;
MKK = 280;
MKK_P = 10;
MKK_PP = 10;
MAPK = 280;
MAPK_P = 10;
MAPK_PP = 10;
// Compartment initializations:
compartment_ = 1;
// Variable initializations:
J0_V1 = 2.5;
J0_Ki = 9;
J0_n = 1;
J0_K1 = 10;
J1_V2 = 0.25;
J1_KK2 = 8;
J2_k3 = 0.025;
J2_KK3 = 15;
J3_k4 = 0.025;
J3_KK4 = 15;
J4_V5 = 0.75;
J4_KK5 = 15;
J5_V6 = 0.75;
J5_KK6 = 15;
J6_k7 = 0.025;
J6_KK7 = 15;
J7_k8 = 0.025;
J7_KK8 = 15;
J8_V9 = 0.5;
J8_KK9 = 15;
J9_V10 = 0.5;
J9_KK10 = 15;
// Other declarations:
const compartment_, J0_V1, J0_Ki, J0_n, J0_K1, J1_V2, J1_KK2, J2_k3, J2_KK3;
const J3_k4, J3_KK4, J4_V5, J4_KK5, J5_V6, J5_KK6, J6_k7, J6_KK7, J7_k8;
const J7_KK8, J8_V9, J8_KK9, J9_V10, J9_KK10;
end
'''
phrasedml_str = '''
mod1 = model "case_09"
# sim1 = simulate uniform_stochastic(0, 4000, 1000)
sim1 = simulate uniform(0, 4000, 1000)
task1 = run sim1 on mod1
repeat1 = repeat task1 for local.x in uniform(0, 10, 10), reset=true
plot "MAPK oscillations" repeat1.MAPK vs repeat1.time vs repeat1.MAPK_P, repeat1.MAPK vs repeat1.time vs repeat1.MAPK_PP, repeat1.MAPK vs repeat1.time vs repeat1.MKK
# report repeat1.MAPK vs repeat1.time vs repeat1.MAPK_P, repeat1.MAPK vs repeat1.time vs repeat1.MAPK_PP, repeat1.MAPK vs repeat1.time vs repeat1.MKK
'''
# create the inline OMEX string
inline_omex = '\n'.join([antimony_str, phrasedml_str])
# execute the inline OMEX
te.executeInlineOmex(inline_omex)
Explanation: 3d Plotting
This example shows how to use PhraSEDML to perform 3d plotting. The syntax is plot <x> vs <y> vs <z>, where <x>, <y>, and <z> are references to model state variables used in specific tasks.
End of explanation |
2,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here we examine whether publishing volume has an impact on overall, article, or place traffic, specifically whether total, average, or median traffic is affected by increased publishing volume.
TL;DR, it's somewhat inconclusive, but we're probably better off publishing about 6 places per day. As far as articles are concerned, we seem to perform better the more we publish, but only to a point. That point seems to be around 12 or 13 articles per day.
Step1: Let's truncate the data to the period between June 1st, 2015 and March 4, 2016. This is to keep super old content out of the window while also eliminating any content less than 6 days old.
Step2: Here we plot the total page views, "PVs total", and the average page views based on how many pieces of content were published in a given day.
From this we see increasing returns for publishing more, but sparse data on the high end of the dataset.
Step3: Articles
Here we plot the average and total pageviews for articles based on number of articles published per day. It looks like there is improvement in performance for a while, but then there is a drop-off when publishing of articles exceeds 13 / day.
But it's not a very strong correlation
Step4: Places
It appears that there is a very weak correlation between total places published per day and either total or average Place performance.
Step5: Median Article Traffic
Step6: Again, there is nearly zero correlation between median article traffic per day and the volume of publishing
Let's look at Places, just to be sure
Step7: It looks like there is actually a weak relationship between median place traffic and overall publishing volume. It seems optimal to publish around 6 places per day.
import statsmodels.formula.api as smf | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('All content.csv', index_col='Published',parse_dates=True)
df['count']=1
df = df[(df['Page Views'] > 200)]
df_resampled = df.resample('D',how ='sum')
Explanation: Here we examine whether publishing volume has an impact on overall, article, or place traffic, specifically whether total, average, or median traffic is affected by increased publishing volume.
TL;DR, it's somewhat inconclusive, but we're probably better off publishing about 6 places per day. As far as articles are concerned, we seem to perform better the more we publish, but only to a point. That point seems to be around 12 or 13 articles per day.
End of explanation
df_trunc = df_resampled.truncate(before='2015-06-01', after='2016-03-04')
df_trunc = df_trunc.dropna()
df_trunc = df_trunc[['Page Views', 'Social Actions', 'Social Referrals', 'Facebook Shares', 'count']]
df_trunc['mean']=df_trunc['Page Views']//df_trunc['count']
Explanation: Let's truncate the data to the period between June 1st, 2015 and March 4, 2016. This is to keep super old content out of the window while also eliminating any content less than 6 days old.
End of explanation
df_trunc.plot(kind='scatter',x='count',y='Page Views',title='PVs total')
df_trunc.plot(kind='scatter',x='count',y='mean',title='Average PVs')
df_trunc.plot(kind='scatter',x='count',y='Facebook Shares',title='Total Facebook Shares')
df2 = pd.read_csv('All content.csv',index_col='Published',parse_dates=True)
df_articles = df2[(df2['Url'].str.contains('/articles/',na=False))]
df_places = df2[(df2['Url'].str.contains('/places/',na=False))]
df_articles['count']=1
df_places['count']=1
df_articles_resampled = df_articles.resample('D',how='sum')
df_articles_trunc = df_articles_resampled.truncate(before='2015-06-01', after='2016-03-04')
df_articles_trunc = df_articles_trunc.dropna()
df_articles_trunc = df_articles_trunc[['Page Views', 'Social Actions', 'Social Referrals', 'Facebook Shares', 'count']]
df_articles_trunc['mean']=df_articles_trunc['Page Views']//df_articles_trunc['count']
Explanation: Here we plot the total page views, "PVs total", and the average page views based on how many pieces of content were published in a given day.
From this we see increasing returns for publishing more, but sparse data on the high end of the dataset.
End of explanation
df_articles_trunc.plot(kind='scatter',x='count',y='Page Views',title='Articles PVs total')
df_articles_trunc.plot(kind='scatter',x='count',y='mean',title='Articles Average PVs')
df_articles_trunc.plot(kind='scatter',x='count',y='Facebook Shares',title='Total Facebook Shares')
df_places_resampled = df_places.resample('D',how='sum')
df_places_trunc = df_places_resampled.truncate(before='2015-06-01', after='2016-03-04')
df_places_trunc = df_places_trunc.dropna()
df_places_trunc = df_places_trunc[['Page Views', 'Social Actions', 'Social Referrals', 'Facebook Shares', 'count']]
df_places_trunc['mean']=df_places_trunc['Page Views']//df_places_trunc['count']
Explanation: Articles
Here we plot the average and total pageviews for articles based on number of articles published per day. It looks like there is improvement in performance for a while, but then there is a drop-off when publishing of articles exceeds 13 / day.
But it's not a very strong correlation
End of explanation
df_places_trunc.plot(kind='scatter',x='count',y='Page Views',title='Places PVs total')
df_places_trunc.plot(kind='scatter',x='count',y='mean',title='Places Average PVs')
df_places_trunc.plot(kind='scatter',x='count',y='Facebook Shares',title='Total Facebook Shares')
Explanation: Places
It appears that there is a very weak correlation between total places published per day and either total or average Place performance.
End of explanation
df_articles_resampled2 = df_articles.resample('D',how='median')
df_articles_trunc2 = df_articles_resampled2.truncate(before='2015-06-01', after='2016-03-04')
df_articles_trunc2 = df_articles_trunc2.dropna()
df_articles_trunc2 = df_articles_trunc2[['Page Views', 'Social Actions', 'Social Referrals', 'Facebook Shares']]
df_articles_trunc2['count']=df_articles_trunc['count']
df_articles_trunc2.plot(kind='scatter',x='count',y='Page Views',title='Median Articles PVs')
Explanation: Median Article Traffic
End of explanation
df_places_resampled2 = df_places.resample('D',how='median')
df_places_trunc2 = df_places_resampled2.truncate(before='2015-06-01', after='2016-03-04')
df_places_trunc2 = df_places_trunc2.dropna()
df_places_trunc2 = df_places_trunc2[['Page Views', 'Social Actions', 'Social Referrals', 'Facebook Shares']]
df_places_trunc2['count']=df_places_trunc['count']
df_places_trunc2.plot(kind='scatter',x='count',y='Page Views',title='Median Places PVs')
Explanation: Again, there is nearly zero correlation between median article traffic per day and the volume of publishing
Let's look at Places, just to be sure
End of explanation
import statsmodels.formula.api as smf
df_trunc
lm = smf.ols(formula='Q("Page Views") ~ count', data=df_trunc).fit()  # patsy's Q() is needed because the column name contains a space
lm.summary()
Explanation: It looks like there is actually a weak relationship between median place traffic and overall publishing volume. It seems optimal to publish around 6 places per day.
import statsmodels.formula.api as smf
End of explanation |
2,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a Simple Autoencoder
By
Step1: Problem 1a
Step2: Problem 1b.
Split the training and test set with a 66/33 split.
Problem 2
Step3: Problem 3. Training
This is going to be a lot of guess-and-check. You've been warned. In this block, we will train the autoencoder. Add a plotting function into the training.
Note that instead of cross-entropy, we use the "mean-square-error" loss. Switch between SGD and Adam optimizers. Which seems to work better? Optimize the learning-rate parameter and do not change other parameters, like momentum.
Write a piece of code to run train_model for 10 epochs. Play with the size of each hidden layer and encoded layer. When you feel you've found a reasonable learning rate, up this to 100 (or even 500 if you're patient) epochs. Hint
Step4: Problem 4a. Understand our Results
Plot an image (remember you will need to reshape it to a 14x14 grid) with imshow, and plot the autoencoder output for the same galaxy. Try plotting the difference between the two. What does your algorithm do well reconstructing? Are there certain features which it fails to reproduce?
Step5: Problem 4b.
Make a scatter plot of two of the 10 latent space dimensions. Do you notice any interesting correlations between different subsets of the latent space? Any interesting clustering?
Try color coding each point by the galaxy label using plt.scatter
Step6: Bonus Problem 5a Playing with the Latent Space
Create a random forest classifier to classify each galaxy using only your latent space.
Step7: Bonus Problem 5b Playing with the Latent Space
Create an isolation forest to find the most anomalous galaxies. Made a cumulative distribution plot showing the anomaly scores of each class of galaxies. Which ones are the most anomalous? Why do you think that is? | Python Code:
!pip install astronn
import torch
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import IsolationForest
from astroNN.datasets import load_galaxy10
from astroNN.datasets.galaxy10 import galaxy10cls_lookup
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, ConfusionMatrixDisplay
Explanation: Creating a Simple Autoencoder
By: V. Ashley Villar (PSU)
In this problem set, we will use Pytorch to learn a latent space for the same galaxy image dataset we have previously played with.
End of explanation
# Readin the data
images, labels = load_galaxy10()
labels = labels.astype(np.float32)
images = images.astype(np.float32)
images = torch.tensor(images)
labels = torch.tensor(labels)
# Cut down the resolution of the images!!! What is this line doing in words?
images = images[:,::6,::6,1]
#Plot an example image here
#Flatten images here
#Normalize the flux of the images here
Explanation: Problem 1a: Understanding our dataset...again
Our data is a little too big for us to train an autoencoder in ~1 minute. Let's lower the resolution of our images and only keep one filter. Plot an example of the lower resolution galaxies.
Next, flatten each image into a 1D array. Then rescale the flux of the images such that the mean is 0 and the standard deviation is 1.
End of explanation
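# A possible sketch for Problem 1a (one way to do it, not the notebook's official solution).
# It assumes the `images` and `labels` tensors loaded above; standardizing with a single
# global mean/std is an assumption, a per-image rescaling would also satisfy the prompt.
plt.imshow(images[0], cmap='gray')
plt.title(galaxy10cls_lookup(int(labels[0])))
plt.show()
images_flat = images.reshape(images.shape[0], -1)                     # flatten each image to 1D
images_flat = (images_flat - images_flat.mean()) / images_flat.std()  # mean 0, std 1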
class Autoencoder(torch.nn.Module):
# this defines the model
def __init__(self, input_size, hidden_size, hidden_inner, encoded_size):
super(Autoencoder, self).__init__()
print(input_size,hidden_size,encoded_size)
self.input_size = input_size
self.hidden_size = hidden_size
self.encoded_size = encoded_size
self.hidden_inner = hidden_inner
self.hiddenlayer1 = torch.nn.Linear(self.input_size, self.hidden_size)
# ADD A LAYER HERE
self.encodedlayer = torch.nn.Linear(self.hidden_inner, self.encoded_size)
self.hiddenlayer3 = torch.nn.Linear(self.encoded_size, self.hidden_inner)
# ADD A LAYER HERE
self.outputlayer = torch.nn.Linear(self.hidden_size, self.input_size)
# some nonlinear options
self.sigmoid = torch.nn.Sigmoid()
self.softmax = torch.nn.Softmax()
self.relu = torch.nn.ReLU()
def forward(self, x):
layer1 = self.hiddenlayer1(x)
activation1 = self.ACTIVATION?(layer1)
layer2 = self.hiddenlayer2(activation1)
activation2 = self.ACTIVATION?(layer2)
layer3 = self.encodedlayer(activation2)
activation3 = self.ACTIVATION?(layer3)
layer4 = self.hiddenlayer3(activation3)
activation4 = self.ACTIVATION?(layer4)
layer5 = self.hiddenlayer4(activation4)
activation5 = self.ACTIVATION?(layer5)
layer6 = self.outputlayer(activation5)
output = self.ACTIVATION?(layer6)
# Why do I have two outputs?
return output, layer3
Explanation: Problem 1b.
Split the training and test set with a 66/33 split.
Problem 2: Understanding the Autoencoder
Below is sample of an autoencoder, built in Pytorch. Describe the code line-by-line with a partner. Add another hidden layer before and after the encoded (latent) layer (this will be a total of 2 new layers). Choose the appropriate activation function for this regression problem. Make all of the activation functions the same.
End of explanation
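# A possible sketch for Problem 1b (hypothetical names; adapt to your own variables).
# train_test_split was imported from sklearn.model_selection above; `images_flat`
# comes from the Problem 1a sketch.
images_train, images_test, labels_train, labels_test = train_test_split(
    images_flat, labels, test_size=0.33, random_state=42)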
# train the model
def train_model(training_data,test_data, model):
# define the optimization
criterion = torch.nn.MSELoss()
# Choose between these two optimizers
#optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
#optimizer = torch.optim.Adam(model.parameters(), lr=0.1,weight_decay=1e-6)
for epoch in range(500):
# clear the gradient
optimizer.zero_grad()
# compute the model output
myoutput, encodings_train = model(training_data)
# calculate loss
loss = criterion(myoutput, training_data)
# credit assignment
loss.backward()
# update model weights
optimizer.step()
# Add a plot of the loss vs epoch for the test and training sets here
#Do your training here!!
hidden_size_1 = 100
hidden_size_2 = 50
encoded_size = 10
model = Autoencoder(np.shape(images_train[0])[0],hidden_size_1,hidden_size_2,encoded_size)
train_model(images_train, images_test, model)
Explanation: Problem 3. Training
This is going to be a lot of guess-and-check. You've been warned. In this block, we will train the autoencoder. Add a plotting function into the training.
Note that instead of cross-entropy, we use the "mean-square-error" loss. Switch between SGD and Adam optimizers. Which seems to work better? Optimize the learning-rate parameter and do not change other parameters, like momentum.
Write a piece of code to run train_model for 10 epochs. Play with the size of each hidden layer and encoded layer. When you feel you've found a reasonable learning rate, up this to 100 (or even 500 if you're patient) epochs. Hint: You want to find MSE~0.25.
End of explanation
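# One possible helper for the loss plot asked for in Problem 3 (a sketch, not the official
# solution): call it after collecting per-epoch train/test MSE values inside train_model.
def plot_losses(train_losses, test_losses):
    plt.plot(train_losses, label='train MSE')
    plt.plot(test_losses, label='test MSE')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    plt.show()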
#Make an image of the original image
#Make an image of its reconstruction
#Make an image of (original - reconstruction)
Explanation: Problem 4a. Understand our Results
Plot an image (remember you will need to reshape it to a 14x14 grid) with imshow, and plot the autoencoder output for the same galaxy. Try plotting the difference between the two. What does your algorithm do well reconstructing? Are there certain features which it fails to reproduce?
End of explanation
#Scatter plot between two dimensions of the latent space
#Try coloring the points
Explanation: Problem 4b.
Make a scatter plot of two of the 10 latent space dimensions. Do you notice any interesting correlations between different subsets of the latent space? Any interesting clustering?
Try color coding each point by the galaxy label using plt.scatter
End of explanation
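# A possible sketch for Problem 4b (assumes the trained `model` from above and the
# `images_test` / `labels_test` names from the earlier split sketch; plotting latent
# dimensions 0 and 1 is an arbitrary choice).
with torch.no_grad():
    _, encodings = model(images_test)   # second output of forward() is the encoded layer
enc = encodings.numpy()
plt.scatter(enc[:, 0], enc[:, 1], c=labels_test.numpy(), s=5, cmap='tab10')
plt.xlabel('latent dimension 0')
plt.ylabel('latent dimension 1')
plt.colorbar(label='galaxy class')
plt.show()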
clf = RandomForestClassifier(...)
clf.fit(...)
new_labels = clf.predict(...)
cm = confusion_matrix(labels_test,new_labels,normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
disp.plot()
plt.show()
Explanation: Bonus Problem 5a Playing with the Latent Space
Create a random forest classifier to classify each galaxy using only your latent space.
End of explanation
clf = IsolationForest(...).fit(encodings)
scores = -clf.score_samples(encodings) #I am taking the negative because the lowest score is actually the weirdest, which I don't like...
#Plot an image of the weirdest galazy!
#This plots the cumulative distribution
def cdf(x, label='',plot=True, *args, **kwargs):
x, y = sorted(x), np.arange(len(x)) / len(x)
return plt.plot(x, y, *args, **kwargs, label=label) if plot else (x, y)
ulabels = np.unique(labels)
for ulabel in ulabels:
gind = np.where(labels==ulabel)
cdf(...)
Explanation: Bonus Problem 5b Playing with the Latent Space
Create an isolation forest to find the most anomalous galaxies. Make a cumulative distribution plot showing the anomaly scores of each class of galaxies. Which ones are the most anomalous? Why do you think that is?
End of explanation |
2,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create new train folders with only the files from train_and_test_data_labels_safe.csv
Step1: Create TFRecords | Python Code:
import shutil
# Read files list. Header: file, class (0: interictal, 1: preictal), safe (or not to use)
files_list = np.genfromtxt('./train_and_test_data_labels_safe.csv',
dtype=("|S15", np.int32, np.int32), delimiter=',', skip_header=1)
# Get only files which are safe to use
files_list = [fl for fl in files_list if fl[2] == 1]
# Construct new file names based on class field
new_files_list = []
for fl in files_list:
name = fl[0].split('.')[0].split('_')
if len(name) == 3:
name = name[0] + '_' + name[1] + '_' + str(fl[1]) + '.mat'
else:
name = name[0] + '_' + name[1] + 'test_' + str(fl[1]) + '.mat'
new_files_list.append(name)
# Get only files names
files_list = [fl[0] for fl in files_list]
# Move files to new folder
print('Train data size:', len(files_list))
for idx in xrange(len(files_list)):
print('Copying', files_list[idx], '----->', new_files_list[idx], 'index:', idx)
shutil.copy('../data/train/'+files_list[idx], '../data/train_new/'+new_files_list[idx])
Explanation: Create new train folders with only the files from train_and_test_data_labels_safe.csv
End of explanation
_SOURCE_FILES = "../data/train/*.mat"
_DEST_FOLDER = "../dataset/train/"
_NUM_FILES = None # None is the total number of files
def mat2tfr(p_file, rem_dropout = False):
    # getting the filename and retrieving the patient, segment and label data
pat, seg, label = p_file.split('/')[-1].split('.')[0].split("_")
filename = pat + "_" + seg + "_" + label + ".tfr"
fullpathname = _DEST_FOLDER + filename
if os.path.exists(fullpathname):
print("Dataset file", fullpathname, "already exists, skipping...")
else:
t = time.time()
print("Converting " + p_file + " ----> " + fullpathname)
# converting mat file as numpy
mat = loadmat(p_file)
data = mat['dataStruct']['data'][0][0]
# Check if file is mainly zero's (100% dropout)
if rem_dropout:
if (np.count_nonzero(data) < 10) or (np.any(np.std(data, axis=0) < 0.5)):
print("WARNING: File %s is all dropout." %p_file)
return
# TensorFlow Records writer
with tf.python_io.TFRecordWriter(fullpathname) as tfrwriter:
# Fill protobuff
protobuf = tf.train.Example(features=tf.train.Features(feature={
'data' : tf.train.Feature(float_list=tf.train.FloatList(value=data.flatten().tolist())),
'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[int(label)])),
'filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=[filename])),
}))
write = tfrwriter.write(protobuf.SerializeToString())
elapsed = time.time() - t
print("elapsed: %.3fs"%elapsed)
def dataset(folder, num_files=None):
# get files
filenames = tf.gfile.Glob(folder)
# truncate reading
if num_files is not None:
filenames = filenames[:num_files]
print("Converting #%d files."%len(filenames))
for files in filenames:
mat2tfr(files)
dataset(_SOURCE_FILES, _NUM_FILES)
print('finished')
def plot_eeg(data):
plt.figure(figsize=(10,20))
for i in range(0,16):
plt.subplot(8,2,i+1)
plt.plot(data[:,i])
#plt.savefig('foo.pdf', bbox_inches='tight')
Explanation: Create TFRecords
End of explanation |
2,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orbital Elements
Note
Step1: Any components not passed automatically default to 0. REBOUND can also accept orbital elements.
Reference bodies
As a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). The differences between orbital elements referenced to these centers differ by $\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. This is a useful set for theoretical calculations, and gives a logical behavior as the mass ratio increases, e.g., in the case of a circumbinary planet. Let's set up a binary,
Step2: We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
Step3: This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class)
Step4: All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, you can always access any orbital element through, e.g., sim.particles[1].inc (see the diagram, and table of orbital elements under the Orbit structure at http
Step5: Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.
To get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class
Step6: though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).
When you access orbital elements individually, e.g., sim.particles[1].inc, you always get Jacobi elements. If you need to specify the primary, you have to do it with sim.calculate_orbit() as above.
Edge cases and orbital element sets
Different orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,
Step7: The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\theta$, the broken angle from the $x$ axis to the ascending node = $\Omega + \omega + f$, and then to the particle's position, is always well defined
Step8: To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
Step9: Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\varpi = \Omega + \omega$), which is the angle from the $x$ axis to pericenter
Step10: We can specify the pericenter of the orbit with either $\omega$ or $\varpi$
Step11: Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$.
Finally, we can specify the position of the particle along its orbit using mean (rather than true) longitudes or anomalies (for example, this might be useful for resonances). We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \lambda = \Omega + \omega + M$, which is analogous to $\theta$ above,
Step12: In summary, you can specify the phase of the orbit through any one of the angles f, f, theta or l=$\lambda$. Additionally, one can instead use the time of pericenter passage T. This time should be set in the appropriate time units, and you'd initialize sim.t to the appropriate time you want to start the simulation.
Accuracy
As a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by initializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly
Step13: We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results
Step14: Hyperbolic & Parabolic Orbits
REBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$
Step15: Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
Step16: Retrograde Orbits
Orbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example, | Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1., x=1., vz = 2.)
Explanation: Orbital Elements
Note: All angles for orbital elements are in radians
We can add particles to a simulation by specifying cartesian components:
End of explanation
sim.add(m=1., a=1.)
sim.status()
Explanation: Any components not passed automatically default to 0. REBOUND can also accept orbital elements.
Reference bodies
As a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). The differences between orbital elements referenced to these centers differ by $\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. This is a useful set for theoretical calculations, and gives a logical behavior as the mass ratio increases, e.g., in the case of a circumbinary planet. Let's set up a binary,
End of explanation
sim.add(m=1.e-3, a=100.)
Explanation: We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
End of explanation
sim.add(primary=sim.particles[1], a=0.01)
Explanation: This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class):
End of explanation
print(sim.particles[1].a)
orbits = sim.calculate_orbits()
for orbit in orbits:
print(orbit)
Explanation: All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, you can always access any orbital element through, e.g., sim.particles[1].inc (see the diagram, and table of orbital elements under the Orbit structure at http://rebound.readthedocs.org/en/latest/python_api.html). This will calculate that orbital element individually--you can calculate all the particles' orbital elements at once with sim.calculate_orbits(). REBOUND will always output angles in the range $[-\pi,\pi]$, except the inclination which is always in $[0,\pi]$.
End of explanation
print(sim.particles[3].calculate_orbit(primary=sim.particles[1]))
Explanation: Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.
To get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, omega=0.1)
print(sim.particles[1].orbit)
Explanation: though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).
When you access orbital elements individually, e.g., sim.particles[1].inc, you always get Jacobi elements. If you need to specify the primary, you have to do it with sim.calculate_orbit() as above.
Edge cases and orbital element sets
Different orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,
End of explanation
print(sim.particles[1].theta)
Explanation: The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\theta$, the broken angle from the $x$ axis to the ascending node = $\Omega + \omega + f$, and then to the particle's position, is always well defined:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, theta = 0.4)
print(sim.particles[1].theta)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, Omega=0.1)
print(sim.particles[1].orbit)
Explanation: To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
End of explanation
print(sim.particles[1].pomega)
Explanation: Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\varpi = \Omega + \omega$), which is the angle from the $x$ axis to pericenter:
End of explanation
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, pomega=0.1)
print(sim.particles[1].orbit)
Explanation: We can specify the pericenter of the orbit with either $\omega$ or $\varpi$:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, Omega=0.3, M = 0.1)
sim.add(a=1., e=0.1, Omega=0.3, l = 0.4)
print(sim.particles[1].l)
print(sim.particles[2].l)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, omega=1.)
print(sim.particles[1].orbit)
Explanation: Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$.
Finally, we can specify the position of the particle along its orbit using mean (rather than true) longitudes or anomalies (for example, this might be useful for resonances). We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \lambda = \Omega + \omega + M$, which is analogous to $\theta$ above,
End of explanation
import random
import numpy as np
def simulation(par):
e,f = par
e = 10**e
f = 10**f
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, f=f)
o=sim.particles[1].orbit
if o.f < 0: # avoid wrapping issues
o.f += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.f-f)/f)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
fs = np.linspace(-16.,-1.,N)
params = [(e,f) for e in es for f in fs]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import ticker
from matplotlib.colors import LogNorm
import matplotlib
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[fs.min(), fs.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true anomaly (f)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
Explanation: In summary, you can specify the phase of the orbit through any one of the angles f, M, theta or l=$\lambda$. Additionally, one can instead use the time of pericenter passage T. This time should be set in the appropriate time units, and you'd initialize sim.t to the appropriate time you want to start the simulation.
Accuracy
As a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by initializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly:
End of explanation
def simulation(par):
e,theta = par
e = 10**e
theta = 10**theta
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, theta=theta)
o=sim.particles[1].orbit
if o.theta < 0:
o.theta += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.theta-theta)/theta)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
thetas = np.linspace(-16.,-1.,N)
params = [(e,theta) for e in es for theta in thetas]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[thetas.min(), thetas.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true longitude (\theta)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
Explanation: We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results:
End of explanation
sim.add(a=-0.2, e=1.4)
sim.status()
Explanation: Hyperbolic & Parabolic Orbits
REBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
q = 0.1
a=-1.e14
e=1.+q/np.fabs(a)
sim.add(a=a, e=e)
print(sim.particles[1].orbit)
Explanation: Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1.,inc=np.pi,e=0.1, Omega=0., pomega=1.)
print(sim.particles[1].orbit)
Explanation: Retrograde Orbits
Orbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example,
End of explanation |
2,966 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have some data structured as below, trying to predict t from the features. | Problem:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
data = load_data()
scaler = StandardScaler()
scaler.fit(data)
scaled = scaler.transform(data)
inversed = scaler.inverse_transform(scaled) |
2,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Q-Network implementation
This notebook shamelessly demands you to implement a DQN - an approximate q-learning algorithm with experience replay and target networks - and see if it works any better this way.
Step1: Frameworks - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
Step4: Let's play some old videogames
This time we're gonna apply approximate q-learning to an atari game called Breakout. It's not the hardest thing out there, but it's definitely way more complex than anything we tried before.
Processing game image
Raw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn them.
We can thus save a lot of time by preprocessing game image, including
* Resizing to a smaller shape, 64 x 64
* Converting to grayscale
* Cropping irrelevant image parts (top & bottom)
Step5: Frame buffer
Our agent can only process one observation at a time, so we gotta make sure it contains enough information to find optimal actions. For instance, the agent has to react to moving objects, so it must be able to measure the object's velocity.
To do so, we introduce a buffer that stores 4 last images. This time everything is pre-implemented for you.
Step10: Building a network
We now need to build a neural network that can map images to state q-values. This network will be called on every agent's step so it better not be resnet-152 unless you have an array of GPUs. Instead, you can use strided convolutions with a small number of features to save time and memory.
You can build any architecture you want, but for reference, here's something that will more or less work
Step12: Now let's try out our agent to see if it raises any errors.
Step14: Experience replay
For this assignment, we provide you with experience replay buffer. If you implemented experience replay buffer in last week's assignment, you can copy-paste it here to get 2 bonus points.
The interface is fairly simple
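For reference, here is a rough usage sketch of that interface with placeholder variable names (the same add/sample/len calls appear in the test cell further down in this notebook):
exp_replay = ReplayBuffer(10**4)                          # buffer with a maximum capacity
exp_replay.add(obs, action, reward, next_obs, done)       # store one transition
obs_batch, act_batch, rw_batch, next_obs_batch, done_batch = exp_replay.sample(32)
len(exp_replay)                                           # number of transitions currently stored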
Step16: Target networks
We also employ the so called "target network" - a copy of neural network weights to be used for reference Q-values
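A minimal sketch of one way to do this weight copy in the TF-1 setup used below, assuming the DQNAgent class (with its .weights list) and the session defined later in this notebook; the notebook's own implementation may differ:
target_network = DQNAgent("target_network", state_dim, n_actions)

def load_weights_into_target_network(agent, target_network):
    # one tf.assign op per (online, target) variable pair
    return [tf.assign(w_target, w_agent)
            for w_agent, w_target in zip(agent.weights, target_network.weights)]

copy_step = load_weights_into_target_network(agent, target_network)
sess.run(copy_step)  # call periodically to refresh the target network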
Step17: Learning with... Q-learning
Here we write a function similar to agent.update from tabular q-learning.
Step18: Take q-values for actions agent just took
Step19: Compute Q-learning TD error
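A sketch of how the one-step Q-learning TD error can be written in TF-1, using assumed placeholder names (not necessarily the notebook's exact variables) and the target_network from the previous sketch:
obs_ph      = tf.placeholder(tf.float32, [None] + list(state_dim))
actions_ph  = tf.placeholder(tf.int32,   [None])
rewards_ph  = tf.placeholder(tf.float32, [None])
next_obs_ph = tf.placeholder(tf.float32, [None] + list(state_dim))
is_done_ph  = tf.placeholder(tf.float32, [None])
gamma = 0.99  # assumed discount factor

current_qvalues = agent.get_symbolic_qvalues(obs_ph)
current_action_qvalues = tf.reduce_sum(tf.one_hot(actions_ph, n_actions) * current_qvalues, axis=1)

next_qvalues_target = target_network.get_symbolic_qvalues(next_obs_ph)
next_state_values_target = tf.reduce_max(next_qvalues_target, axis=-1)
reference_qvalues = rewards_ph + gamma * next_state_values_target * (1. - is_done_ph)

# stop_gradient keeps the target network out of the gradient; minimize mean squared TD error
td_loss = tf.reduce_mean((current_action_qvalues - tf.stop_gradient(reference_qvalues)) ** 2)
train_step = tf.train.AdamOptimizer(1e-3).minimize(td_loss, var_list=agent.weights)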
Step20: Main loop
It's time to put everything together and see if it learns anything.
Step22: How to interpret plots
Step23: More
If you want to play with DQN a bit more, here's a list of things you can try with it | Python Code:
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
Explanation: Deep Q-Network implementation
This notebook shamelessly demands you to implement a DQN - an approximate q-learning algorithm with experience replay and target networks - and see if it works any better this way.
End of explanation
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Frameworks - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
End of explanation
from gym.core import ObservationWrapper
from gym.spaces import Box
from scipy.misc import imresize
from skimage import color
class PreprocessAtari(ObservationWrapper):
def __init__(self, env):
        """A gym wrapper that crops, scales image into the desired shapes and optionally grayscales it."""
ObservationWrapper.__init__(self,env)
self.img_size = (64, 64)
self.observation_space = Box(0.0, 1.0, (self.img_size[0], self.img_size[1], 1))
def observation(self, img):
        """what happens to each observation"""
# Here's what you need to do:
# * crop image, remove irrelevant parts
# * resize image to self.img_size
# (use imresize imported above or any library you want,
# e.g. opencv, skimage, PIL, keras)
# * cast image to grayscale
# * convert image pixels to (0,1) range, float32 type
#top = self.img_size[0]
#bottom = self.img_size[0] - 18
#left = self.img_size[1]
#right = self.img_size[1] - left
#crop = img[top:bottom, left:right, :]
#print(top, bottom, left, right)
img2 = imresize(img, self.img_size)
img2 = color.rgb2gray(img2)
s = (img2.shape[0], img2.shape[1], 1 )
img2 = img2.reshape(s)
img2 = img2.astype('float32') / img2.max()
return img2
import gym
#spawn game instance for tests
env = gym.make("BreakoutDeterministic-v0") #create raw env
env = PreprocessAtari(env)
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
obs = env.reset()
#test observation
assert obs.ndim == 3, "observation must be [batch, time, channels] even if there's just one channel"
assert obs.shape == observation_shape
assert obs.dtype == 'float32'
assert len(np.unique(obs))>2, "your image must not be binary"
assert 0 <= np.min(obs) and np.max(obs) <=1, "convert image pixels to (0,1) range"
print("Formal tests seem fine. Here's an example of what you'll get.")
plt.title("what your network gonna see")
plt.imshow(obs[:, :, 0], interpolation='none',cmap='gray');
Explanation: Let's play some old videogames
This time we're gonna apply approximate q-learning to an atari game called Breakout. It's not the hardest thing out there, but it's definitely way more complex than anything we tried before.
Processing game image
Raw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn them.
We can thus save a lot of time by preprocessing game image, including
* Resizing to a smaller shape, 64 x 64
* Converting to grayscale
* Cropping irrelevant image parts (top & bottom)
End of explanation
from framebuffer import FrameBuffer
def make_env():
env = gym.make("BreakoutDeterministic-v4")
env = PreprocessAtari(env)
env = FrameBuffer(env, n_frames=4, dim_order='tensorflow')
return env
env = make_env()
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
for _ in range(50):
obs, _, _, _ = env.step(env.action_space.sample())
plt.title("Game image")
plt.imshow(env.render("rgb_array"))
plt.show()
plt.title("Agent observation (4 frames left to right)")
plt.imshow(obs.transpose([0,2,1]).reshape([state_dim[0],-1]));
Explanation: Frame buffer
Our agent can only process one observation at a time, so we gotta make sure it contains enough information to find optimal actions. For instance, the agent has to react to moving objects, so it must be able to measure the object's velocity.
To do so, we introduce a buffer that stores 4 last images. This time everything is pre-implemented for you.
End of explanation
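The FrameBuffer class ships with the assignment (framebuffer.py) and is not shown here. Purely for illustration, a frame-stacking wrapper could look roughly like the sketch below; the class name, the channel-stacking order and the handling of dim_order='tensorflow' are assumptions, and the provided implementation may differ in details.
# Illustrative sketch only -- not the assignment's FrameBuffer. It stacks the
# last n_frames observations along the channel axis ('tensorflow' dim order).
import numpy as np
from gym.core import ObservationWrapper
from gym.spaces import Box
class SimpleFrameBuffer(ObservationWrapper):
    def __init__(self, env, n_frames=4):
        ObservationWrapper.__init__(self, env)
        h, w, c = env.observation_space.shape
        self.observation_space = Box(0.0, 1.0, (h, w, c * n_frames))
        self.framebuffer = np.zeros(self.observation_space.shape, dtype='float32')
    def reset(self):
        # Start every episode with an empty (all-zero) buffer.
        self.framebuffer = np.zeros_like(self.framebuffer)
        return self.observation(self.env.reset())
    def observation(self, obs):
        # Drop the oldest frame, append the newest one at the end.
        c = obs.shape[-1]
        self.framebuffer = np.concatenate([self.framebuffer[:, :, c:], obs], axis=-1)
        return self.framebuffer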
import tensorflow as tf
tf.reset_default_graph()
sess = tf.InteractiveSession()
from keras.layers import Conv2D, Dense, Flatten
from keras.models import Sequential
class DQNAgent:
def __init__(self, name, state_shape, n_actions, epsilon=0, reuse=False):
A simple DQN agent
with tf.variable_scope(name, reuse=reuse):
#< Define your network body here. Please make sure you don't use any layers created elsewhere >
self.network = Sequential()
self.network.add(Conv2D(filters=16, kernel_size=(3, 3), strides=(2, 2), activation='relu'))
self.network.add(Conv2D(filters=32, kernel_size=(3, 3), strides=(2, 2), activation='relu'))
self.network.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2), activation='relu'))
self.network.add(Flatten())
self.network.add(Dense(256, activation='relu'))
self.network.add(Dense(n_actions, activation='linear'))
# prepare a graph for agent step
self.state_t = tf.placeholder('float32', [None,] + list(state_shape))
self.qvalues_t = self.get_symbolic_qvalues(self.state_t)
self.weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=name)
self.epsilon = epsilon
def get_symbolic_qvalues(self, state_t):
takes agent's observation, returns qvalues. Both are tf Tensors
#< apply your network layers here >
qvalues = self.network(state_t) #< symbolic tensor for q-values >
assert tf.is_numeric_tensor(qvalues) and qvalues.shape.ndims == 2, \
"please return 2d tf tensor of qvalues [you got %s]" % repr(qvalues)
assert int(qvalues.shape[1]) == n_actions
return qvalues
def get_qvalues(self, state_t):
Same as symbolic step except it operates on numpy arrays
sess = tf.get_default_session()
return sess.run(self.qvalues_t, {self.state_t: state_t})
def sample_actions(self, qvalues):
pick actions given qvalues. Uses epsilon-greedy exploration strategy.
epsilon = self.epsilon
batch_size, n_actions = qvalues.shape
random_actions = np.random.choice(n_actions, size=batch_size)
best_actions = qvalues.argmax(axis=-1)
should_explore = np.random.choice([0, 1], batch_size, p = [1-epsilon, epsilon])
return np.where(should_explore, random_actions, best_actions)
agent = DQNAgent("dqn_agent", state_dim, n_actions, epsilon=0.5)
sess.run(tf.global_variables_initializer())
Explanation: Building a network
We now need to build a neural network that can map images to state q-values. This network will be called on every agent's step so it better not be resnet-152 unless you have an array of GPUs. Instead, you can use strided convolutions with a small number of features to save time and memory.
You can build any architecture you want, but for reference, here's something that will more or less work:
End of explanation
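Before testing the agent, an optional sanity check (not part of the assignment) is to count its trainable parameters and confirm the network is as small as intended:
# Optional: total number of trainable parameters in the agent network.
n_params = sum(np.prod(w.shape.as_list()) for w in agent.weights)
print('trainable parameters:', int(n_params))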
def evaluate(env, agent, n_games=1, greedy=False, t_max=10000):
Plays n_games full games. If greedy, picks actions as argmax(qvalues). Returns mean reward.
rewards = []
for _ in range(n_games):
s = env.reset()
reward = 0
for _ in range(t_max):
qvalues = agent.get_qvalues([s])
action = qvalues.argmax(axis=-1)[0] if greedy else agent.sample_actions(qvalues)[0]
s, r, done, _ = env.step(action)
reward += r
if done:
break
rewards.append(reward)
return np.mean(rewards)
evaluate(env, agent, n_games=1)
Explanation: Now let's try out our agent to see if it raises any errors.
End of explanation
from replay_buffer import ReplayBuffer
exp_replay = ReplayBuffer(10)
for _ in range(30):
exp_replay.add(env.reset(), env.action_space.sample(), 1.0, env.reset(), done=False)
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(5)
assert len(exp_replay) == 10, "experience replay size should be 10 because that's what maximum capacity is"
def play_and_record(agent, env, exp_replay, n_steps=1):
Play the game for exactly n steps, record every (s,a,r,s', done) to replay buffer.
Whenever game ends, add record with done=True and reset the game.
:returns: return sum of rewards over time
Note: please do not env.reset() unless env is done.
It is guaranteed that env has done=False when passed to this function.
# State at the beginning of rollout
s = env.framebuffer
greedy = True
# Play the game for n_steps as per instructions above
#<YOUR CODE>
last_info = None
total_reward = 0
for _ in range(n_steps):
qvalues = agent.get_qvalues([s])
a = agent.sample_actions(qvalues)[0]
next_s, r, done, info = env.step(a)
r = -10 if ( last_info is not None and last_info['ale.lives'] > info['ale.lives'] ) else r
last_info = info
# Experience Replay
exp_replay.add(s, a, r, next_s, done)
total_reward += r
s = next_s
if done:
s = env.reset()
return total_reward
# testing your code. This may take a minute...
exp_replay = ReplayBuffer(20000)
play_and_record(agent, env, exp_replay, n_steps=10000)
# if you're using your own experience replay buffer, some of those tests may need correction.
# just make sure you know what your code does
assert len(exp_replay) == 10000, "play_and_record should have added exactly 10000 steps, "\
"but instead added %i"%len(exp_replay)
is_dones = list(zip(*exp_replay._storage))[-1]
assert 0 < np.mean(is_dones) < 0.1, "Please make sure you restart the game whenever it is 'done' and record the is_done correctly into the buffer."\
"Got %f is_done rate over %i steps. [If you think it's your tough luck, just re-run the test]"%(np.mean(is_dones), len(exp_replay))
for _ in range(100):
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(10)
assert obs_batch.shape == next_obs_batch.shape == (10,) + state_dim
assert act_batch.shape == (10,), "actions batch should have shape (10,) but is instead %s"%str(act_batch.shape)
assert reward_batch.shape == (10,), "rewards batch should have shape (10,) but is instead %s"%str(reward_batch.shape)
assert is_done_batch.shape == (10,), "is_done batch should have shape (10,) but is instead %s"%str(is_done_batch.shape)
assert all(int(i) in (0, 1) for i in is_dones), "is_done should be strictly True or False"
assert all(0 <= a < n_actions for a in act_batch), "actions should be within [0, n_actions)"
print("Well done!")
Explanation: Experience replay
For this assignment, we provide you with experience replay buffer. If you implemented experience replay buffer in last week's assignment, you can copy-paste it here to get 2 bonus points.
The interface is fairly simple:
exp_replay.add(obs, act, rw, next_obs, done) - saves (s,a,r,s',done) tuple into the buffer
exp_replay.sample(batch_size) - returns observations, actions, rewards, next_observations and is_done for batch_size random samples.
len(exp_replay) - returns number of elements stored in replay buffer.
End of explanation
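If you would rather write your own buffer than copy-paste last week's, a bare-bones version matching this interface could look roughly like the sketch below. It is illustrative only (the _storage attribute is assumed so that the tests above keep working); the provided replay_buffer.ReplayBuffer is what the rest of this notebook uses.
# Minimal sketch of a replay buffer with the interface described above.
import random
from collections import deque
class MinimalReplayBuffer:
    def __init__(self, capacity):
        self._storage = deque(maxlen=capacity)  # old items are evicted automatically
    def __len__(self):
        return len(self._storage)
    def add(self, obs, action, reward, next_obs, done):
        self._storage.append((obs, action, reward, next_obs, done))
    def sample(self, batch_size):
        # Sample with replacement and unpack into per-field arrays.
        batch = random.choices(self._storage, k=batch_size)
        obs, actions, rewards, next_obs, dones = map(np.array, zip(*batch))
        return obs, actions, rewards, next_obs, dones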
target_network = DQNAgent("target_network", state_dim, n_actions)
def load_weigths_into_target_network(agent, target_network):
assign target_network.weights variables to their respective agent.weights values.
assigns = []
for w_agent, w_target in zip(agent.weights, target_network.weights):
assigns.append(tf.assign(w_target, w_agent, validate_shape=True))
tf.get_default_session().run(assigns)
load_weigths_into_target_network(agent, target_network)
# check that it works
sess.run([tf.assert_equal(w, w_target) for w, w_target in zip(agent.weights, target_network.weights)]);
print("It works!")
Explanation: Target networks
We also employ the so called "target network" - a copy of neural network weights to be used for reference Q-values:
The network itself is an exact copy of the agent network, but its parameters are not trained. Instead, they are copied over from the agent's actual network every so often.
$$ Q_{reference}(s,a) = r + \gamma \cdot \max_{a'} Q_{target}(s',a') $$
End of explanation
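As a side note on this design choice: instead of copying the weights every few hundred steps, some DQN variants blend the target towards the online network a little on every step ("soft" / Polyak updates). A sketch of how that could be wired with the objects defined above (not used in this notebook; tau=0.01 is just an illustrative value):
# Illustrative alternative to the periodic hard copy used below (not used here).
def make_soft_update_op(agent, target_network, tau=0.01):
    assigns = [tf.assign(w_target, (1 - tau) * w_target + tau * w_agent)
               for w_agent, w_target in zip(agent.weights, target_network.weights)]
    return tf.group(*assigns)
# soft_update = make_soft_update_op(agent, target_network)
# sess.run(soft_update)  # would be run every training step instead of a hard copy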
# placeholders that will be fed with exp_replay.sample(batch_size)
obs_ph = tf.placeholder(tf.float32, shape=(None,) + state_dim)
actions_ph = tf.placeholder(tf.int32, shape=[None])
rewards_ph = tf.placeholder(tf.float32, shape=[None])
next_obs_ph = tf.placeholder(tf.float32, shape=(None,) + state_dim)
is_done_ph = tf.placeholder(tf.float32, shape=[None])
is_not_done = 1 - is_done_ph
gamma = 0.99
Explanation: Learning with... Q-learning
Here we write a function similar to agent.update from tabular q-learning.
End of explanation
current_qvalues = agent.get_symbolic_qvalues(obs_ph)
current_action_qvalues = tf.reduce_sum(tf.one_hot(actions_ph, n_actions) * current_qvalues, axis=1)
Explanation: Take q-values for actions agent just took
End of explanation
# compute q-values for NEXT states with target network
next_qvalues_target = target_network.get_symbolic_qvalues(next_obs_ph) #<your code>
# compute state values by taking max over next_qvalues_target for all actions
next_state_values_target = tf.reduce_max(next_qvalues_target, axis=1) #<YOUR CODE>
# compute Q_reference(s,a) as per formula above.
reference_qvalues = rewards_ph + gamma * next_state_values_target * is_not_done  # zero the bootstrap term for terminal transitions
# Define loss function for sgd.
td_loss = (current_action_qvalues - reference_qvalues) ** 2
td_loss = tf.reduce_mean(td_loss)
train_step = tf.train.AdamOptimizer(1e-3).minimize(td_loss, var_list=agent.weights)
sess.run(tf.global_variables_initializer())
for chk_grad in tf.gradients(reference_qvalues, agent.weights):
error_msg = "Reference q-values should have no gradient w.r.t. agent weights. Make sure you used target_network qvalues! "
error_msg += "If you know what you're doing, ignore this assert."
assert chk_grad is None or np.allclose(sess.run(chk_grad), sess.run(chk_grad * 0)), error_msg
#assert tf.gradients(reference_qvalues, is_not_done)[0] is not None, "make sure you used is_not_done"
assert tf.gradients(reference_qvalues, rewards_ph)[0] is not None, "make sure you used rewards"
assert tf.gradients(reference_qvalues, next_obs_ph)[0] is not None, "make sure you used next states"
assert tf.gradients(reference_qvalues, obs_ph)[0] is None, "reference qvalues shouldn't depend on current observation!" # ignore if you're certain it's ok
print("Splendid!")
Explanation: Compute Q-learning TD error:
$$ L = { 1 \over N} \sum_i [ Q_{\theta}(s,a) - Q_{reference}(s,a) ] ^2 $$
With Q-reference defined as
$$ Q_{reference}(s,a) = r(s,a) + \gamma \cdot \max_{a'} Q_{target}(s', a') $$
Where
* $Q_{target}(s',a')$ denotes q-value of next state and next action predicted by target_network
* $s, a, r, s'$ are current state, action, reward and next state respectively
* $\gamma$ is a discount factor defined two cells above.
End of explanation
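A common robustness tweak (not required for this assignment) is to swap the squared TD error for the Huber loss, which behaves like MSE for small errors but grows only linearly for large ones. A sketch using the tensors defined above; the variable names here are hypothetical:
# Illustrative alternative objective (the notebook trains with td_loss above).
huber_td_loss = tf.losses.huber_loss(
    labels=tf.stop_gradient(reference_qvalues),
    predictions=current_action_qvalues)
# train_step_huber = tf.train.AdamOptimizer(1e-3).minimize(huber_td_loss, var_list=agent.weights)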
from tqdm import trange
from IPython.display import clear_output
import matplotlib.pyplot as plt
from pandas import DataFrame
moving_average = lambda x, span, **kw: DataFrame({'x':np.asarray(x)}).x.ewm(span=span, **kw).mean().values
%matplotlib inline
mean_rw_history = []
td_loss_history = []
#exp_replay = ReplayBuffer(10**5)
exp_replay = ReplayBuffer(10**4)
play_and_record(agent, env, exp_replay, n_steps=10000)
def sample_batch(exp_replay, batch_size):
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(batch_size)
return {
obs_ph:obs_batch, actions_ph:act_batch, rewards_ph:reward_batch,
next_obs_ph:next_obs_batch, is_done_ph:is_done_batch
}
for i in trange(10**5):
# play
play_and_record(agent, env, exp_replay, 10)
# train
_, loss_t = sess.run([train_step, td_loss], sample_batch(exp_replay, batch_size=64))
td_loss_history.append(loss_t)
# adjust agent parameters
if i % 500 == 0:
load_weigths_into_target_network(agent, target_network)
agent.epsilon = max(agent.epsilon * 0.99, 0.01)
mean_rw_history.append(evaluate(make_env(), agent, n_games=3))
if i % 100 == 0:
clear_output(True)
print("buffer size = %i, epsilon = %.5f" % (len(exp_replay), agent.epsilon))
plt.subplot(1,2,1)
plt.title("mean reward per game")
plt.plot(mean_rw_history)
plt.grid()
assert not np.isnan(loss_t)
plt.figure(figsize=[12, 4])
plt.subplot(1,2,2)
plt.title("TD loss history (moving average)")
plt.plot(moving_average(np.array(td_loss_history), span=100, min_periods=100))
plt.grid()
plt.show()
if np.mean(mean_rw_history[-10:]) > 10.:
break
assert np.mean(mean_rw_history[-10:]) > 10.
print("That's good enough for tutorial.")
Explanation: Main loop
It's time to put everything together and see if it learns anything.
End of explanation
agent.epsilon=0 # Don't forget to reset epsilon back to previous value if you want to go on training
#record sessions
import gym.wrappers
env_monitor = gym.wrappers.Monitor(make_env(),directory="videos",force=True)
sessions = [evaluate(env_monitor, agent, n_games=1) for _ in range(100)]
env_monitor.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
  <source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1]))  # this may or may not be the _last_ video. Try other indices
Explanation: How to interpret plots:
This ain't supervised learning, so don't expect anything to improve monotonically.
* TD loss is the MSE between agent's current Q-values and target Q-values. It may slowly increase or decrease, it's ok. The "not ok" behavior includes going NaN or staying at exactly zero before the agent has perfect performance.
* mean reward is the expected sum of r(s,a) agent gets over the full game session. It will oscillate, but on average it should get higher over time (after a few thousand iterations...).
* In basic q-learning implementation it takes 5-10k steps to "warm up" agent before it starts to get better.
* buffer size - this one is simple. It should go up and cap at max size.
* epsilon - agent's willingness to explore. If you see that the agent is already at 0.01 epsilon before its average reward is above 0, it means you need to increase epsilon. Set it back to some 0.2 - 0.5 and decrease the pace at which it goes down.
* Also please ignore first 100-200 steps of each plot - they're just oscillations because of the way moving average works.
At first your agent will lose quickly. Then it will learn to suck less and at least hit the ball a few times before it loses. Finally it will learn to actually score points.
Training will take time. A lot of it actually. An optimistic estimate is to say it's gonna start winning (average reward > 10) after 10k steps.
But hey, look on the bright side of things:
Video
End of explanation
from submit import submit_breakout
env = make_env()
submit_breakout(agent, env, evaluate, "[email protected]", "CGeZfHhZ10uqx4Ud")
Explanation: More
If you want to play with DQN a bit more, here's a list of things you can try with it:
Easy:
Implementing double q-learning shouldn't be a problem if you already have target networks in place.
You will probably need tf.argmax to select best actions
Here's an original article
Dueling architecture is also quite straightforward if you have standard DQN.
You will need to change network architecture, namely the q-values layer
It must now contain two heads: V(s) and A(s,a), both dense layers
You should then add them up via elemwise sum layer.
Here's an article
Hard: Prioritized experience replay
In this section, you're invited to implement prioritized experience replay
You will probably need to provide a custom data structure
Once pool.update is called, collect the pool.experience_replay.observations, actions, rewards and is_alive and store them in your data structure
You can now sample such transitions in proportion to the error (see article) for training.
It's probably more convenient to explicitly declare inputs for "sample observations", "sample actions" and so on to plug them into q-learning.
Prioritized (and even normal) experience replay should greatly reduce the number of game sessions you need to play in order to achieve good performance.
While its effect on runtime is limited for atari, more complicated envs (further in the course) will certainly benefit from it.
There is even more out there - see this overview article.
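For the double Q-learning variant mentioned above, a rough sketch of the modified target, reusing the tensors already defined in this notebook (agent, target_network, next_qvalues_target, next_obs_ph, rewards_ph, gamma), could look like this; treat it as illustrative rather than tested code:
# Double DQN target: pick the next action with the online network, evaluate it
# with the target network.
next_qvalues_agent = agent.get_symbolic_qvalues(next_obs_ph)
best_next_actions = tf.argmax(next_qvalues_agent, axis=1)
double_q_values = tf.reduce_sum(tf.one_hot(best_next_actions, n_actions) * next_qvalues_target, axis=1)
double_q_reference = rewards_ph + gamma * double_q_values * is_not_done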
End of explanation |
2,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Morph volumetric source estimate
This example demonstrates how to morph an individual subject's
Step1: Setup paths
Step2: Compute example data. For reference see
sphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py
Load data
Step3: Get a SourceMorph object for VolSourceEstimate
subject_from can typically be inferred from
Step4: Apply morph to VolSourceEstimate
The morph can be applied to the source estimate data, by giving it as the
first argument to the
Step5: Convert morphed VolSourceEstimate into NIfTI
We can convert our morphed source estimate into a NIfTI volume using
Step6: Plot results
Step7: Reading and writing SourceMorph from and to disk
An instance of SourceMorph can be saved, by calling | Python Code:
# Author: Tommy Clausner <[email protected]>
#
# License: BSD (3-clause)
import os
import nibabel as nib
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.plotting import plot_glass_brain
print(__doc__)
Explanation: Morph volumetric source estimate
This example demonstrates how to morph an individual subject's
:class:mne.VolSourceEstimate to a common reference space. We achieve this
using :class:mne.SourceMorph. Pre-computed data will be morphed based on
an affine transformation and a nonlinear registration method
known as Symmetric Diffeomorphic Registration (SDR) by Avants et al. [1]_.
Transformation is estimated from the subject's anatomical T1 weighted MRI
(brain) to FreeSurfer's 'fsaverage' T1 weighted MRI (brain)
<https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage>__.
Afterwards the transformation will be applied to the volumetric source
estimate. The result will be plotted, showing the fsaverage T1 weighted
anatomical MRI, overlaid with the morphed volumetric source estimate.
References
.. [1] Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2009).
Symmetric Diffeomorphic Image Registration with Cross-Correlation:
Evaluating Automated Labeling of Elderly and Neurodegenerative
Brain. Medical Image Analysis, 12(1), 26-41.
<div class="alert alert-info"><h4>Note</h4><p>For a tutorial about morphing see `ch_morph`.</p></div>
End of explanation
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')
fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')
fname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri',
'brain.mgz')
Explanation: Setup paths
End of explanation
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
# Apply inverse operator
stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM")
# To save time
stc.crop(0.09, 0.09)
Explanation: Compute example data. For reference see
sphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py
Load data:
End of explanation
morph = mne.compute_source_morph(inverse_operator['src'],
subject_from='sample', subject_to='fsaverage',
subjects_dir=subjects_dir)
Explanation: Get a SourceMorph object for VolSourceEstimate
subject_from can typically be inferred from
:class:src <mne.SourceSpaces>,
and subject_to is set to 'fsaverage' by default. subjects_dir can be
None when set in the environment. In that case SourceMorph can be initialized
taking src as only argument. See :class:mne.SourceMorph for more
details.
The default parameter setting for spacing will cause the reference volumes
to be resliced before computing the transform. A value of '5' would cause
the function to reslice to an isotropic voxel size of 5 mm. The higher this
value the less accurate but faster the computation will be.
A standard usage for volumetric data reads:
End of explanation
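As an illustration of the spacing parameter discussed above (recent MNE versions expose it as zooms), a coarser and therefore faster morph could be estimated as in the sketch below; the value 5. is only an example:
# Illustrative only: reslice the reference volumes to 5 mm isotropic voxels,
# trading accuracy for speed (keyword follows the text above; newer releases
# may call it `zooms`).
morph_coarse = mne.compute_source_morph(
    inverse_operator['src'], subject_from='sample', subject_to='fsaverage',
    spacing=5., subjects_dir=subjects_dir)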
stc_fsaverage = morph.apply(stc)
Explanation: Apply morph to VolSourceEstimate
The morph can be applied to the source estimate data, by giving it as the
first argument to the :meth:morph.apply() <mne.SourceMorph.apply> method:
End of explanation
# Create mri-resolution volume of results
img_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1')
Explanation: Convert morphed VolSourceEstimate into NIfTI
We can convert our morphed source estimate into a NIfTI volume using
:meth:morph.apply(..., output='nifti1') <mne.SourceMorph.apply>.
End of explanation
# Load fsaverage anatomical image
t1_fsaverage = nib.load(fname_t1_fsaverage)
# Plot glass brain (change to plot_anat to display an overlaid anatomical T1)
display = plot_glass_brain(t1_fsaverage,
title='subject results to fsaverage',
draw_cross=False,
annotate=True)
# Add functional data as overlay
display.add_overlay(img_fsaverage, alpha=0.75)
Explanation: Plot results
End of explanation
stc_fsaverage_new = mne.compute_source_morph(
inverse_operator['src'], subject_from='sample',
subjects_dir=subjects_dir).apply(stc)
Explanation: Reading and writing SourceMorph from and to disk
An instance of SourceMorph can be saved, by calling
:meth:morph.save <mne.SourceMorph.save>.
This method allows for specification of a filename under which the morph
will be saved in ".h5" format. If no file extension is provided, "-morph.h5"
will be appended to the respective defined filename::
>>> morph.save('my-file-name')
Reading a saved source morph can be achieved by using
:func:mne.read_source_morph::
>>> morph = mne.read_source_morph('my-file-name-morph.h5')
Once the environment is set up correctly, no information such as
subject_from or subjects_dir needs to be provided, since it can be
inferred from the data and used to morph to 'fsaverage' by default. SourceMorph
can further be used without creating an instance and assigning it to a
variable. Instead :func:mne.compute_source_morph and
:meth:mne.SourceMorph.apply can be
easily chained into a handy one-liner. Taking this together the shortest
possible way to morph data directly would be:
End of explanation |
2,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
In this tutorial we'll explore the two main pieces of functionality that
HSMMLearn provides
Step1: Decoding
For this part of the tutorial we'll use an HSMM with 3 internal states and 4 durations 1, ..., 4. We'll stick to using Gaussian emissions throughout (i.e. the probability density of the observations given the state is a 1D Gaussian with a fixed mean and standard deviation), but discrete observations (or observations modeled on another class of PDF) can be slotted in equally easily.
The transition matrix is chosen so that there are no self-transitions (all the diagonal elements are 0). This forces the system to go to a different state at the end of each duration. This is not strictly necessary, but allows us to visualize the effect of the different durations a little better.
The duration matrix is chosen so that for state 1, there is a 90% chance of having the duration equal to 4, and 10% duration 1. For state 2 and 3, there is a 90% probability of having duration 3, resp. 2. As the duration lengths don't differ much, this won't be very visible, but for sparse duration distributions with many durations it does make a difference.
For the emission PDFs, we choose clearly separated Gaussians, with means 0, 5, and 10, and standard deviation equal to 1 in each case.
Step2: Having initialized the Gaussian HSMM, let's sample from it to obtain a sequence of internal states and observations
Step3: In the figure, the red line is the mean of the PDF that was selected for that internal state, and the blue line shows the observations. As is expected, the observations are clustered around the mean for each state.
Assuming now that we only have the observations, and we want to reconstruct, or decode, the most likely internal states for those observations. This can be done by means of the classical Viterbi algorithm, which has an extension for HSMMs.
Step4: Given that our Gaussian peaks are so clearly separated, the Viterbi decoder manages to reconstruct the entire state sequence correctly. No surprise, and not very exciting.
Step5: Things become a little more interesting when the peaks are not so well separated. In the example below, we move the mean of the second Gaussian up to 8.0 (up from 5.0), and then we sample and decode again. We also set the duration distribution to something a little more uniform, just to make things harder on the decoder (it turns out that otherwise the decoder is able to infer the internal state sequence almost completely on the basis of the inferred durations alone).
Step6: In the figure, the red line again shows the mean of the original sequence of internal states, while the gray line (offset by 0.5 to avoid overlapping with the rest of the plot) shows the reconstructed sequence. They track each other pretty faithfully, except in areas where the observations give not much information about the internal state.
Aside
Step7: This class can be used in much the same way as the more practical GaussianHSMM. However, it lacks some convenience attributes (.means, .scales, ...) that GaussianHSMM does have.
To create an HSMM with a different class of emission PDFs, it suffices to derive from hsmmlearn.emissions.AbstractEmissions and supply the required functionality there. This is an abstract base class with a couple of abstract methods that need to be overridden in the concrete class. If you require only some of the functionality, you can override the methods that you don't need with an empty function. To see this in practice, let's create an HSMM with Laplace emission PDFs. Below, we override sample_for_state because we want to sample from the HSMM, and likelihood, needed to run Viterbi (not demonstrated).
Step8: Let's check that this defines indeed an HSMM with Laplacian output PDFs
Step9: Looks indeed like a Laplacian distribution!
Model inference
In the next section, we'll tackle the "third question" outlined by Rabiner
Step10: The plot shows clearly that the durations are separated
Step11: Using the .fit method, we'll run the expectation-maximization algorithm on our new duration-agnostic HSMM. This will adjust the parameters of the HSMM in place, to best match the given observations.
Step12: If we examine the adjusted duration distribution, we see that this reproduces to some extent the original distribution, with very pronounced probabilities for duration 2 in state 0, duration 6 in state 1, and duration 10 in state 2.
Step13: In tandem with the duration distributions, the other parameters have also changed. The transition matrix has become a little more pronounced to emphasize the transitions between different states, and the locations and scales of the emission PDFs have shifted a bit. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Tutorial
In this tutorial we'll explore the two main pieces of functionality that
HSMMLearn provides:
Viterbi decoding: given a sequence of observations, find the sequence of
hidden states that maximizes the joint probability of the observations and
the states.
Baum-Welch model inference: given a sequence of observations, find the model
parameters (the transition and emission matrices, and the duration
distributions) that maximize the probability of the observations given the
model.
In case you are reading this as part of the Sphinx documentation, there is an IPython 4 notebook with the contents of this tutorial in the notebooks/ folder at the root of the repository.
End of explanation
from hsmmlearn.hsmm import GaussianHSMM
durations = np.array([
[0.1, 0.0, 0.0, 0.9],
[0.1, 0.0, 0.9, 0.0],
[0.1, 0.9, 0.0, 0.0]
])
tmat = np.array([
[0.0, 0.5, 0.5],
[0.3, 0.0, 0.7],
[0.6, 0.4, 0.0]
])
means = np.array([0.0, 5.0, 10.0])
scales = np.ones_like(means)
hsmm = GaussianHSMM(
means, scales, durations, tmat,
)
Explanation: Decoding
For this part of the tutorial we'll use an HSMM with 3 internal states and 4 durations 1, ..., 4. We'll stick to using Gaussian emissions throughout (i.e. the probability density of the observations given the state is a 1D Gaussian with a fixed mean and standard deviation), but discrete observations (or observations modeled on another class of PDF) can be slotted in equally easily.
The transition matrix is chosen so that there are no self-transitions (all the diagonal elements are 0). This forces the system to go to a different state at the end of each duration. This is not strictly necessary, but allows us to visualize the effect of the different durations a little better.
The duration matrix is chosen so that for state 1, there is a 90% chance of having the duration equal to 4, and 10% duration 1. For state 2 and 3, there is a 90% probability of having duration 3, resp. 2. As the duration lengths don't differ much, this won't be very visible, but for sparse duration distributions with many durations it does make a difference.
For the emission PDFs, we choose clearly separated Gaussians, with means 0, 5, and 10, and standard deviation equal to 1 in each case.
End of explanation
observations, states = hsmm.sample(300)
print(states[:20])
print(observations[:20])
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(means[states], 'r', linewidth=2, alpha=.8)
ax.plot(observations)
Explanation: Having initialized the Gaussian HSMM, let's sample from it to obtain a sequence of internal states and observations:
End of explanation
decoded_states = hsmm.decode(observations)
Explanation: In the figure, the red line is the mean of the PDF that was selected for that internal state, and the blue line shows the observations. As is expected, the observations are clustered around the mean for each state.
Assume now that we only have the observations and that we want to reconstruct, or decode, the most likely internal states for those observations. This can be done by means of the classical Viterbi algorithm, which has an extension for HSMMs.
End of explanation
np.sum(states != decoded_states) # Number of differences between the original and the decoded states
Explanation: Given that our Gaussian peaks are so clearly separated, the Viterbi decoder manages to reconstruct the entire state sequence correctly. No surprise, and not very exciting.
End of explanation
new_means = np.array([0.0, 8.0, 10.0])
hsmm.durations = np.full((3, 4), 0.25)
hsmm.means = new_means
observations, states = hsmm.sample(200)
decoded_states = hsmm.decode(observations)
np.sum(states != decoded_states)
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(new_means[states], 'r', linewidth=2, alpha=.8)
ax.plot(new_means[decoded_states] - 0.5, 'k', linewidth=2, alpha=.5)
ax.plot(observations)
Explanation: Things become a little more interesting when the peaks are not so well separated. In the example below, we move the mean of the second Gaussian up to 8.0 (up from 5.0), and then we sample and decode again. We also set the duration distribution to something a little more uniform, just to make things harder on the decoder (it turns out that otherwise the decoder is able to infer the internal state sequence almost completely on the basis of the inferred durations alone).
End of explanation
from hsmmlearn.hsmm import HSMMModel
from hsmmlearn.emissions import GaussianEmissions
gaussian_hsmm = HSMMModel(
GaussianEmissions(means, scales), durations, tmat
)
Explanation: In the figure, the red line again shows the mean of the original sequence of internal states, while the gray line (offset by 0.5 to avoid overlapping with the rest of the plot) shows the reconstructed sequence. They track each other pretty faithfully, except in areas where the observations don't give much information about the internal state.
Aside: different emission PDFs
In the previous example, we worked throughout with an instance of the class GaussianHSMM. This is a convenience wrapper around a more general class hsmmlearn.hsmm.HSMMModel, which allows for more general emission PDFs via Python descriptors. To see how this works, let's first re-instantiate our Gaussian HSMM via HSMMModel:
End of explanation
from scipy.stats import laplace
from hsmmlearn.emissions import AbstractEmissions
# Note: this is almost identical to the GaussianEmissions class,
# the only difference being that we replaced the Gaussian RV (norm)
# with a Laplacian RV (laplace).
class LaplaceEmissions(AbstractEmissions):
# Note: this property is a hack, and will become unnecessary soon!
dtype = np.float64
def __init__(self, means, scales):
self.means = means
self.scales = scales
def likelihood(self, obs):
obs = np.squeeze(obs)
# TODO: build in some check for the shape of the likelihoods, otherwise
# this will silently fail and give the wrong results.
return laplace.pdf(obs,
loc=self.means[:, np.newaxis],
scale=self.scales[:, np.newaxis])
def sample_for_state(self, state, size=None):
return laplace.rvs(self.means[state], self.scales[state], size)
laplace_hsmm = HSMMModel(
LaplaceEmissions(means, scales), durations, tmat
)
observations, states = laplace_hsmm.sample(10000)
Explanation: This class can be used in much the same way as the more practical GaussianHSMM. However, it lacks some convenience attributes (.means, .scales, ...) that GaussianHSMM does have.
To create an HSMM with a different class of emission PDFs, it suffices to derive from hsmmlearn.emissions.AbstractEmissions and supply the required functionality there. This is an abstract base class with a couple of abstract methods that need to be overridden in the concrete class. If you require only some of the functionality, you can override the methods that you don't need with an empty function. To see this in practice, let's create an HSMM with Laplace emission PDFs. Below, we override sample_for_state because we want to sample from the HSMM, and likelihood, needed to run Viterbi (not demonstrated).
End of explanation
state0_mask = states == 0
observations_state0 = observations[state0_mask]
fig, ax = plt.subplots()
_ = ax.hist(observations_state0, bins=40, density=True)  # `density` replaces the deprecated `normed` keyword
Explanation: Let's check that this defines indeed an HSMM with Laplacian output PDFs:
End of explanation
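As an extra check (an addition, not part of the original tutorial), the theoretical Laplace density for state 0 can be overlaid on that histogram; if the sampler is correct the two should agree closely:
# Overlay the state-0 Laplace pdf (loc=means[0], scale=scales[0]) on the
# empirical histogram of state-0 observations.
xs = np.linspace(observations_state0.min(), observations_state0.max(), 200)
fig, ax = plt.subplots()
ax.hist(observations_state0, bins=40, density=True, alpha=0.5)
ax.plot(xs, laplace.pdf(xs, loc=means[0], scale=scales[0]), 'r', linewidth=2)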
from hsmmlearn.hsmm import GaussianHSMM
durations = np.zeros((3, 10))  # 3 states, possible durations 1..10
durations[:, :] = 0.05
durations[0, 1] = durations[1, 5] = durations[2, 9] = 0.55
tmat = np.array([
[0.0, 0.5, 0.5],
[0.3, 0.0, 0.7],
[0.6, 0.4, 0.0]
])
means = np.array([0.0, 5.0, 10.0])
scales = np.ones_like(means)
hsmm = GaussianHSMM(
means, scales, durations, tmat,
)
hsmm.durations
observations, states = hsmm.sample(200)
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(means[states], 'r', linewidth=2, alpha=.8)
ax.plot(observations)
Explanation: Looks indeed like a Laplacian distribution!
Model inference
In the next section, we'll tackle the "third question" outlined by Rabiner: given a sequence of observed data, what is the "most likely" model that could have given rise to these observations? To achieve this, we'll employ an iterative procedure known as the expectation maximization algorithm, which adjusts the transition/emission/duration data to generate the optimal model.
We start by sampling from a Gaussian HSMM that has three states, with very clearly separated durations.
End of explanation
equal_prob_durations = np.full((3, 10), 0.1)
new_hsmm = GaussianHSMM(
means, scales, equal_prob_durations, tmat,
)
equal_prob_durations
Explanation: The plot shows clearly that the durations are separated: state 0, with mean 0.0, has very short durations, while states 1 and 2 have much longer durations.
Having sampled from the HSMM, we'll now forget our duration distribution and set up a new HSMM with a flat duration distribution.
End of explanation
# Fit returns a bool to indicate whether the EM algorithm converged,
# and the log-likelihood after the adjustments are made.
new_hsmm.fit(observations)
Explanation: Using the .fit method, we'll run the expectation-maximization algorithm on our new duration-agnostic HSMM. This will adjust the parameters of the HSMM in place, to best match the given observations.
End of explanation
np.set_printoptions(precision=1)
print(new_hsmm.durations)
np.set_printoptions()
Explanation: If we examine the adjusted duration distribution, we see that this reproduces to some extent the original distribution, with very pronounced probabilities for duration 2 in state 0, duration 6 in state 1, and duration 10 in state 2.
End of explanation
np.set_printoptions(precision=1)
print('New transition matrix:')
print(new_hsmm.tmat)
print('New emission parameters:')
print('Means:', new_hsmm.means)
print('Scales:', new_hsmm.scales)
np.set_printoptions()
Explanation: In tandem with the duration distributions, the other parameters have also changed. The transition matrix has become a little more pronounced to emphasize the transitions between different states, and the locations and scales of the emission PDFs have shifted a bit.
End of explanation |
2,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MuJoCo tutorial with dm_control Python bindings
<p><small><small>Copyright 2021 The dm_control Authors.</small></p>
<p><small><small>Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at <a href="http
Step1: Imports
Run both of these cells
Step3: Model definition, compilation and rendering
We begin by describing some basic concepts of the MuJoCo physics simulation library, but recommend the official documentation for details.
Let's define a simple model with two geoms and a light.
Step5: static_model is written in MuJoCo's XML-based MJCF modeling language. The from_xml_string() method invokes the model compiler, which instantiates the library's internal data structures. These can be accessed via the physics object, see below.
Adding DOFs and simulating, advanced rendering
This is a perfectly legitimate model, but if we simulate it, nothing will happen except for time advancing. This is because this model has no degrees of freedom (DOFs). We add DOFs by adding joints to bodies, specifying how they can move with respect to their parents. Let us add a hinge joint and re-render, visualizing the joint axis.
Step6: The things that move (and which have inertia) are called bodies. The body's child joint specifies how that body can move with respect to its parent, in this case box_and_sphere with respect to the worldbody.
Note that the body's frame is rotated with an euler directive, and its children, the geoms and the joint, rotate with it. This is to emphasize the local-to-parent-frame nature of position and orientation directives in MJCF.
Let's make a video, to get a sense of the dynamics and to see the body swinging under gravity.
Step7: Note how we collect the video frames. Because physics simulation timesteps are generally much smaller than framerates (the default timestep is 2ms), we don't render after each step.
Rendering options
Like joint visualisation, additional rendering options are exposed as parameters to the render method.
Step8: MuJoCo basics and named indexing
mjModel
MuJoCo's mjModel, encapsulated in physics.model, contains the model description, including the default initial state and other fixed quantities which are not a function of the state, e.g. the positions of geoms in the frame of their parent body. The (x, y, z) offsets of the box and sphere geoms, relative to their parent body box_and_sphere, are given by model.geom_pos
Step9: Docstrings of attributes provide short descriptions.
Step10: The model.opt structure contains global quantities like
Step11: mjData
mjData, encapsulated in physics.data, contains the state and quantities that depend on it. The state is made up of time, generalized positions and generalised velocities. These are respectively data.time, data.qpos and data.qvel.
Let's print the state of the swinging body where we left it
Step12: physics.data also contains functions of the state, for example the cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in data.geom_xpos
Step13: Named indexing
The semantics of the above arrays are made clearer using the named wrapper, which assigns names to rows and type names to columns.
Step14: Note how model.geom_pos and data.geom_xpos have similar semantics but very different meanings.
Step15: Name strings can be used to index into the relevant quantities, making code much more readable and robust.
Step16: Joint names can be used to index into quantities in joint space (beginning with the letter q)
Step17: We can mix NumPy slicing operations with named indexing. As an example, we can set the color of the box using its name ("red_box") as an index into the rows of the geom_rgba array.
Step18: Note that while physics.model quantities will not be changed by the engine, we can change them ourselves between steps.
Setting the state with reset_context()
In order for data quantities that are functions of the state to be in sync with the state, MuJoCo's mj_step1() needs to be called. This is facilitated by the reset_context() context, please see in-depth discussion in Section 2.1 of the dm_control tech report.
Step20: Free bodies
Step21: Note several new features of this model definition
Step22: The velocities are easy to interpret, 6 zeros, one for each DOF. What about the length-7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. These normalized four-vectors, which preserve the topology of the orientation group, are the reason that data.qpos can be bigger than data.qvel
Step23: Measuring values from physics.data
The physics.data structure contains all of the dynamic variables and intermediate results produced by the simulation. These are expected to change on each timestep.
Below we simulate for 2000 timesteps and plot the state and height of the sphere as a function of time.
Step25: Example
Step26: Timing
Let's see a video of it in action while we time the components
Step27: Chaos
This is a chaotic system: small perturbations in initial conditions accumulate quickly
Step28: Timestep and accuracy
Q
Step29: Timestep and divergence
When we increase the time step, the simulation quickly diverges
Step31: Contacts
Step32: Analysis of contact forces
Step34: Friction
Step36: Actuators and tendons
Step37: Let's ignore the actuator and apply forces directly to the body
Step39: Kinematic Jacobians
A Jacobian is a derivative matrix of a vector-valued function. MuJoCo computes the Jacobians of all transformations between joint space and Cartesian space.
Below we use the Jacobian of the end effector position to create a virtual spring to some random target. | Python Code:
#@title Run to install MuJoCo and `dm_control`
import distutils.util
import subprocess
if subprocess.run('nvidia-smi').returncode:
raise RuntimeError(
'Cannot communicate with GPU. '
'Make sure you are using a GPU Colab runtime. '
'Go to the Runtime menu and select Choose runtime type.')
print('Installing dm_control...')
!pip install -q dm_control>=1.0.3.post1
# Configure dm_control to use the EGL rendering backend (requires GPU)
%env MUJOCO_GL=egl
print('Checking that the dm_control installation succeeded...')
try:
from dm_control import suite
env = suite.load('cartpole', 'swingup')
pixels = env.physics.render()
except Exception as e:
raise e from RuntimeError(
'Something went wrong during installation. Check the shell output above '
'for more information.\n'
'If using a hosted Colab runtime, make sure you enable GPU acceleration '
'by going to the Runtime menu and selecting "Choose runtime type".')
else:
del suite, pixels
print('dm_control installation succeeded.')
Explanation: MuJoCo tutorial with dm_control Python bindings
<p><small><small>Copyright 2021 The dm_control Authors.</small></p>
<p><small><small>Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at <a href="http://www.apache.org/licenses/LICENSE-2.0">http://www.apache.org/licenses/LICENSE-2.0</a>.</small></small></p>
<p><small><small>Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</small></small></p>
This notebook provides an overview tutorial of the MuJoCo physics simulator, using the dm_control Python bindings. It is similar to the notebook in dm_control/tutorial.ipynb, but focuses on teaching MuJoCo itself, rather than the additional features provided by the Python package.
A Colab runtime with GPU acceleration is required. If you're using a CPU-only runtime, you can switch using the menu "Runtime > Change runtime type".
<!-- Internal installation instructions. -->
Installing dm_control on Colab
End of explanation
#@title All `dm_control` imports required for this tutorial
# The basic mujoco wrapper.
from dm_control import mujoco
# Access to enums and MuJoCo library functions.
from dm_control.mujoco.wrapper.mjbindings import enums
from dm_control.mujoco.wrapper.mjbindings import mjlib
# Composer high level imports
from dm_control import composer
from dm_control.composer.observation import observable
from dm_control.composer import variation
# Imports for Composer tutorial example
from dm_control.composer.variation import distributions
from dm_control.composer.variation import noises
from dm_control.locomotion.arenas import floors
# Control Suite
from dm_control import suite
# Run through corridor example
from dm_control.locomotion.walkers import cmu_humanoid
from dm_control.locomotion.arenas import corridors as corridor_arenas
from dm_control.locomotion.tasks import corridors as corridor_tasks
# Soccer
from dm_control.locomotion import soccer
# Manipulation
from dm_control import manipulation
#@title Other imports and helper functions
# General
import copy
import os
import time
import itertools
from IPython.display import clear_output
import numpy as np
# Graphics-related
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
from IPython.display import HTML
import PIL.Image
# Internal loading of video libraries.
# Use svg backend for figure rendering
%config InlineBackend.figure_format = 'svg'
# Font sizes
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
# Inline video helper function
if os.environ.get('COLAB_NOTEBOOK_TEST', False):
# We skip video generation during tests, as it is quite expensive.
display_video = lambda *args, **kwargs: None
else:
def display_video(frames, framerate=30):
height, width, _ = frames[0].shape
dpi = 70
orig_backend = matplotlib.get_backend()
matplotlib.use('Agg') # Switch to headless 'Agg' to inhibit figure rendering.
fig, ax = plt.subplots(1, 1, figsize=(width / dpi, height / dpi), dpi=dpi)
matplotlib.use(orig_backend) # Switch back to the original backend.
ax.set_axis_off()
ax.set_aspect('equal')
ax.set_position([0, 0, 1, 1])
im = ax.imshow(frames[0])
def update(frame):
im.set_data(frame)
return [im]
interval = 1000/framerate
anim = animation.FuncAnimation(fig=fig, func=update, frames=frames,
interval=interval, blit=True, repeat=False)
return HTML(anim.to_html5_video())
# Seed numpy's global RNG so that cell outputs are deterministic. We also try to
# use RandomState instances that are local to a single cell wherever possible.
np.random.seed(42)
Explanation: Imports
Run both of these cells:
End of explanation
#@title A static model {vertical-output: true}
static_model = """
<mujoco>
<worldbody>
<light name="top" pos="0 0 1"/>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(static_model)
pixels = physics.render()
PIL.Image.fromarray(pixels)
Explanation: Model definition, compilation and rendering
We begin by describing some basic concepts of the MuJoCo physics simulation library, but recommend the official documentation for details.
Let's define a simple model with two geoms and a light.
End of explanation
#@title A child body with a joint { vertical-output: true }
swinging_body = """
<mujoco>
<worldbody>
<light name="top" pos="0 0 1"/>
<body name="box_and_sphere" euler="0 0 -30">
<joint name="swing" type="hinge" axis="1 -1 0" pos="-.2 -.2 -.2"/>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</body>
</worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(swinging_body)
# Visualize the joint axis.
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True
pixels = physics.render(scene_option=scene_option)
PIL.Image.fromarray(pixels)
Explanation: static_model is written in MuJoCo's XML-based MJCF modeling language. The from_xml_string() method invokes the model compiler, which instantiates the library's internal data structures. These can be accessed via the physics object, see below.
Adding DOFs and simulating, advanced rendering
This is a perfectly legitimate model, but if we simulate it, nothing will happen except for time advancing. This is because this model has no degrees of freedom (DOFs). We add DOFs by adding joints to bodies, specifying how they can move with respect to their parents. Let us add a hinge joint and re-render, visualizing the joint axis.
End of explanation
#@title Making a video {vertical-output: true}
duration = 2 # (seconds)
framerate = 30 # (Hz)
# Visualize the joint axis
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True
# Simulate and display video.
frames = []
physics.reset() # Reset state and time
while physics.data.time < duration:
physics.step()
if len(frames) < physics.data.time * framerate:
pixels = physics.render(scene_option=scene_option)
frames.append(pixels)
display_video(frames, framerate)
Explanation: The things that move (and which have inertia) are called bodies. The body's child joint specifies how that body can move with respect to its parent, in this case box_and_sphere with respect to the worldbody.
Note that the body's frame is rotated with an euler directive, and its children, the geoms and the joint, rotate with it. This is to emphasize the local-to-parent-frame nature of position and orientation directives in MJCF.
Let's make a video, to get a sense of the dynamics and to see the body swinging under gravity.
End of explanation
#@title Enable transparency and frame visualization {vertical-output: true}
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.frame = enums.mjtFrame.mjFRAME_GEOM
scene_option.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True
pixels = physics.render(scene_option=scene_option)
PIL.Image.fromarray(pixels)
#@title Depth rendering {vertical-output: true}
# depth is a float array, in meters.
depth = physics.render(depth=True)
# Shift nearest values to the origin.
depth -= depth.min()
# Scale by 2 mean distances of near rays.
depth /= 2*depth[depth <= 1].mean()
# Scale to [0, 255]
pixels = 255*np.clip(depth, 0, 1)
PIL.Image.fromarray(pixels.astype(np.uint8))
#@title Segmentation rendering {vertical-output: true}
seg = physics.render(segmentation=True)
# Display the contents of the first channel, which contains object
# IDs. The second channel, seg[:, :, 1], contains object types.
geom_ids = seg[:, :, 0]
# Infinity is mapped to -1
geom_ids = geom_ids.astype(np.float64) + 1
# Scale to [0, 1]
geom_ids = geom_ids / geom_ids.max()
pixels = 255*geom_ids
PIL.Image.fromarray(pixels.astype(np.uint8))
#@title Projecting from world to camera coordinates {vertical-output: true}
# Get the world coordinates of the box corners
box_pos = physics.named.data.geom_xpos['red_box']
box_mat = physics.named.data.geom_xmat['red_box'].reshape(3, 3)
box_size = physics.named.model.geom_size['red_box']
offsets = np.array([-1, 1]) * box_size[:, None]
xyz_local = np.stack(itertools.product(*offsets)).T
xyz_global = box_pos[:, None] + box_mat @ xyz_local
# Camera matrices multiply homogenous [x, y, z, 1] vectors.
corners_homogeneous = np.ones((4, xyz_global.shape[1]), dtype=float)
corners_homogeneous[:3, :] = xyz_global
# Get the camera matrix.
camera = mujoco.Camera(physics)
camera_matrix = camera.matrix
# Project world coordinates into pixel space. See:
# https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula
xs, ys, s = camera_matrix @ corners_homogeneous
# x and y are in the pixel coordinate system.
x = xs / s
y = ys / s
# Render the camera view and overlay the projected corner coordinates.
pixels = camera.render()
fig, ax = plt.subplots(1, 1)
ax.imshow(pixels)
ax.plot(x, y, '+', c='w')
ax.set_axis_off()
Explanation: Note how we collect the video frames. Because physics simulation timesteps are generally much smaller than framerates (the default timestep is 2ms), we don't render after each step.
Rendering options
Like joint visualisation, additional rendering options are exposed as parameters to the render method.
End of explanation
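To make the frame-skipping arithmetic concrete (a small addition, not from the original tutorial), this shows how many physics steps fall between two rendered frames:
# With timestep = 2 ms and a 30 Hz video, roughly 17 physics steps happen
# between consecutive rendered frames.
steps_per_frame = 1.0 / (framerate * physics.model.opt.timestep)
print('physics steps per video frame: ~%.1f' % steps_per_frame)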
physics.model.geom_pos
Explanation: MuJoCo basics and named indexing
mjModel
MuJoCo's mjModel, encapsulated in physics.model, contains the model description, including the default initial state and other fixed quantities which are not a function of the state, e.g. the positions of geoms in the frame of their parent body. The (x, y, z) offsets of the box and sphere geoms, relative to their parent body box_and_sphere, are given by model.geom_pos:
End of explanation
help(type(physics.model).geom_pos)
Explanation: Docstrings of attributes provide short descriptions.
End of explanation
print('timestep', physics.model.opt.timestep)
print('gravity', physics.model.opt.gravity)
Explanation: The model.opt structure contains global quantities like
End of explanation
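These fields can also be modified between steps; the snippet below is only an illustration and assumes, as for the other mjModel arrays, that in-place assignment to the wrapped gravity vector is supported:
# Illustration: temporarily weaken gravity, then restore it (assumed to work
# because physics.model fields are writable numpy views onto mjModel).
original_gravity = physics.model.opt.gravity.copy()
physics.model.opt.gravity[:] = [0., 0., -1.0]
print('modified gravity', physics.model.opt.gravity)
physics.model.opt.gravity[:] = original_gravity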
print(physics.data.time, physics.data.qpos, physics.data.qvel)
Explanation: mjData
mjData, encapsulated in physics.data, contains the state and quantities that depend on it. The state is made up of time, generalized positions and generalised velocities. These are respectively data.time, data.qpos and data.qvel.
Let's print the state of the swinging body where we left it:
End of explanation
print(physics.data.geom_xpos)
Explanation: physics.data also contains functions of the state, for example the cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in data.geom_xpos:
End of explanation
print(physics.named.data.geom_xpos)
Explanation: Named indexing
The semantics of the above arrays are made clearer using the named wrapper, which assigns names to rows and type names to columns.
End of explanation
print(physics.named.model.geom_pos)
Explanation: Note how model.geom_pos and data.geom_xpos have similar semantics but very different meanings.
End of explanation
physics.named.data.geom_xpos['green_sphere', 'z']
Explanation: Name strings can be used to index into the relevant quantities, making code much more readable and robust.
End of explanation
physics.named.data.qpos['swing']
Explanation: Joint names can be used to index into quantities in joint space (beginning with the letter q):
End of explanation
#@title Changing colors using named indexing{vertical-output: true}
random_rgb = np.random.rand(3)
physics.named.model.geom_rgba['red_box', :3] = random_rgb
pixels = physics.render()
PIL.Image.fromarray(pixels)
Explanation: We can mix NumPy slicing operations with named indexing. As an example, we can set the color of the box using its name ("red_box") as an index into the rows of the geom_rgba array.
End of explanation
physics.named.data.qpos['swing'] = np.pi
print('Without reset_context, spatial positions are not updated:',
physics.named.data.geom_xpos['green_sphere', ['z']])
with physics.reset_context():
physics.named.data.qpos['swing'] = np.pi
print('After reset_context, positions are up-to-date:',
physics.named.data.geom_xpos['green_sphere', ['z']])
Explanation: Note that while physics.model quantities will not be changed by the engine, we can change them ourselves between steps.
Setting the state with reset_context()
In order for data quantities that are functions of the state to be in sync with the state, MuJoCo's mj_step1() needs to be called. This is facilitated by the reset_context() context, please see in-depth discussion in Section 2.1 of the dm_control tech report.
End of explanation
#@title The "tippe-top" model{vertical-output: true}
tippe_top = """
<mujoco model="tippe top">
<option integrator="RK4"/>
<asset>
<texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
rgb2=".2 .3 .4" width="300" height="300"/>
<material name="grid" texture="grid" texrepeat="8 8" reflectance=".2"/>
</asset>
<worldbody>
<geom size=".2 .2 .01" type="plane" material="grid"/>
<light pos="0 0 .6"/>
<camera name="closeup" pos="0 -.1 .07" xyaxes="1 0 0 0 1 2"/>
<body name="top" pos="0 0 .02">
<freejoint/>
<geom name="ball" type="sphere" size=".02" />
<geom name="stem" type="cylinder" pos="0 0 .02" size="0.004 .008"/>
<geom name="ballast" type="box" size=".023 .023 0.005" pos="0 0 -.015"
contype="0" conaffinity="0" group="3"/>
</body>
</worldbody>
<keyframe>
<key name="spinning" qpos="0 0 0.02 1 0 0 0" qvel="0 0 0 0 1 200" />
</keyframe>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(tippe_top)
PIL.Image.fromarray(physics.render(camera_id='closeup'))
Explanation: Free bodies: the self-inverting "tippe-top"
A free body is a body with a free joint, with 6 movement DOFs: 3 translations and 3 rotations. We could give our box_and_sphere body a free joint and watch it fall, but let's look at something more interesting. A "tippe top" is a spinning toy which flips itself on its head (Wikipedia). We model it as follows:
End of explanation
print('positions', physics.data.qpos)
print('velocities', physics.data.qvel)
Explanation: Note several new features of this model definition:
0. The free joint is added with the <freejoint/> clause, which is similar to <joint type="free"/>, but prohibits unphysical attributes like friction or stiffness.
1. We use the <option/> clause to set the integrator to the more accurate Runge Kutta 4th order.
2. We define the floor's grid material inside the <asset/> clause and reference it in the floor geom.
3. We use an invisible and non-colliding box geom called ballast to move the top's center-of-mass lower. Having a low center of mass is (counter-intuitively) required for the flipping behaviour to occur.
4. We save our initial spinning state as a keyframe. It has a high rotational velocity around the z-axis, but is not perfectly oriented with the world.
5. We define a <camera> in our model, and then render from it using the camera_id argument to render().
Let us examine the state:
End of explanation
#@title Video of the tippe-top {vertical-output: true}
duration = 7 # (seconds)
framerate = 60 # (Hz)
# Simulate and display video.
frames = []
physics.reset(0) # Reset to keyframe 0 (load a saved state).
while physics.data.time < duration:
physics.step()
if len(frames) < (physics.data.time) * framerate:
pixels = physics.render(camera_id='closeup')
frames.append(pixels)
display_video(frames, framerate)
Explanation: The velocities are easy to interpret, 6 zeros, one for each DOF. What about the length-7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. These normalized four-vectors, which preserve the topology of the orientation group, are the reason that data.qpos can be bigger than data.qvel: 3D orientations are represented with 4 numbers while angular velocities are 3 numbers.
End of explanation
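A small supplementary check (assuming the physics object above is still in scope) makes the 4-vs-3 bookkeeping concrete:
```python
# Supplementary check: qpos = 3 translations + a 4-number unit quaternion,
# qvel = 3 linear + 3 angular velocities.
print('nq =', physics.model.nq, ' nv =', physics.model.nv)        # 7 and 6
print('quaternion norm:', np.linalg.norm(physics.data.qpos[3:]))  # ~1.0
```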
#@title Measuring values {vertical-output: true}
timevals = []
angular_velocity = []
stem_height = []
# Simulate and save data
physics.reset(0)
while physics.data.time < duration:
physics.step()
timevals.append(physics.data.time)
angular_velocity.append(physics.data.qvel[3:6].copy())
stem_height.append(physics.named.data.geom_xpos['stem', 'z'])
dpi = 100
width = 480
height = 640
figsize = (width / dpi, height / dpi)
_, ax = plt.subplots(2, 1, figsize=figsize, dpi=dpi, sharex=True)
ax[0].plot(timevals, angular_velocity)
ax[0].set_title('angular velocity')
ax[0].set_ylabel('radians / second')
ax[1].plot(timevals, stem_height)
ax[1].set_xlabel('time (seconds)')
ax[1].set_ylabel('meters')
_ = ax[1].set_title('stem height')
Explanation: Measuring values from physics.data
The physics.data structure contains all of the dynamic variables and intermediate results produced by the simulation. These are expected to change on each timestep.
Below we run the same simulation again, this time recording and plotting the angular velocity and the height of the stem as functions of time.
End of explanation
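One supplementary caution (added note): fields of physics.data are views into MuJoCo's memory and are overwritten on every step, which is why the measurement loop stores qvel[3:6].copy() rather than the raw slice.
```python
# Supplementary sketch: take snapshots, not views, when logging across steps.
view = physics.data.qvel[3:6]              # will silently change on the next step
snapshot = physics.data.qvel[3:6].copy()   # a frozen copy, safe to store
```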
#@title chaotic pendulum {vertical-output: true}
chaotic_pendulum = """
<mujoco>
<option timestep=".001" >
<flag energy="enable" contact="disable"/>
</option>
<default>
<joint type="hinge" axis="0 -1 0"/>
<geom type="capsule" size=".02"/>
</default>
<worldbody>
<light pos="0 -.4 1"/>
<camera name="fixed" pos="0 -1 0" xyaxes="1 0 0 0 0 1"/>
<body name="0" pos="0 0 .2">
<joint name="root"/>
<geom fromto="-.2 0 0 .2 0 0" rgba="1 1 0 1"/>
<geom fromto="0 0 0 0 0 -.25" rgba="1 1 0 1"/>
<body name="1" pos="-.2 0 0">
<joint/>
<geom fromto="0 0 0 0 0 -.2" rgba="1 0 0 1"/>
</body>
<body name="2" pos=".2 0 0">
<joint/>
<geom fromto="0 0 0 0 0 -.2" rgba="0 1 0 1"/>
</body>
<body name="3" pos="0 0 -.25">
<joint/>
<geom fromto="0 0 0 0 0 -.2" rgba="0 0 1 1"/>
</body>
</body>
</worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(chaotic_pendulum)
pixels = physics.render(height=480, width=640, camera_id="fixed")
PIL.Image.fromarray(pixels)
Explanation: Example: A chaotic pendulum
Below is a model of a chaotic pendulum, similar to this one in the San Francisco Exploratorium.
End of explanation
#@title physics vs. rendering: {vertical-output: true}
# setup
n_seconds = 6
framerate = 30 # Hz
n_frames = int(n_seconds * framerate)
frames = []
# set initial state
with physics.reset_context():
physics.named.data.qvel['root'] = 10
# simulate and record frames
frame = 0
sim_time = 0
render_time = 0
n_steps = 0
for i in range(n_frames):
while physics.data.time * framerate < i:
tic = time.time()
physics.step()
sim_time += time.time() - tic
n_steps += 1
tic = time.time()
frame = physics.render(240, 320, camera_id="fixed")
render_time += time.time() - tic
frames.append(frame.copy())
# print timing and play video
print('simulation: {:6.2f} ms/frame ({:5.0f}Hz)'.format(
1000*sim_time/n_steps, n_steps/sim_time))
print('rendering: {:6.2f} ms/frame ({:5.0f}Hz)'.format(
1000*render_time/n_frames, n_frames/render_time))
print('\n')
# show video
display_video(frames, framerate)
Explanation: Timing
Let's see a video of it in action while we time the components:
End of explanation
#@title chaos: sensitivity to perturbation {vertical-output: true}
PERTURBATION = 1e-7
SIM_DURATION = 10 # seconds
NUM_REPEATS = 8
# preallocate
n_steps = int(SIM_DURATION / physics.model.opt.timestep)
sim_time = np.zeros(n_steps)
angle = np.zeros(n_steps)
energy = np.zeros(n_steps)
# prepare plotting axes
_, ax = plt.subplots(2, 1, sharex=True)
# simulate NUM_REPEATS times with slightly different initial conditions
for _ in range(NUM_REPEATS):
# initialize
with physics.reset_context():
physics.data.qvel[0] = 10 # root joint velocity
# perturb initial velocities
physics.data.qvel[:] += PERTURBATION * np.random.randn(physics.model.nv)
# simulate
for i in range(n_steps):
physics.step()
sim_time[i] = physics.data.time
angle[i] = physics.named.data.qpos['root']
energy[i] = physics.data.energy[0] + physics.data.energy[1]
# plot
ax[0].plot(sim_time, angle)
ax[1].plot(sim_time, energy)
# finalize plot
ax[0].set_title('root angle')
ax[0].set_ylabel('radian')
ax[1].set_title('total energy')
ax[1].set_ylabel('Joule')
ax[1].set_xlabel('second')
plt.tight_layout()
Explanation: Chaos
This is a chaotic system: small perturbations in the initial conditions accumulate quickly:
End of explanation
#@title reducing the time-step: {vertical-output: true}
SIM_DURATION = 10 # (seconds)
TIMESTEPS = np.power(10, np.linspace(-2, -4, 5))
# prepare plotting axes
_, ax = plt.subplots(1, 1)
for dt in TIMESTEPS:
# set timestep, print
physics.model.opt.timestep = dt
# allocate
n_steps = int(SIM_DURATION / physics.model.opt.timestep)
sim_time = np.zeros(n_steps)
energy = np.zeros(n_steps)
# initialize
with physics.reset_context():
physics.data.qvel[0] = 9 # root joint velocity
# simulate
print('{} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))
for i in range(n_steps):
physics.step()
sim_time[i] = physics.data.time
energy[i] = physics.data.energy[0] + physics.data.energy[1]
# plot
ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))
# finalize plot
ax.set_title('energy')
ax.set_ylabel('Joule')
ax.set_xlabel('second')
ax.legend(frameon=True);
plt.tight_layout()
Explanation: Timestep and accuracy
Q: Why is the energy varying at all? There is no friction or damping, so this system should conserve energy.
A: Because of the discretization of time.
If we decrease the timestep we'll get better accuracy, hence better energy conservation:
End of explanation
#@title increasing the time-step: {vertical-output: true}
SIM_DURATION = 10 # (seconds)
TIMESTEPS = np.power(10, np.linspace(-2, -1.5, 7))
# get plotting axes
ax = plt.gca()
for dt in TIMESTEPS:
# set timestep
physics.model.opt.timestep = dt
# allocate
n_steps = int(SIM_DURATION / physics.model.opt.timestep)
sim_time = np.zeros(n_steps)
energy = np.zeros(n_steps) * np.nan
speed = np.zeros(n_steps) * np.nan
# initialize
with physics.reset_context():
physics.data.qvel[0] = 11 # root joint velocity
# simulate
print('{} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))
for i in range(n_steps):
try:
physics.step()
except BaseException: # raises mujoco.engine.base.PhysicsError
print('numerical divergence at timestep {}.'.format(i))
break
sim_time[i] = physics.data.time
energy[i] = sum(abs(physics.data.qvel))
speed[i] = np.linalg.norm(physics.data.qvel)
# plot
ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))
ax.set_yscale('log')
# finalize plot
ax.set_ybound(1, 1e3)
ax.set_title('energy')
ax.set_ylabel('Joule')
ax.set_xlabel('second')
ax.legend(frameon=True, loc='lower right');
plt.tight_layout()
Explanation: Timestep and divergence
When we increase the time step, the simulation quickly diverges
End of explanation
#@title 'box_and_sphere' free body: {vertical-output: true}
free_body_MJCF = """
<mujoco>
<asset>
<texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
rgb2=".2 .3 .4" width="300" height="300" mark="edge" markrgb=".2 .3 .4"/>
<material name="grid" texture="grid" texrepeat="2 2" texuniform="true"
reflectance=".2"/>
</asset>
<worldbody>
<light pos="0 0 1" mode="trackcom"/>
<geom name="ground" type="plane" pos="0 0 -.5" size="2 2 .1" material="grid" solimp=".99 .99 .01" solref=".001 1"/>
<body name="box_and_sphere" pos="0 0 0">
<freejoint/>
<geom name="red_box" type="box" size=".1 .1 .1" rgba="1 0 0 1" solimp=".99 .99 .01" solref=".001 1"/>
<geom name="green_sphere" size=".06" pos=".1 .1 .1" rgba="0 1 0 1"/>
<camera name="fixed" pos="0 -.6 .3" xyaxes="1 0 0 0 1 2"/>
<camera name="track" pos="0 -.6 .3" xyaxes="1 0 0 0 1 2" mode="track"/>
</body>
</worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(free_body_MJCF)
pixels = physics.render(400, 600, "fixed")
PIL.Image.fromarray(pixels)
#@title contacts in slow-motion: (0.25x){vertical-output: true}
n_frames = 200
height = 240
width = 320
frames = np.zeros((n_frames, height, width, 3), dtype=np.uint8)
# visualize contact frames and forces, make body transparent
options = mujoco.wrapper.core.MjvOption()
mujoco.wrapper.core.mjlib.mjv_defaultOption(options.ptr)
options.flags[enums.mjtVisFlag.mjVIS_CONTACTPOINT] = True
options.flags[enums.mjtVisFlag.mjVIS_CONTACTFORCE] = True
options.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True
# tweak scales of contact visualization elements
physics.model.vis.scale.contactwidth = 0.1
physics.model.vis.scale.contactheight = 0.03
physics.model.vis.scale.forcewidth = 0.05
physics.model.vis.map.force = 0.3
# random initial rotational velocity:
with physics.reset_context():
physics.data.qvel[3:6] = 5*np.random.randn(3)
# simulate and render
for i in range(n_frames):
while physics.data.time < i/120.0: #1/4x real time
physics.step()
frames[i] = physics.render(height, width, camera_id="track", scene_option=options)
# show video
display_video(frames)
Explanation: Contacts
End of explanation
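As a brief supplementary sketch, the active contacts can also be inspected directly; the short loop here mirrors the analysis code that follows.
```python
# Supplementary sketch: list the currently active contacts.
print('number of contacts:', physics.data.ncon)
for c in physics.data.contact:
    print('geoms', c.geom1, c.geom2, ' penetration', c.dist)
```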
#@title contact-related quantities: {vertical-output: true}
n_steps = 499
# allocate
sim_time = np.zeros(n_steps)
ncon = np.zeros(n_steps)
force = np.zeros((n_steps,3))
velocity = np.zeros((n_steps, physics.model.nv))
penetration = np.zeros(n_steps)
acceleration = np.zeros((n_steps, physics.model.nv))
forcetorque = np.zeros(6)
# random initial rotational velocity:
with physics.reset_context():
physics.data.qvel[3:6] = 2*np.random.randn(3)
# simulate and save data
for i in range(n_steps):
physics.step()
sim_time[i] = physics.data.time
ncon[i] = physics.data.ncon
velocity[i] = physics.data.qvel[:]
acceleration[i] = physics.data.qacc[:]
# iterate over active contacts, save force and distance
for j,c in enumerate(physics.data.contact):
mjlib.mj_contactForce(physics.model.ptr, physics.data.ptr,
j, forcetorque)
force[i] += forcetorque[0:3]
penetration[i] = min(penetration[i], c.dist)
# we could also do
# force[i] += physics.data.qfrc_constraint[0:3]
# do you see why?
# plot
_, ax = plt.subplots(3, 2, sharex=True, figsize=(7, 10))
lines = ax[0,0].plot(sim_time, force)
ax[0,0].set_title('contact force')
ax[0,0].set_ylabel('Newton')
ax[0,0].legend(iter(lines), ('normal z', 'friction x', 'friction y'));
ax[1,0].plot(sim_time, acceleration)
ax[1,0].set_title('acceleration')
ax[1,0].set_ylabel('(meter,radian)/s/s')
ax[2,0].plot(sim_time, velocity)
ax[2,0].set_title('velocity')
ax[2,0].set_ylabel('(meter,radian)/s')
ax[2,0].set_xlabel('second')
ax[0,1].plot(sim_time, ncon)
ax[0,1].set_title('number of contacts')
ax[0,1].set_yticks(range(6))
ax[1,1].plot(sim_time, force[:,0])
ax[1,1].set_yscale('log')
ax[1,1].set_title('normal (z) force - log scale')
ax[1,1].set_ylabel('Newton')
z_gravity = -physics.model.opt.gravity[2]
mg = physics.named.model.body_mass["box_and_sphere"] * z_gravity
mg_line = ax[1,1].plot(sim_time, np.ones(n_steps)*mg, label='m*g', linewidth=1)
ax[1,1].legend()
ax[2,1].plot(sim_time, 1000*penetration)
ax[2,1].set_title('penetration depth')
ax[2,1].set_ylabel('millimeter')
ax[2,1].set_xlabel('second')
plt.tight_layout()
Explanation: Analysis of contact forces
End of explanation
#@title tangential friction and slope: {vertical-output: true}
MJCF = """
<mujoco>
<asset>
<texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
rgb2=".2 .3 .4" width="300" height="300" mark="none"/>
<material name="grid" texture="grid" texrepeat="6 6"
texuniform="true" reflectance=".2"/>
<material name="wall" rgba='.5 .5 .5 1'/>
</asset>
<default>
<geom type="box" size=".05 .05 .05" />
<joint type="free"/>
</default>
<worldbody>
<light name="light" pos="-.2 0 1"/>
<geom name="ground" type="plane" size=".5 .5 10" material="grid"
zaxis="-.3 0 1" friction=".1"/>
<camera name="y" pos="-.1 -.6 .3" xyaxes="1 0 0 0 1 2"/>
<body pos="0 0 .1">
<joint/>
<geom/>
</body>
<body pos="0 .2 .1">
<joint/>
<geom friction=".33"/>
</body>
</worldbody>
</mujoco>
"""
# load
physics = mujoco.Physics.from_xml_string(MJCF)
n_frames = 60
height = 480
width = 480
video = np.zeros((n_frames, height, width, 3), dtype=np.uint8)
# simulate and render
physics.reset()
for i in range(n_frames):
while physics.data.time < i/30.0:
physics.step()
video[i] = physics.render(height, width, "y")
display_video(video)
Explanation: Friction
End of explanation
#@title bat and piñata: {vertical-output: true}
MJCF = """
<mujoco>
<asset>
<texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
rgb2=".2 .3 .4" width="300" height="300" mark="none"/>
<material name="grid" texture="grid" texrepeat="1 1"
texuniform="true" reflectance=".2"/>
</asset>
<worldbody>
<light name="light" pos="0 0 1"/>
<geom name="floor" type="plane" pos="0 0 -.5" size="2 2 .1" material="grid"/>
<site name="anchor" pos="0 0 .3" size=".01"/>
<camera name="fixed" pos="0 -1.3 .5" xyaxes="1 0 0 0 1 2"/>
<geom name="pole" type="cylinder" fromto=".3 0 -.5 .3 0 -.1" size=".04"/>
<body name="bat" pos=".3 0 -.1">
<joint name="swing" type="hinge" damping="1" axis="0 0 1"/>
<geom name="bat" type="capsule" fromto="0 0 .04 0 -.3 .04"
size=".04" rgba="0 0 1 1"/>
</body>
<body name="box_and_sphere" pos="0 0 0">
<joint name="free" type="free"/>
<geom name="red_box" type="box" size=".1 .1 .1" rgba="1 0 0 1"/>
<geom name="green_sphere" size=".06" pos=".1 .1 .1" rgba="0 1 0 1"/>
<site name="hook" pos="-.1 -.1 -.1" size=".01"/>
</body>
</worldbody>
<tendon>
<spatial name="wire" limited="true" range="0 0.35" width="0.003">
<site site="anchor"/>
<site site="hook"/>
</spatial>
</tendon>
<actuator>
<motor name="my_motor" joint="swing" gear="1"/>
</actuator>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(MJCF)
PIL.Image.fromarray(physics.render(480, 480, "fixed") )
#@title actuated bat and passive piñata: {vertical-output: true}
n_frames = 180
height = 240
width = 320
video = np.zeros((n_frames, height, width, 3), dtype=np.uint8)
# constant actuator signal
with physics.reset_context():
physics.named.data.ctrl["my_motor"] = 20
# simulate and render
for i in range(n_frames):
while physics.data.time < i/30.0:
physics.step()
video[i] = physics.render(height, width, "fixed")
display_video(video)
Explanation: Actuators and tendons
End of explanation
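A small supplementary note: the actuator in the piñata example was driven through the named view of the control vector; the same vector is also available positionally, with one entry per actuator.
```python
# Supplementary sketch: physics.data.ctrl has model.nu entries (one per actuator).
print('number of actuators:', physics.model.nu)
physics.data.ctrl[0] = 0.0   # switch the motor back off
```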
#@title actuated piñata: {vertical-output: true}
n_frames = 300
height = 240
width = 320
video = np.zeros((n_frames, height, width, 3), dtype=np.uint8)
# constant actuator signal
physics.reset()
# gravity compensation
mg = -(physics.named.model.body_mass["box_and_sphere"] *
physics.model.opt.gravity[2])
physics.named.data.xfrc_applied["box_and_sphere", 2] = mg
# One Newton in the x direction
physics.named.data.xfrc_applied["box_and_sphere", 0] = 1
# simulate and render
for i in range(n_frames):
while physics.data.time < i/30.0:
physics.step()
video[i] = physics.render(height, width)
display_video(video)
Explanation: Let's ignore the actuator and apply forces directly to the body:
End of explanation
#@title virtual spring-damper: {vertical-output: true}
MJCF = """
<mujoco>
<asset>
<texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
rgb2=".2 .3 .4" width="300" height="300" mark="none"/>
<material name="grid" texture="grid" texrepeat="6 6"
texuniform="true" reflectance=".2"/>
<material name="wall" rgba='.5 .5 .5 1'/>
</asset>
<option gravity="0 0 0">
<flag contact="disable"/>
</option>
<default>
<geom type="capsule" size=".02 .02 .02" />
<joint type="hinge" damping=".02"/>
</default>
<worldbody>
<light name="light" pos="0 0 1"/>
<geom name="ground" type="plane" size=".5 .5 10" material="grid"/>
<camera name="y" pos="0 -.8 .6" xyaxes="1 0 0 0 1 2"/>
<camera name="x" pos="-.8 0 .6" xyaxes="0 -1 0 1 0 2"/>
<geom fromto="0 0 0 0 0 .2" />
<body pos="0 0 .2">
<joint axis="0 0 1"/>
<joint axis="0 1 0"/>
<geom fromto="0 0 0 .2 0 0" />
<body pos=".2 0 0">
<joint axis="1 0 0"/>
<joint axis="0 1 0"/>
<geom fromto="0 0 0 0 0 .15" />
<body pos="0 0 .15">
<joint axis="0 0 1"/>
<joint axis="0 1 0"/>
<geom fromto="0 0 0 .1 0 0"/>
<geom name="fingertip" type="box" pos=".1 0 0" rgba="1 0 0 1" />
</body>
</body>
</body>
<geom name="target" type="box" rgba="0 1 0 1"/>
</worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(MJCF)
# virtual spring coefficient
KP = 3
# prepare simulation
jac_pos = np.zeros((3, physics.model.nv))
jac_rot = np.zeros((3, physics.model.nv))
n_frames = 50
height = 320
width = 320
video = np.zeros((n_frames, height, 2*width, 3), dtype=np.uint8)
# place target in random location
with physics.reset_context():
target_pos = np.random.rand(3)*.5
target_pos[:2] -= .25
physics.named.model.geom_pos["target"][:] = target_pos
physics.named.model.geom_sameframe["target"] = 0
# simulate and render
for i in range(n_frames):
while physics.data.time < i/15.0:
# get Jacobian of fingertip position
mjlib.mj_jacGeom(physics.model.ptr,
physics.data.ptr,
jac_pos,
jac_rot,
physics.model.name2id('fingertip', 'geom'))
# multiply the jacobian by error to get vector in joint space
err = (physics.named.data.geom_xpos["target"] -
physics.named.data.geom_xpos["fingertip"])
jnt_err = np.dot(err, jac_pos)
# set virtual spring force
physics.data.qfrc_applied[:] = KP * jnt_err
# step
physics.step()
video[i] = np.hstack((physics.render(height, width, "y"),
physics.render(height, width, "x")))
display_video(video, framerate=24)
Explanation: Kinematic Jacobians
A Jacobian is a derivative matrix of a vector-valued function. MuJoCo computes the Jacobians of all transformations between joint space and Cartesian space.
Below we use the Jacobian of the end effector position to create a virtual spring to some random target.
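As a supplementary check (relying on err and jac_pos still holding their values from the last iteration of the fingertip loop), the control law used there is the classic Jacobian-transpose mapping, tau = KP * J^T (x_target - x_fingertip):
```python
# Supplementary check: np.dot(err, jac_pos) equals jac_pos.T.dot(err),
# since err has shape (3,) and jac_pos has shape (3, nv).
assert np.allclose(np.dot(err, jac_pos), jac_pos.T.dot(err))
```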
End of explanation |
2,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions
Making reusable blocks of code.
Starting point
Step1: What about for $a = 2$, $b = 8$, and $c = 1$?
Step3: Functions
Step5: Observe how this function works.
Step7: Summarize
Step11: Summarize
How do you get information into the function?
You get information into the function via arguments defined in the function name.
Modify
Alter the code below so it takes two arguments (a and b) and prints out both of them.
Step13: Predict
What does b=5 let you do?
Step15: b=5 allows you to define a default value for the argument.
How do you get information out of a function?
Step18: Summarize
How do you get information out of the function?
By putting return at the end of a function, you can capture output.
Modify
Alter the program below so it returns the calculated value.
Step20: To return multiple values, use commas
Step21: Implement
Write a function that uses the quadratic equation to find both roots of a polynomial for any $a$, $b$, and $c$. | Python Code:
## Code here
import math
(-4 + math.sqrt(4**2 - 4*1*3))/(2*1)
Explanation: Functions
Making reusable blocks of code.
Starting point:
In this exercise, we're going to calculate one of the roots from the quadratic formula:
$r_{p} = \frac{-b + \sqrt{b^{2} - 4ac}}{2a}$
Determine $r_{p}$ for $a = 1$, $b=4$, and $c=3$.
End of explanation
## Code here
# gonna make this saner
a = 2
b = 8
c = 1
(-b + math.sqrt(b**2 - 4*a*c))/(2*a)
Explanation: What about for $a = 2$, $b = 8$, and $c = 1$?
End of explanation
def square(x):
"""This function will square x."""
return x*x
s = square(5)
help(square)
Explanation: Functions:
Code can be organized into functions.
Functions allow you to wrap a piece of code and use it over and over.
Makes code reusable
Avoid having to re-type the same code (each time, maybe making an error)
Observe how this function works.
End of explanation
import math
def hypotenuse(y,theta):
"""Return a hypotenuse given y and theta in radians."""
return math.sin(theta)*y
h = hypotenuse(1,math.pi/2)
print(h)
Explanation: Observe how this function works.
End of explanation
def some_function(ARGUMENT):
"""Print out ARGUMENT."""
print(ARGUMENT)
return 1
some_function(10)
some_function("test")
Explanation: Summarize:
What is the syntax for defining a function?
```python
def FUNCTION_NAME(ARGUMENT_1,ARGUMENT_2,...,ARGUMENT_N):
Description of function.
do_stuff
do_other_stuff
do_more_stuff
return VALUE_1, VALUE_2, ... VALUE_N
```
How do you get information into a function?
End of explanation
def some_function(a):
"""print out a"""
print(a)
def some_function(a,b):
"""print out a and b"""
print(a,b)
Explanation: Summarize
How do you get information into the function?
You get information into the function via arguments defined in the function name.
Modify
Alter the code below so it takes two arguments (a and b) and prints out both of them.
End of explanation
def some_function(a,b=5,c=7):
"""Print a, b, and c."""
print(a,b,c)
some_function(1,c=2)
some_function(1,2)
some_function(a=5,b=4)
Explanation: Predict
What does b=5 let you do?
End of explanation
def some_function(a):
"""Multiply a by 5."""
return a*5
print(some_function(2))
print(some_function(80.5))
x = some_function(5)
print(x)
Explanation: b=5 allows you to define a default value for the argument.
How do you get information out of a function?
End of explanation
def some_function(a,b):
"""Sum up a and b."""
v = a + b
return v
v = some_function(1,2)
print(v)
def some_function(a,b):
"""Sum up a and b."""
v = a + b
return v
Explanation: Summarize
How do you get information out of the function?
By putting return at the end of a function, you can capture output.
Modify
Alter the program below so it returns the calculated value.
End of explanation
def some_function(a):
"""Multiply a by 5 and 2."""
return a*5, a*2
x, y = some_function(5)
print(x)
Explanation: To return multiple values, use commas:
End of explanation
## Code here
def get_root(a,b,c):
disc = math.sqrt(b**2 - 4*a*c)
return (-b + disc)/(2*a), (-b - disc)/(2*a)
print(get_root(1,4,3))
print(get_root(2,8,1))
Explanation: Implement
Write a function that uses the quadratic equation to find both roots of a polynomial for any $a$, $b$, and $c$.
End of explanation |
2,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cppyy Tutorial
(Modified from Enrico Guiraud's cppyy tutorial.)
This tutorial introduces the basic concepts for using cppyy, the automatic Python-C++ generator. To install cppyy on your system, simply run (this may take a while as it will pull in and compile a custom version of LLVM)
Step2: There are three layers to cppyy
Step3: We now have a class 'Integer1'. Note that this class exists on the C++ side and has to follow C++ rules. For example, whereas in Python we can simply redefine a class, we can't do that in C++. Therefore, we will number the Integer classes as we go along, to be able to extend the example as we see fit.
Python classes are constructed dynamically. It doesn't matter where or how they are defined, whether in a Python script, "compiled" into a C extension module, or otherwise. Cppyy takes advantage of this fact to generate bindings on-the-fly. This leads to performance advantages for large libraries with thousands of C++ classes; general distribution advantages since, other than the module cppyy itself, no code depends on any specific version of Python; and it enables, through the Cling backend, interactive access to C++.
To access our first class, find it in gbl, the global namespace
Step4: Namespaces have similarities to modules, so we could have imported the class as well.
Bound C++ classes are first-class Python objects. We can instantiate them, use normal Python introspection tools, call help(), they raise Python exceptions on failure, manage memory through Python's ref-counting and garbage collection, etc., etc. Furthermore, we can use them in conjunction with other C++ classes.
Step5: Hum, that doesn't look very pretty. However, since Integer1 is now a Python class we can decorate it, with a custom __repr__ function (we'll punt on the vector and instead convert it to a Python list for printing).
Step8: Pythonizations
As we have seen so far, automatic bindings are simple and easy to use. However, even though they are first-class Python objects, they do have some rough C++ edges left. There is some pythonization going on in the background
Step10: Class Hierarchies
Both Python and C++ support multiple programming paradigms, making it relatively straightforward to map language features (e.g. class inheritance, free functions, etc.); many other features can be cleanly hidden, merely because the syntax is very similar or otherwise natural (e.g. overloading, abstract classes, static data members, etc.); and yet others map gracefully because their semantic intent is expressed clearly in the syntax (e.g. smart pointers, STL, etc.).
The following presents a range of C++ features that map naturally, and exercises them in Python.
Step12: Modern C++
As C++ matures, more and more semantic intent (such as object ownership) is expressed in the syntax. This is not for the benefit of bindings generators, but for the poor programmer having to read the code. Still, a bindings generator benefits greatly from this increased expression.
import cppyy
Explanation: Cppyy Tutorial
(Modified from Enrico Guiraud's cppyy tutorial.)
This tutorial introduces the basic concepts for using cppyy, the automatic Python-C++ generator. To install cppyy on your system, simply run (this may take a while as it will pull in and compile a custom version of LLVM):
$ pip install cppyy
For further details on the installation, as well as the location of binary wheels, see:
http://cppyy.readthedocs.io/en/latest/installation.html
To start, import module cppyy. All functionality, including using bound classes, always starts at this top-level.
End of explanation
cppyy.cppdef("""
class Integer1 {
public:
Integer1(int i) : m_data(i) {}
int m_data;
};""")
Explanation: There are three layers to cppyy: at the top there are the module 'gbl' (the global namespace), a range of helper functions, and a set of sub-modules (such as py) that serve specific purposes. Let's start with defining a little helper class in C++ using the helper function cppdef, to make the example more interesting:
End of explanation
print(cppyy.gbl.Integer1)
Explanation: We now have a class 'Integer1'. Note that this class exists on the C++ side and has to follow C++ rules. For example, whereas in Python we can simply redefine a class, we can't do that in C++. Therefore, we will number the Integer classes as we go along, to be able to extend the example as we see fit.
Python classes are constructed dynamically. It doesn't matter where or how they are defined, whether in a Python script, "compiled" into a C extension module, or otherwise. Cppyy takes advantage of this fact to generate bindings on-the-fly. This leads to performance advantages for large libraries with thousands of C++ classes; general distribution advantages since, other than the module cppyy itself, no code depends on any specific version of Python; and it enables, through the Cling backend, interactive access to C++.
To access our first class, find it in gbl, the global namespace:
End of explanation
# for convenience, bring Integer1 into __main__
from cppyy.gbl import Integer1
# create a C++ Integer1 object
i = Integer1(42)
# use Python inspection
print("Variable has an 'm_data' data member?", hasattr(i, 'm_data') and 'Yes!' or 'No!')
print("Variable is an instance of int?", isinstance(i, int) and 'Yes!' or 'No!')
print("Variable is an instance of Integer1?", isinstance(i, Integer1) and 'Yes!' or 'No!')
# pull in the STL vector class
from cppyy.gbl.std import vector
# create a vector of Integer1 objects; note how [] instantiates the template and () instantiates the class
v = vector[Integer1]()
# populate it
v += [Integer1(j) for j in range(10)]
# display our vector
print(v)
Explanation: Namespaces have similarities to modules, so we could have imported the class as well.
Bound C++ classes are first-class Python objects. We can instantiate them, use normal Python introspection tools, call help(), they raise Python exceptions on failure, manage memory through Python's ref-counting and garbage collection, etc., etc. Furthermore, we can use them in conjunction with other C++ classes.
End of explanation
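A small supplementary example (not from the original tutorial) showing the error-handling claim above in action: a call that C++ cannot satisfy surfaces as an ordinary Python exception.
```python
# Supplementary sketch: failed overload resolution raises a regular TypeError.
try:
    Integer1("not an int")
except TypeError as e:
    print('Caught a Python exception:', e)
```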
# add a custom conversion for printing
Integer1.__repr__ = lambda self: repr(self.m_data)
# now try again (note the conversion of the vector to a Python list)
print(list(v))
Explanation: Hum, that doesn't look very pretty. However, since Integer1 is now a Python class we can decorate it, with a custom __repr__ function (we'll punt on the vector and instead convert it to a Python list for printing).
End of explanation
# create an Integer2 class, living in namespace Math
cppyy.cppdef("""
namespace Math {
class Integer2 : public Integer1 {
public:
using Integer1::Integer1;
operator int() { return m_data; }
};
}""")
# prepare a pythonizor
def pythonizor(klass, name):
# A pythonizor receives the freshly prepared bound C++ class, and a name stripped down to
# the namespace the pythonizor is applied. Also accessible are klass.__name__ (for the
# Python name) and klass.__cpp_name__ (for the C++ name)
if name == 'Integer2':
klass.__repr__ = lambda self: repr(self.m_data)
# install the pythonizor as a callback on namespace 'Math' (default is the global namespace)
cppyy.py.add_pythonization(pythonizor, 'Math')
# when we next get the Integer2 class, it will have been decorated
Integer2 = cppyy.gbl.Math.Integer2 # first time a new namespace is used, it can not be imported from
v2 = vector[Integer2]()
v2 += [Integer2(j) for j in range(10)]
# now test the effect of the pythonizor:
print(list(v2))
# in addition, Integer2 has a conversion function, which is automatically recognized and pythonized
i2 = Integer2(13)
print("Converted Integer2 variable:", int(i2))
# continue the decoration on the C++ side, by adding an operator+ overload
cppyy.cppdef("""
namespace Math {
Integer2 operator+(const Integer2& left, const Integer1& right) {
return left.m_data + right.m_data;
}
}""")
# now use that fresh decoration (it will be located and bound on use):
k = i2 + i
print(k, i2.m_data + i.m_data)
Explanation: Pythonizations
As we have seen so far, automatic bindings are simple and easy to use. However, even though they are first-class Python objects, they do have some rough C++ edges left. There is some pythonization going on in the background: the vector, for example, played nice with += and the list conversion. But for presenting your own classes to end-users, specific pythonizations are desirable. To have this work correctly with lazy binding, a callback-based API exists.
Now, it's too late for Integer1, so let's create Integer2, which lives in a namespace and in addition has a conversion feature.
End of explanation
# create some animals to play with
cppyy.cppdef("""
namespace Zoo {
enum EAnimal { eLion, eMouse };
class Animal {
public:
virtual ~Animal() {}
virtual std::string make_sound() = 0;
};
class Lion : public Animal {
public:
virtual std::string make_sound() { return s_lion_sound; }
static std::string s_lion_sound;
};
std::string Lion::s_lion_sound = "growl!";
class Mouse : public Animal {
public:
virtual std::string make_sound() { return "peep!"; }
};
Animal* release_animal(EAnimal animal) {
if (animal == eLion) return new Lion{};
if (animal == eMouse) return new Mouse{};
return nullptr;
}
std::string identify_animal(Lion*) {
return "the animal is a lion";
}
std::string identify_animal(Mouse*) {
return "the animal is a mouse";
}
}
""")
# pull in the Zoo (after which we can import from it)
Zoo = cppyy.gbl.Zoo
# pythonize the animal release function to take ownership on return
Zoo.release_animal.__creates__ = True
# abstract base classes can not be instantiated:
try:
animal = Zoo.Animal()
except TypeError as e:
print('Failed:', e, '\n')
# derived classes can be inspected in the same class hierarchy on the Python side
print('A Lion is an Animal?', issubclass(Zoo.Lion, Zoo.Animal) and 'Yes!' or 'No!', '\n')
# returned pointer types are auto-casted to the lowest known derived type:
mouse = Zoo.release_animal(Zoo.eMouse)
print('Type of mouse:', type(mouse))
lion = Zoo.release_animal(Zoo.eLion)
print('Type of lion:', type(lion), '\n')
# as pythonized, the ownership of the return value from release_animal is Python's
print("Does Python own the 'lion'?", lion.__python_owns__ and 'Yes!' or 'No!')
print("Does Python own the 'mouse'?", mouse.__python_owns__ and 'Yes!' or 'No!', '\n')
# virtual functions work as expected:
print('The mouse says:', mouse.make_sound())
print('The lion says:', lion.make_sound(), '\n')
# now change what the lion says through its static (class) variable
Zoo.Lion.s_lion_sound = "mooh!"
print('The lion says:', lion.make_sound(), '\n')
# overloads are combined into a single function on the Python side and resolved dynamically
print("Identification of \'mouse\':", Zoo.identify_animal(mouse))
print("Identification of \'lion\':", Zoo.identify_animal(lion))
Explanation: Class Hierarchies
Both Python and C++ support multiple programming paradigms, making it relatively straightforward to map language features (e.g. class inheritance, free functions, etc.); many other features can be cleanly hidden, merely because the syntax is very similar or otherwise natural (e.g. overloading, abstract classes, static data members, etc.); and yet others map gracefully because their semantic intent is expressed clearly in the syntax (e.g. smart pointers, STL, etc.).
The following presents a range of C++ features that map naturally, and exercises them in Python.
End of explanation
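A supplementary sketch (an addition, assuming a reasonably recent cppyy with cross-inheritance support): the mapping also works in the reverse direction, i.e. Python classes can derive from bound C++ classes and override their virtual methods.
```python
# Supplementary sketch: cross-inheritance from the abstract C++ Animal class.
class Snake(Zoo.Animal):
    def make_sound(self):
        return 'sss!'

snake = Snake()
print('The snake says:', snake.make_sound())
print('A Snake is an Animal?', isinstance(snake, Zoo.Animal) and 'Yes!' or 'No!')
```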
cppyy.cppdef("""
namespace Zoo {
std::shared_ptr<Lion> free_lion{new Lion{}};
std::string identify_animal_smart(std::shared_ptr<Lion>& smart) {
return "the animal is a lion";
}
}
""")
# shared pointers are presented transparently as the wrapped type
print("Type of the 'free_lion' global:", type(Zoo.free_lion).__name__)
# if need be, the smart pointer is accessible with a helper
smart_lion = Zoo.free_lion.__smartptr__()
print("Type of the 'free_lion' smart ptr:", type(smart_lion).__name__)
# pass through functions that expect a naked pointer or smart pointer
print("Dumb passing: ", Zoo.identify_animal(Zoo.free_lion))
print("Smart passing:", Zoo.identify_animal_smart(Zoo.free_lion))
Explanation: Modern C++
As C++ matures, more and more semantic intent (such as object ownership) is expressed in the syntax. This is not for the benefit of bindings generators, but for the poor programmer having to read the code. Still, a bindings generator benefits greatly from this increased expression.
End of explanation |
2,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boosting a decision stump
The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Use SFrames to do some feature engineering.
Modify the decision trees to incorporate weights.
Implement Adaboost ensembling.
Use your implementation of Adaboost to train a boosted decision stump ensemble.
Evaluate the effect of boosting (adding more decision stumps) on performance of the model.
Explore the robustness of Adaboost to overfitting.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
Step1: Getting the data ready
We will be using the same LendingClub dataset as in the previous assignment.
Step2: Extracting the target and the feature columns
We will now repeat some of the feature processing steps that we saw in the previous assignment
Step3: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.
Step4: Note
Step5: Let's see what the feature columns look like now
Step6: Train-test split
We split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
Step7: Weighted decision trees
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Weighted error definition
Consider a model with $N$ data points with
Step8: Checkpoint
Step9: Recall that the classification error is defined as follows
Step10: Checkpoint
Step11: Note. If you get an exception in the line of "the logical filter has different size than the array", try upgrading your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}.
$$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split:
Step12: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions
Step13: Here is a recursive function to count the nodes in your tree
Step14: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step15: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
{'is_leaf'
Step16: Making predictions with a weighted decision tree
We give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.
Step17: Evaluating the tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows
Step18: Example
Step19: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
Step20: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data
Step21: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.
* The points with zero weights are basically ignored during training.
Quiz Question
Step22: Checking your Adaboost code
Train an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters
Step23: Here is what the first stump looks like
Step24: Here is what the next stump looks like
Step25: If your Adaboost is correctly implemented, the following things should be true
Step26: Making predictions
Recall from the lecture that in order to make predictions, we use the following formula
Step27: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble
Step28: Quiz Question
Step29: Computing training error at the end of each iteration
Now, we will compute the classification error on the train_data and see how it is reduced as trees are added.
Step30: Visualizing training error vs number of iterations
We have provided you with a simple code snippet that plots classification error with the number of iterations.
Step31: Quiz Question
Step32: Visualize both the training and test errors
Now, let us plot the training & test error with the number of iterations. | Python Code:
import graphlab
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Boosting a decision stump
The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Use SFrames to do some feature engineering.
Modify the decision trees to incorporate weights.
Implement Adaboost ensembling.
Use your implementation of Adaboost to train a boosted decision stump ensemble.
Evaluate the effect of boosting (adding more decision stumps) on performance of the model.
Explore the robustness of Adaboost to overfitting.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
loans = graphlab.SFrame('lending-club-data.gl/')
Explanation: Getting the data ready
We will be using the same LendingClub dataset as in the previous assignment.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans.remove_column('bad_loans')
target = 'safe_loans'
loans = loans[features + [target]]
Explanation: Extracting the target and the feature columns
We will now repeat some of the feature processing steps that we saw in the previous assignment:
First, we re-assign the target to have +1 as a safe (good) loan, and -1 as a risky (bad) loan.
Next, we select four categorical features:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
End of explanation
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
loans_data = risky_loans_raw.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.
End of explanation
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
In this assignment, we will work with binary decision trees. Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding.
We can do so with the following code block (see the first assignments for more details):
End of explanation
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
Explanation: Let's see what the feature columns look like now:
End of explanation
train_data, test_data = loans_data.random_split(0.8, seed=1)
Explanation: Train-test split
We split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
def intermediate_node_weighted_mistakes(labels_in_node, data_weights):
# Sum the weights of all entries with label +1
total_weight_positive = sum(data_weights[labels_in_node == +1])
# Weight of mistakes for predicting all -1's is equal to the sum above
### YOUR CODE HERE
weighted_mistakes_all_negative = total_weight_positive
# Sum the weights of all entries with label -1
### YOUR CODE HERE
total_weight_negative = sum(data_weights[labels_in_node == -1])
# Weight of mistakes for predicting all +1's is equal to the sum above
### YOUR CODE HERE
weighted_mistakes_all_positive = total_weight_negative
# Return the tuple (weight, class_label) representing the lower of the two weights
# class_label should be an integer of value +1 or -1.
# If the two weights are identical, return (weighted_mistakes_all_positive,+1)
### YOUR CODE HERE
if weighted_mistakes_all_positive <= weighted_mistakes_all_negative:
return weighted_mistakes_all_positive, +1
else:
return weighted_mistakes_all_negative, -1
Explanation: Weighted decision trees
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Weighted error definition
Consider a model with $N$ data points with:
* Predictions $\hat{y}_1 ... \hat{y}_n$
* Target $y_1 ... y_n$
* Data point weights $\alpha_1 ... \alpha_n$.
Then the weighted error is defined by:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i}
$$
where $1[y_i \neq \hat{y_i}]$ is an indicator function that is set to $1$ if $y_i \neq \hat{y_i}$.
Write a function to compute weight of mistakes
Write a function that calculates the weight of mistakes for making the "weighted-majority" predictions for a dataset. The function accepts two inputs:
* labels_in_node: Targets $y_1 ... y_n$
* data_weights: Data point weights $\alpha_1 ... \alpha_n$
We are interested in computing the (total) weight of mistakes, i.e.
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}].
$$
This quantity is analogous to the number of mistakes, except that each mistake now carries different weight. It is related to the weighted error in the following way:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}
$$
The function intermediate_node_weighted_mistakes should first compute two weights:
* $\mathrm{WM}_{-1}$: weight of mistakes when all predictions are $\hat{y}_i = -1$ i.e $\mathrm{WM}(\mathbf{\alpha}, \mathbf{-1})$
* $\mathrm{WM}_{+1}$: weight of mistakes when all predictions are $\hat{y}_i = +1$ i.e $\mathrm{WM}(\mathbf{\alpha}, \mathbf{+1})$
where $\mathbf{-1}$ and $\mathbf{+1}$ are vectors where all values are -1 and +1 respectively.
After computing $\mathrm{WM}_{-1}$ and $\mathrm{WM}_{+1}$, the function intermediate_node_weighted_mistakes should return the lower of the two weights of mistakes, along with the class associated with that weight. We have provided a skeleton for you with YOUR CODE HERE to be filled in several places.
End of explanation
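Before running the test, a quick supplementary hand check of the example used in the checkpoint below:
```python
# Supplementary hand check (plain Python, mirrors the checkpoint below):
labels  = [-1, -1, +1, +1, +1]
weights = [1., 2., .5, 1., 1.]
wm_all_minus_one = sum(w for y, w in zip(labels, weights) if y == +1)  # mistakes of predicting all -1
wm_all_plus_one  = sum(w for y, w in zip(labels, weights) if y == -1)  # mistakes of predicting all +1
print wm_all_minus_one, wm_all_plus_one   # 2.5 3.0  -> expect (2.5, -1)
```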
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
example_data_weights = graphlab.SArray([1., 2., .5, 1., 1.])
if intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: Checkpoint: Test your intermediate_node_weighted_mistakes function, run the following cell:
End of explanation
# If the data is identical in each feature, this function should return None
def best_splitting_feature(data, features, target, data_weights):
# print data_weights
# These variables will keep track of the best feature and the corresponding error
best_feature = None
best_error = float('+inf')
num_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
# The right split will have all data points where the feature value is 1
left_split = data[data[feature] == 0]
right_split = data[data[feature] == 1]
# Apply the same filtering to data_weights to create left_data_weights, right_data_weights
## YOUR CODE HERE
left_data_weights = data_weights[data[feature] == 0]
right_data_weights = data_weights[data[feature] == 1]
# DIFFERENT HERE
# Calculate the weight of mistakes for left and right sides
## YOUR CODE HERE
left_weighted_mistakes, left_class = intermediate_node_weighted_mistakes(left_split[target], left_data_weights)
right_weighted_mistakes, right_class = intermediate_node_weighted_mistakes(right_split[target], right_data_weights)
# DIFFERENT HERE
# Compute weighted error by computing
# ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]
## YOUR CODE HERE
error = (left_weighted_mistakes + right_weighted_mistakes) / (sum(left_data_weights) + sum(right_data_weights))
# If this is the best error we have found so far, store the feature and the error
if error < best_error:
best_feature = feature
best_error = error
# Return the best feature we found
return best_feature
Explanation: Recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
Quiz Question: If we set the weights $\mathbf{\alpha} = 1$ for all data points, how is the weight of mistakes $\mbox{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ related to the classification error?
Function to pick best feature to split on
We continue modifying our decision tree code from the earlier assignment to incorporate weighting of individual data points. The next step is to pick the best feature to split on.
The best_splitting_feature function is similar to the one from the earlier assignment with two minor modifications:
1. The function best_splitting_feature should now accept an extra parameter data_weights to take account of weights of data points.
2. Instead of computing the number of mistakes in the left and right side of the split, we compute the weight of mistakes for both sides, add up the two weights, and divide it by the total weight of the data.
Complete the following function. Comments starting with DIFFERENT HERE mark the sections where the weighted version differs from the original implementation.
End of explanation
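As a small supplementary check (not part of the assignment), the relationship behind the quiz question can be seen numerically: with all weights equal to 1, the weight of mistakes is simply the number of mistakes.
```python
# Supplementary check: with unit weights, WM reduces to a plain mistake count.
toy_labels = graphlab.SArray([-1, -1, 1, 1, 1])
unit_weights = graphlab.SArray([1., 1., 1., 1., 1.])
print intermediate_node_weighted_mistakes(toy_labels, unit_weights)  # (2.0, +1)
```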
example_data_weights = graphlab.SArray(len(train_data)* [1.5])
if best_splitting_feature(train_data, features, target, example_data_weights) == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: Checkpoint: Now, we have another checkpoint to make sure you are on the right track.
End of explanation
def create_leaf(target_values, data_weights):
# Create a leaf node
leaf = {'splitting_feature' : None,
'is_leaf': True}
# Computed weight of mistakes.
weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)
# Store the predicted class (1 or -1) in leaf['prediction']
leaf['prediction'] = best_class ## YOUR CODE HERE
return leaf
Explanation: Note. If you get an exception in the line of "the logical filter has different size than the array", try upgrading your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}.
$$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split:
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]
= \sum_{\mathrm{left}} \alpha_i \times 1[y_i \neq \hat{y_i}]
+ \sum_{\mathrm{right}} \alpha_i \times 1[y_i \neq \hat{y_i}]\
= \mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})
$$
We then divide through by the total weight of all data points to obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \frac{\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})}{\sum_{i=1}^{n} \alpha_i}
$$
Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Recall from the previous assignments that each node in the decision tree is represented as a dictionary which contains the following keys:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'features_remaining' : List of features that are possible splits.
}
Let us start with a function that creates a leaf node given a set of target values:
End of explanation
def weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1. Error is 0.
if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:
print "Stopping condition 1 reached."
return create_leaf(target_values, data_weights)
# Stopping condition 2. No more features.
if remaining_features == []:
print "Stopping condition 2 reached."
return create_leaf(target_values, data_weights)
# Additional stopping condition (limit tree depth)
if current_depth > max_depth:
print "Reached maximum depth. Stopping for now."
return create_leaf(target_values, data_weights)
# If all the datapoints are the same, splitting_feature will be None. Create a leaf
splitting_feature = best_splitting_feature(data, features, target, data_weights)
remaining_features.remove(splitting_feature)
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
left_data_weights = data_weights[data[splitting_feature] == 0]
right_data_weights = data_weights[data[splitting_feature] == 1]
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target], data_weights)
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target], data_weights)
# Repeat (recurse) on left and right subtrees
left_tree = weighted_decision_tree_create(
left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)
right_tree = weighted_decision_tree_create(
right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions:
1. All data points in a node are from the same class.
2. No more features to split on.
3. Stop growing the tree when the tree depth reaches max_depth.
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a recursive function to count the nodes in your tree:
End of explanation
example_data_weights = graphlab.SArray([1.0 for i in range(len(train_data))])
small_data_decision_tree = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
if count_nodes(small_data_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found:', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there: 7'
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
small_data_decision_tree
Explanation: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
{'is_leaf': False,
'left': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'splitting_feature': 'grade.A'
},
'prediction': None,
'right': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'splitting_feature': 'grade.D'
},
'splitting_feature': 'term. 36 months'
}
End of explanation
def classify(tree, x, annotate = False):
# If the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# Split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
Explanation: Making predictions with a weighted decision tree
We give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.
End of explanation
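For example, you can classify a single test data point and trace the path it takes through the tree:
print classify(small_data_decision_tree, test_data[0], annotate=True)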
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error
return (prediction != data[target]).sum() / float(len(data))
evaluate_classification_error(small_data_decision_tree, test_data)
Explanation: Evaluating the tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
The function called evaluate_classification_error takes in as input:
1. tree (as described above)
2. data (an SFrame)
The function does not change because of adding data point weights.
End of explanation
# Assign weights
example_data_weights = graphlab.SArray([1.] * 10 + [0.]*(len(train_data) - 20) + [1.] * 10)
# Train a weighted decision tree model.
small_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
Explanation: Example: Training a weighted decision tree
To build intuition on how weighted data points affect the tree being built, consider the following:
Suppose we only care about making good predictions for the first 10 and last 10 items in train_data, we assign weights:
* 1 to the last 10 items
* 1 to the first 10 items
* and 0 to the rest.
Let us fit a weighted decision tree with max_depth = 2.
End of explanation
subset_20 = train_data.head(10).append(train_data.tail(10))
evaluate_classification_error(small_data_decision_tree_subset_20, subset_20)
Explanation: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
End of explanation
evaluate_classification_error(small_data_decision_tree_subset_20, train_data)
Explanation: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire training set train_data:
End of explanation
from math import log
from math import exp
def adaboost_with_tree_stumps(data, features, target, num_tree_stumps):
# start with unweighted data
alpha = graphlab.SArray([1.]*len(data))
weights = []
tree_stumps = []
target_values = data[target]
for t in xrange(num_tree_stumps):
print '====================================================='
print 'Adaboost Iteration %d' % t
print '====================================================='
# Learn a weighted decision tree stump. Use max_depth=1
tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)
tree_stumps.append(tree_stump)
# Make predictions
predictions = data.apply(lambda x: classify(tree_stump, x))
# Produce a Boolean array indicating whether
# each data point was correctly classified
is_correct = predictions == target_values
is_wrong = predictions != target_values
# Compute weighted error
# YOUR CODE HERE
weighted_error = sum(alpha * is_wrong) / sum(alpha)
# Compute model coefficient using weighted error
# YOUR CODE HERE
weight = .5 * log((1 - weighted_error) / weighted_error)
weights.append(weight)
# Adjust weights on data point
adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))
# Scale alpha by multiplying by adjustment
# Then normalize data points weights
## YOUR CODE HERE
alpha *= adjustment
alpha /= sum(alpha)
return weights, tree_stumps
Explanation: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.
* The points with zero weights are basically ignored during training.
Quiz Question: Will you get the same model as small_data_decision_tree_subset_20 if you trained a decision tree with only the 20 data points with non-zero weights from the set of points in subset_20?
Implementing your own Adaboost (on decision stumps)
Now that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.
Recall from the lecture the procedure for Adaboost:
1. Start with unweighted data with $\alpha_j = 1$
2. For t = 1,...T:
* Learn $f_t(x)$ with data weights $\alpha_j$
* Compute coefficient $\hat{w}_t$:
$$\hat{w}_t = \frac{1}{2}\ln{\left(\frac{1- \mbox{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\mbox{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}\right)}$$
* Re-compute weights $\alpha_j$:
$$\alpha_j \gets \begin{cases}
\alpha_j \exp{(-\hat{w}_t)} & \text{ if }f_t(x_j) = y_j\\
\alpha_j \exp{(\hat{w}_t)} & \text{ if }f_t(x_j) \neq y_j
\end{cases}$$
* Normalize weights $\alpha_j$:
$$\alpha_j \gets \frac{\alpha_j}{\sum_{i=1}^{N}{\alpha_i}} $$
Complete the skeleton for the following code to implement adaboost_with_tree_stumps. Fill in the places with YOUR CODE HERE.
End of explanation
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)
def print_stump(tree):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('.')
print ' root'
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0]{1}[{0} == 1] '.format(split_name, ' '*(27-len(split_name)))
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
Explanation: Checking your Adaboost code
Train an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 2
End of explanation
print_stump(tree_stumps[0])
Explanation: Here is what the first stump looks like:
End of explanation
print_stump(tree_stumps[1])
print stump_weights
Explanation: Here is what the next stump looks like:
End of explanation
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features,
target, num_tree_stumps=10)
Explanation: If your Adaboost is correctly implemented, the following things should be true:
tree_stumps[0] should split on term. 36 months with the prediction -1 on the left and +1 on the right.
tree_stumps[1] should split on grade.A with the prediction -1 on the left and +1 on the right.
Weights should be approximately [0.158, 0.177]
Reminders
- Stump weights ($\mathbf{\hat{w}}$) and data point weights ($\mathbf{\alpha}$) are two different concepts.
- Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
- Data point weights ($\mathbf{\alpha}$) tell you how important each data point is while training a decision stump.
Training a boosted ensemble of 10 stumps
Let us train an ensemble of 10 decision tree stumps with Adaboost. We run the adaboost_with_tree_stumps function with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 10
End of explanation
def predict_adaboost(stump_weights, tree_stumps, data):
scores = graphlab.SArray([0.]*len(data))
for i, tree_stump in enumerate(tree_stumps):
predictions = data.apply(lambda x: classify(tree_stump, x))
# Accumulate predictions on the scores array
# YOUR CODE HERE
scores += stump_weights[i] * predictions
return scores.apply(lambda score : +1 if score > 0 else -1)
predictions = predict_adaboost(stump_weights, tree_stumps, test_data)
accuracy = graphlab.evaluation.accuracy(test_data[target], predictions)
print 'Accuracy of 10-component ensemble = %s' % accuracy
Explanation: Making predictions
Recall from the lecture that in order to make predictions, we use the following formula:
$$
\hat{y} = sign\left(\sum_{t=1}^T \hat{w}_t f_t(x)\right)
$$
We need to do the following things:
- Compute the predictions $f_t(x)$ using the $t$-th decision tree
- Compute $\hat{w}_t f_t(x)$ by multiplying the stump_weights with the predictions $f_t(x)$ from the decision trees
- Sum the weighted predictions over each stump in the ensemble.
Complete the following skeleton for making predictions:
End of explanation
stump_weights
Explanation: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble:
End of explanation
# this may take a while...
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data,
features, target, num_tree_stumps=30)
Explanation: Quiz Question: Are the weights monotonically decreasing, monotonically increasing, or neither?
Reminder: Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
Performance plots
In this section, we will try to reproduce some of the performance plots discussed in the lecture.
How does accuracy change with adding stumps to the ensemble?
We will now train an ensemble with:
* train_data
* features
* target
* num_tree_stumps = 30
Once we are done with this, we will then do the following:
* Compute the classification error at the end of each iteration.
* Plot a curve of classification error vs iteration.
First, let's train the model.
End of explanation
error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)
error = 1.0 - graphlab.evaluation.accuracy(train_data[target], predictions)
error_all.append(error)
print "Iteration %s, training error = %s" % (n, error_all[n-1])
Explanation: Computing training error at the end of each iteration
Now, we will compute the classification error on the train_data and see how it is reduced as trees are added.
End of explanation
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size': 16})
Explanation: Visualizing training error vs number of iterations
We have provided you with a simple code snippet that plots classification error with the number of iterations.
End of explanation
test_error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)
error = 1.0 - graphlab.evaluation.accuracy(test_data[target], predictions)
test_error_all.append(error)
print "Iteration %s, test error = %s" % (n, test_error_all[n-1])
Explanation: Quiz Question: Which of the following best describes a general trend in accuracy as we add more and more components? Answer based on the 30 components learned so far.
Training error goes down monotonically, i.e. the training error reduces with each iteration but never increases.
Training error goes down in general, with some ups and downs in the middle.
Training error goes up in general, with some ups and downs in the middle.
Training error goes down in the beginning, achieves the best error, and then goes up sharply.
None of the above
Evaluation on the test data
Performing well on the training data is cheating, so let's make sure it works on the test_data as well. Here, we will compute the classification error on the test_data at the end of each iteration.
End of explanation
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.rcParams.update({'font.size': 16})
plt.legend(loc='best', prop={'size':15})
plt.tight_layout()
Explanation: Visualize both the training and test errors
Now, let us plot the training & test error with the number of iterations.
End of explanation |
2,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a document similarity microservice for the Reuters-21578 dataset.
First download the Reuters-21578 dataset in JSON format into the local folder
Step1: Create a gensim LSI document similarity model
Step2: Run accuracy tests
Run a test over the document to compute average jaccard similarity to the 1-nearest neighbour for each document using the "tags" field of the meta data as the ground truth.
Step3: Run a test again but use the Annoy approximate nearest neighbour index that would have been built. Should be much faster.
Step4: Run single nearest neighbour query
Run a nearest neighbour query on a single document and print the title and tag meta data
Step5: Save recommender
Save the recommender to the filesystem in reuters_recommender folder
Step6: Start a microservice to serve the recommender | Python Code:
import json
import codecs
import os
docs = []
for filename in os.listdir("reuters-21578-json/data/full"):
f = open("reuters-21578-json/data/full/"+filename)
js = json.load(f)
for j in js:
if 'topics' in j and 'body' in j:
d = {}
d["id"] = j['id']
d["text"] = j['body'].replace("\n","")
d["title"] = j['title']
d["tags"] = ",".join(j['topics'])
docs.append(d)
print "loaded ",len(docs)," documents"
Explanation: Creating a document similarity microservice for the Reuters-21578 dataset.
First download the Reuters-21578 dataset in JSON format into the local folder:
bash
git clone https://github.com/fergiemcdowall/reuters-21578-json
The first step will be to convert this into the default corpus format we use:
End of explanation
from seldon.text import DocumentSimilarity,DefaultJsonCorpus
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
corpus = DefaultJsonCorpus(docs)
ds = DocumentSimilarity(model_type='gensim_lsi')
ds.fit(corpus)
print "done"
Explanation: Create a gensim LSI document similarity model
End of explanation
ds.score()
Explanation: Run accuracy tests
Run a test over the document to compute average jaccard similarity to the 1-nearest neighbour for each document using the "tags" field of the meta data as the ground truth.
End of explanation
ds.score(approx=True)
Explanation: Run a test again but use the Annoy approximate nearest neighbour index that would have been built. Should be much faster.
End of explanation
query_doc=6023
print "Query doc: ",ds.get_meta(query_doc)['title'],"Tagged:",ds.get_meta(query_doc)['tags']
neighbours = ds.nn(query_doc,k=5,translate_id=True,approx=True)
print neighbours
for (doc_id,_) in neighbours:
j = ds.get_meta(doc_id)
print "Doc id",doc_id,j['title'],"Tagged:",j['tags']
Explanation: Run single nearest neighbour query
Run a nearest neighbour query on a single document and print the title and tag meta data
End of explanation
import seldon
rw = seldon.Recommender_wrapper()
rw.save_recommender(ds,"reuters_recommender")
print "done"
Explanation: Save recommender
Save the recommender to the filesystem in reuters_recommender folder
End of explanation
from seldon.microservice import Microservices
m = Microservices()
app = m.create_recommendation_microservice("reuters_recommender")
app.run(host="0.0.0.0",port=5000,debug=False)
Explanation: Start a microservice to serve the recommender
End of explanation |
2,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
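In code, that resize-then-convolve pattern is just two calls, roughly like this (the tensor names and sizes here are only illustrative, not the exact layers you will build below):
# Illustrative only: upsample a narrow layer to 7x7, then convolve it
upsampled = tf.image.resize_nearest_neighbor(encoded, (7, 7))
conv = tf.layers.conv2d(upsampled, 8, (3, 3), padding='same', activation=tf.nn.relu)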
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion, using the suggested 32-32-16 depths
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
2,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: Download the sample training data.
In this tutorial, we will use the SST-2 (Stanford Sentiment Treebank) which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes
Step4: The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab \t character as its delimiter instead of a comma , in the CSV format.
Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.
| sentence | label | | | |
|-------------------------------------------------------------------------------------------|-------|---|---|---|
| hide new secretions from the parental units | 0 | | | |
| contains no wit , only labored gags | 0 | | | |
| that loves its characters and communicates something rather beautiful about human nature | 1 | | | |
| remains utterly satisfied to remain the same throughout | 0 | | | |
| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 | | | |
Next, we will load the dataset into a Pandas dataframe and change the current label names (0 and 1) to a more human-readable ones (negative and positive) and use them for model training.
Step5: Quickstart
There are five steps to train a text classification model
Step6: Model Maker also supports other model architectures such as BERT. If you are interested to learn about other architecture, see the Choose a model architecture for Text Classifier section below.
Step 2. Load the training and test data, then preprocess them according to a specific model_spec.
Model Maker can take input data in the CSV format. We will load the training and test dataset with the human-readable label name that were created earlier.
Each model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically executes the necessary preprocessing.
Step7: Step 3. Train the TensorFlow model with the training data.
The average word embedding model use batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.
Step8: Step 4. Evaluate the model with the test data.
After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.
As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.
Step9: Step 5. Export as a TensorFlow Lite model.
Let's export the text classification that we have trained in the TensorFlow Lite format. We will specify which folder to export the model.
By default, the float TFLite model is exported for the average word embedding model architecture.
Step10: You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.
This model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.
See the TFLite Text Classification sample app for more details on how the model is used in a working app.
Note 1
Step11: Load training data
You can upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https
Step12: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders.
Train a TensorFlow Model
Train a text classification model using the training data.
Note
Step13: Examine the detailed model structure.
Step14: Evaluate the model
Evaluate the model that we have just trained using the test data and measure the loss and accuracy value.
Step15: Export as a TensorFlow Lite model
Convert the trained model to TensorFlow Lite model format with metadata so that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.
In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster.
The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
Step16: The TensorFlow Lite model file can be integrated in a mobile app using the BertNLClassifier API in TensorFlow Lite Task Library. Please note that this is different from the NLClassifier API used to integrate the text classification trained with the average word vector model architecture.
The export formats can be one or a list of the following
Step17: You can evaluate the TFLite model with evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to TFLite format and apply quantization can affect its accuracy so it is recommended to evaluate the TFLite model accuracy before deployment.
Step18: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function comprises of the following steps
Step19: Customize the average word embedding model hyperparameters
You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecSpec class.
For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.
Step20: Get the preprocessed data.
Step21: Train the new model.
Step22: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs
Step23: Evaluate the newly retrained model with 20 training epochs.
Step24: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker
Explanation: Text classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial are positive and negative movie reviews.
Prerequisites
Install the required packages
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
import numpy as np
import os
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
Explanation: Import the required packages.
End of explanation
data_dir = tf.keras.utils.get_file(
fname='SST-2.zip',
origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Explanation: Download the sample training data.
In this tutorial, we will use the SST-2 (Stanford Sentiment Treebank) which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews.
End of explanation
import pandas as pd
def replace_label(original_file, new_file):
# Load the original file to pandas. We need to specify the separator as
# '\t' as the training data is stored in TSV format
df = pd.read_csv(original_file, sep='\t')
# Define how we want to change the label name
label_map = {0: 'negative', 1: 'positive'}
# Execute the label change
df.replace({'label': label_map}, inplace=True)
# Write the updated dataset to a new file
df.to_csv(new_file)
# Replace the label name for both the training and test dataset. Then write the
# updated CSV dataset to the current folder.
replace_label(os.path.join(os.path.join(data_dir, 'train.tsv')), 'train.csv')
replace_label(os.path.join(os.path.join(data_dir, 'dev.tsv')), 'dev.csv')
Explanation: The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab \t character as its delimiter instead of a comma , in the CSV format.
Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.
| sentence | label | | | |
|-------------------------------------------------------------------------------------------|-------|---|---|---|
| hide new secretions from the parental units | 0 | | | |
| contains no wit , only labored gags | 0 | | | |
| that loves its characters and communicates something rather beautiful about human nature | 1 | | | |
| remains utterly satisfied to remain the same throughout | 0 | | | |
| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 | | | |
Next, we will load the dataset into a Pandas dataframe and change the current label names (0 and 1) to a more human-readable ones (negative and positive) and use them for model training.
End of explanation
spec = model_spec.get('average_word_vec')
Explanation: Quickstart
There are five steps to train a text classification model:
Step 1. Choose a text classification model architecture.
Here we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.
End of explanation
train_data = DataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=spec,
is_training=True)
test_data = DataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=spec,
is_training=False)
Explanation: Model Maker also supports other model architectures such as BERT. If you are interested to learn about other architecture, see the Choose a model architecture for Text Classifier section below.
Step 2. Load the training and test data, then preprocess them according to a specific model_spec.
Model Maker can take input data in the CSV format. We will load the training and test dataset with the human-readable label name that were created earlier.
Each model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically executes the necessary preprocessing.
End of explanation
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
Explanation: Step 3. Train the TensorFlow model with the training data.
The average word embedding model use batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Step 4. Evaluate the model with the test data.
After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.
As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.
End of explanation
model.export(export_dir='average_word_vec')
Explanation: Step 5. Export as a TensorFlow Lite model.
Let's export the text classification model that we have trained in the TensorFlow Lite format. We will specify which folder to export the model to.
By default, the float TFLite model is exported for the average word embedding model architecture.
End of explanation
mb_spec = model_spec.get('mobilebert_classifier')
Explanation: You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.
This model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.
See the TFLite Text Classification sample app for more details on how the model is used in a working app.
Note 1: Android Studio Model Binding does not support text classification yet so please use the TensorFlow Lite Task Library.
Note 2: There is a model.json file in the same folder with the TFLite model. It contains the JSON representation of the metadata bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model. You don't need to download the model.json file as it is only for informational purpose and its content is already inside the TFLite file.
Note 3: If you train a text classification model using MobileBERT or BERT-Base architecture, you will need to use BertNLClassifier API instead to integrate the trained model into a mobile app.
The following sections walk through the example step by step to show more details.
Choose a model architecture for Text Classifier
Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models.
| Supported Model | Name of model_spec | Model Description | Model size |
|--------------------------|-------------------------|-----------------------------------------------------------------------------------------------------------------------|---------------------------------------------|
| Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1MB |
| MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB w/ quantization <br/> 100MB w/o quantization |
| BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB |
In the quick start, we have used the average word embedding model. Let's switch to MobileBERT to train a model with higher accuracy.
End of explanation
train_data = DataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=mb_spec,
is_training=True)
test_data = DataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=mb_spec,
is_training=False)
Explanation: Load training data
You can upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide.
To keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the DataLoader.from_csv method to load the data.
Please note that since we have changed the model architecture, we need to reload the training and test datasets to apply the new preprocessing logic.
End of explanation
model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)
Explanation: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which subfolders to load.
Train a TensorFlow Model
Train a text classification model using the training data.
Note: As MobileBERT is a complex model, each training epoch will take about 10 minutes on a Colab GPU. Please make sure that you are using a GPU runtime.
End of explanation
model.summary()
Explanation: Examine the detailed model structure.
End of explanation
loss, acc = model.evaluate(test_data)
Explanation: Evaluate the model
Evaluate the model that we have just trained using the test data and measure the loss and accuracy value.
End of explanation
model.export(export_dir='mobilebert/')
Explanation: Export as a TensorFlow Lite model
Convert the trained model to TensorFlow Lite model format with metadata so that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.
In many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster.
The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
End of explanation
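If you prefer a different scheme, Model Maker also accepts a quantization_config argument on export. A sketch using float16 quantization is shown below; treat the exact helper name as an assumption based on the library's documentation and double-check it against your installed version:
from tflite_model_maker.config import QuantizationConfig

# Sketch: export a float16-quantized model instead of the default dynamic range one
quant_config = QuantizationConfig.for_float16()
model.export(export_dir='mobilebert/', tflite_filename='model_fp16.tflite',
             quantization_config=quant_config)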
model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])
Explanation: The TensorFlow Lite model file can be integrated in a mobile app using the BertNLClassifier API in TensorFlow Lite Task Library. Please note that this is different from the NLClassifier API used to integrate the text classification trained with the average word vector model architecture.
The export formats can be one or a list of the following:
ExportFormat.TFLITE
ExportFormat.LABEL
ExportFormat.VOCAB
ExportFormat.SAVED_MODEL
By default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for better examination. For instance, exporting only the label file and vocab file as follows:
End of explanation
accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)
print('TFLite model accuracy: ', accuracy)
Explanation: You can evaluate the TFLite model with evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to TFLite format and apply quantization can affect its accuracy so it is recommended to evaluate the TFLite model accuracy before deployment.
End of explanation
new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
Explanation: Advanced Usage
The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function comprises of the following steps:
Creates the model for the text classifier according to model_spec.
Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
This section covers advanced usage topics like adjusting the model and the training hyperparameters.
Customize the MobileBERT model hyperparameters
The model parameters you can adjust are:
seq_len: Length of the sequence to feed into the model.
initializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean that specifies whether the pre-trained layer is trainable.
The training pipeline parameters you can adjust are:
model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.
dropout_rate: The dropout rate.
learning_rate: The initial learning rate for the Adam optimizer.
tpu: TPU address to connect to.
For instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text.
End of explanation
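With the longer sequence length, you would then reload the data against the new spec and retrain using the same calls as before — for example (the variable names below are just placeholders):
# Reload the data so that text is tokenized to 256 tokens, then train as before
long_seq_train_data = DataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=True)
long_seq_model = text_classifier.create(long_seq_train_data, model_spec=new_model_spec, epochs=3)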
new_model_spec = AverageWordVecSpec(wordvec_dim=32)
Explanation: Customize the average word embedding model hyperparameters
You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecSpec class.
For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.
End of explanation
new_train_data = DataLoader.from_csv(
filename='train.csv',
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
is_training=True)
Explanation: Get the preprocessed data.
End of explanation
model = text_classifier.create(new_train_data, model_spec=new_model_spec)
Explanation: Train the new model.
End of explanation
model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)
Explanation: Tune the training hyperparameters
You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,
epochs: more epochs could achieve better accuracy, but may lead to overfitting.
batch_size: the number of samples to use in one training step.
For example, you can train with more epochs.
End of explanation
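Analogously (a sketch with illustrative values, not executed as part of this flow), batch_size can be passed alongside epochs:
model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20, batch_size=64)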
new_test_data = DataLoader.from_csv(
filename='dev.csv',
text_column='sentence',
label_column='label',
model_spec=new_model_spec,
is_training=False)
loss, accuracy = model.evaluate(new_test_data)
Explanation: Evaluate the newly retrained model with 20 training epochs.
End of explanation
spec = model_spec.get('bert_classifier')
Explanation: Change the Model Architecture
You can change the model by changing the model_spec. The following shows how to change to BERT-Base model.
Change the model_spec to BERT-Base model for the text classifier.
End of explanation |
2,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas data reading
API
Read | Write
--- | ---
read_csv | to_csv
read_excel | to_excel
read_hdf | to_hdf
read_sql | to_sql
read_json | to_json
read_html | to_html
read_stata | to_stata
read_clipboard | to_clipboard
read_pickle | to_pickle
CSV file reading
Contents of the csv file:
white,read,blue,green,animal
1,5,2,3,cat
2,7,8,5,dog
3,3,6,7,horse
2,2,8,3,duck
4,4,2,1,mouse
Step1: Reading data without a header
1,5,2,3,cat
2,7,8,5,dog
3,3,6,7,horse
2,2,8,3,duck
4,4,2,1,mouse
Step2: You can specify the header explicitly
Step3: Create a DataFrame object with a hierarchical index by adding the index_col option. The data file format is:
colors,status,item1,item2,item3
black,up,3,4,6
black,down,2,6,7
white,up,5,5,5
white,down,3,3,2
red,up,2,2,2
red,down,1,1,4
Step4: Parsing TXT files with a regexp
Use a regular expression as the sep option to parse the data file.
Regex element | Meaning
--- | ---
. | any element except the newline character
\d | digit
\D | non-digit
\s | whitespace character
\S | non-whitespace character
\n | newline character
\t | tab character
\uxxxx | a Unicode character written in hexadecimal
The data file is separated by a random mix of tabs and spaces:
white red blue green
1 4 3 2
2 4 6 7
Step5: Reading data separated by letters
000end123aaa122
001end125aaa144
Step6: Reading a text file while skipping unnecessary rows
```
log file
this file has been generate by automatic system
white,red,blue,green,animal
12-feb-2015
```
Step7: Reading only part of the data from a TXT file
If you only want to read part of the file, you can explicitly specify which rows to parse using the nrows and skiprows options (skip the first rows, then read nrows=i rows).
Step8: Example:
For a column of data, accumulate the values every few rows and insert the resulting sums into a Series object.
Step9: Writing to files
to_csv(filename)
to_csv(filename,index=False,header=False)
to_csv(filename,na_rep='NaN')
Reading HTML files
Writing an HTML file
Step10: Creating a more complex DataFrame
Step11: Reading HTML tables
Step12: Reading XML files
This uses the third-party library lxml.
Step13: Reading Excel files
Step14: JSON data
Step15: HDF5 data
HDF (hierarchical data format) files store data hierarchically in a binary file.
Step16: pickle data
Step17: Database connection
ไปฅsqlite3ไธบไพไป็ป | Python Code:
import numpy as np
import pandas as pd
csvframe=pd.read_csv('myCSV_01.csv')
csvframe
# You can also read the data with read_table
pd.read_table('myCSV_01.csv',sep=',')
Explanation: Pandas data reading
API
Read | Write
--- | ---
read_csv | to_csv
read_excel | to_excel
read_hdf | to_hdf
read_sql | to_sql
read_json | to_json
read_html | to_html
read_stata | to_stata
read_clipboard | to_clipboard
read_pickle | to_pickle
CSV file reading
Contents of the csv file:
white,read,blue,green,animal
1,5,2,3,cat
2,7,8,5,dog
3,3,6,7,horse
2,2,8,3,duck
4,4,2,1,mouse
End of explanation
pd.read_csv('myCSV_02.csv',header=None)
Explanation: Reading data without a header
1,5,2,3,cat
2,7,8,5,dog
3,3,6,7,horse
2,2,8,3,duck
4,4,2,1,mouse
End of explanation
pd.read_csv('myCSV_02.csv',names=['white','red','blue','green','animal'])
Explanation: You can specify the header explicitly
End of explanation
pd.read_csv('myCSV_03.csv',index_col=['colors','status'])
Explanation: Create a DataFrame object with a hierarchical index by adding the index_col option. The data file format is:
colors,status,item1,item2,item3
black,up,3,4,6
black,down,2,6,7
white,up,5,5,5
white,down,3,3,2
red,up,2,2,2
red,down,1,1,4
End of explanation
pd.read_csv('myCSV_04.csv',sep='\s+')
Explanation: Parsing TXT files with a regexp
Use a regular expression as the sep option to parse the data file.
Regex element | Meaning
--- | ---
. | any element except the newline character
\d | digit
\D | non-digit
\s | whitespace character
\S | non-whitespace character
\n | newline character
\t | tab character
\uxxxx | a Unicode character written in hexadecimal
The data file is separated by a random mix of tabs and spaces:
white red blue green
1 4 3 2
2 4 6 7
End of explanation
pd.read_csv('myCSV_05.csv',sep='\D*',header=None,engine='python')
Explanation: Reading data separated by letters
000end123aaa122
001end125aaa144
End of explanation
pd.read_table('myCSV_06.csv',sep=',',skiprows=[0,1,3,6])
Explanation: Reading a text file while skipping unnecessary rows
```
log file
this file has been generate by automatic system
white,red,blue,green,animal
12-feb-2015:counting of animals inside the house
1,3,5,2,cat
2,4,8,5,dog
13-feb-2015:counting of animals inside the house
3,3,6,7,horse
2,2,8,3,duck
```
End of explanation
pd.read_csv('myCSV_02.csv',skiprows=[2],nrows=3,header=None)
Explanation: Reading only part of the data from a TXT file
If you only want to read part of the file, you can explicitly specify which rows to parse using the nrows and skiprows options (skip the first rows, then read nrows=i rows).
End of explanation
out = pd.Series()
i=0
pieces = pd.read_csv('myCSV_01.csv',chunksize=3)
for piece in pieces:
print piece
out.set_value(i,piece['white'].sum())
i += 1
out
Explanation: Example:
For a column of data, accumulate the values every few rows and insert the resulting sums into a Series object.
End of explanation
frame = pd.DataFrame(np.arange(4).reshape((2,2)))
print frame.to_html()
Explanation: Writing to files
to_csv(filename)
to_csv(filename,index=False,header=False)
to_csv(filename,na_rep='NaN')
Reading HTML files
Writing an HTML file
End of explanation
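A minimal sketch of the to_csv variants listed above (the file names are placeholders):
frame.to_csv('myFrame.csv')                               # default: keep index and header
frame.to_csv('myFrame_bare.csv', index=False, header=False)
frame.to_csv('myFrame_nan.csv', na_rep='NaN')             # write missing values as 'NaN'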
frame = pd.DataFrame(np.random.random((4,4)),
index=['white','black','red','blue'],
columns=['up','down','left','right'])
frame
s = ['<HTML>']
s.append('<HEAD><TITLE>MY DATAFRAME</TITLE></HEAD>')
s.append('<BODY>')
s.append(frame.to_html())
s.append('</BODY></HTML>')
html=''.join(s)
with open('myFrame.html','w') as html_file:
html_file.write(html)
Explanation: Creating a more complex DataFrame
End of explanation
web_frames = pd.read_html('myFrame.html')
web_frames[0]
# Use a URL as the argument
ranking = pd.read_html('http://www.meccanismocomplesso.org/en/meccanismo-complesso-sito-2/classifica-punteggio/')
ranking[0]
Explanation: Reading HTML tables
End of explanation
from lxml import objectify
xml = objectify.parse('books.xml')
xml
root =xml.getroot()
root.Book.Author
root.Book.PublishDate
root.getchildren()
[child.tag for child in root.Book.getchildren()]
[child.text for child in root.Book.getchildren()]
def etree2df(root):
column_names=[]
for i in range(0,len(root.getchildren()[0].getchildren())):
column_names.append(root.getchildren()[0].getchildren()[i].tag)
xml_frame = pd.DataFrame(columns=column_names)
for j in range(0,len(root.getchildren())):
obj = root.getchildren()[j].getchildren()
texts = []
for k in range(0,len(column_names)):
texts.append(obj[k].text)
row = dict(zip(column_names,texts))
row_s=pd.Series(row)
row_s.name=j
xml_frame = xml_frame.append(row_s)
return xml_frame
etree2df(root)
Explanation: Reading XML files
This uses the third-party library lxml.
End of explanation
pd.read_excel('data.xlsx')
pd.read_excel('data.xlsx','Sheet2')
frame = pd.DataFrame(np.random.random((4,4)),
index=['exp1','exp2','exp3','exp4'],
columns=['Jan2015','Feb2015','Mar2015','Apr2015'])
frame
frame.to_excel('data2.xlsx')
Explanation: Reading Excel files
End of explanation
frame = pd.DataFrame(np.arange(16).reshape((4,4)),
index=['white','black','red','blue'],
columns=['up','down','right','left'])
frame.to_json('frame.json')
# ่ฏปๅjson
pd.read_json('frame.json')
Explanation: JSON data
End of explanation
from pandas.io.pytables import HDFStore
store = HDFStore('mydata.h5')
store['obj1']=frame
store['obj1']
Explanation: HDF5 data
HDF (hierarchical data format) files store data hierarchically in a binary file.
End of explanation
frame.to_pickle('frame.pkl')
pd.read_pickle('frame.pkl')
Explanation: pickle data
End of explanation
frame=pd.DataFrame(np.arange(20).reshape((4,5)),
columns=['white','red','blue','black','green'])
frame
from sqlalchemy import create_engine
enegine=create_engine('sqlite:///foo.db')
frame.to_sql('colors',enegine)
pd.read_sql('colors',enegine)
Explanation: Database connection
Introduced using sqlite3 as an example.
End of explanation |
2,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Mining
First thing to do is to find the structure of the file. We can find a description in the "dblp.dtd" file and on the website
Step1: Observation of the data
Load the author-> publication dataset and plot the distribution of publications
Step2: We can observe that most of the authors have between 1 and 50 publications. A few authors have around 50 to 200 publications, which seems reasonable. However, the maximum is more than 1000, which is far too many. Let's investigate the outliers.
Here we can find some very important information about names
Step3: We will use 190 as a threshold. We won't remove to much authors (0.1%) and above 190 publications it seems too much | Python Code:
from importlib import reload
import xml_parser
reload(xml_parser)
from xml_parser import Xml_parser
#Xml_parser = Xml_parser().collect_data("../pmi_data")
Explanation: Data Mining
First thing to do is to find the structure of the file. We can find a description in the "dblp.dtd" file and on the website:
The children of the root represent the individual data records. There are two types of records: publication records and person records.
The xml files contains: article, inproceedings, proceedings, book, incollection, phdthesis, mastersthesis, www, person, data
A publication record can be:
article: An article from a journal or magazine.
inproceedings: A paper in a conference or workshop proceedings.
proceedings: The proceedings volume of a conference or workshop.
book: An authored monograph or an edited collection of articles.
incollection: A part or chapter in a monograph.
phdthesis: A PhD thesis.
mastersthesis: A Master's thesis.
Most of the publication contain author/editor and title fields, that the ones we are interested in
There are field used to represent the author
<www key="homepages/r/CJvanRijsbergen">
<author>C. J. van Rijsbergen</author>
<author>Cornelis Joost van Rijsbergen</author>
<author>Keith van Rijsbergen</author>
<title>Home Page</title>
<url>http://www.dcs.gla.ac.uk/~keith/</url>
</www>
Those field are called person record. An author can be used called by different name (John Doe, J.Doe, Doe) so those records group together the different representation of an author. We need to collect and build a mapping to map a name to the author id (key of the www field). Thanks to the website, we know that :
Person records always have the key-prefix "homepages/",
the record level tag is always "www", and they always
contain a title element with the text content "Home Page"
Also all the name are unique. If there are 2 homonyms, an id is appened to the name
We collect titles and authors of each publication (if 2 authors participate at the same publication, it gives one publication to each authors).
In the meantime we collect the person description (all name and id). At the end we will be able to merge the publication of J. Doe and John Doe if both map to the same author
Collect the data
To collect the data we use the class xml_parser. The function collect_data() iterates over the file and collects the mapping:
- author_name -> author_id
- author_name -> titles
finally it merges the 2 dicts together to create the:
- author_id -> titles
the function saves the 3 dictionnaries in the same folder as the original data
Uncomment the last line to parse the file, can take a few minutes (around 15min on my computer)
End of explanation
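A minimal sketch of the merge step described above (the dictionary names and entries are illustrative, not the actual internals of Xml_parser):
name_to_id = {'J. Doe': 'homepages/d/JDoe', 'John Doe': 'homepages/d/JDoe'}
name_to_titles = {'J. Doe': ['Paper A'], 'John Doe': ['Paper B']}
authorID_to_titles_sketch = {}
for name, titles in name_to_titles.items():
    # all name variants of one person map to the same author id, so their titles are merged
    authorID_to_titles_sketch.setdefault(name_to_id[name], []).extend(titles)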
authorID_to_titles = utils.load_pickle("../pmi_data/authorID_to_publications.p")
authorID_to_count = {k:len(v['titles']) for k,v in tqdm(authorID_to_titles.items())}
fig = plt.figure()
ax = fig.add_subplot(111)
data = list(authorID_to_count.values())
binwidth = int((max(data)-min(data))/20)
ax.hist(data,
bins=range(min(data), max(data) + binwidth, binwidth))
plt.show()
print("Max {0}, min {1}".format(max(data), min(data)))
Explanation: Observation of the data
Load the author-> publication dataset and plot the distribution of publications
End of explanation
def get_author_with_more_than(data, max_):
more_than = [k for k, v in data.items() if v >max_]
print("Authors with more than {0}: {1} ({2}%)".format(max_, len(more_than), round(len(more_than)/len(data)*100,4)))
get_author_with_more_than(authorID_to_count, 1010)
get_author_with_more_than(authorID_to_count, 500)
get_author_with_more_than(authorID_to_count, 300)
get_author_with_more_than(authorID_to_count, 200)
get_author_with_more_than(authorID_to_count, 190)
get_author_with_more_than(authorID_to_count, 50)
Explanation: We can observe that most of the authors have between 1 and 50 publications. A few authors have around 50 to 200 publications, which seems reasonable. However, the maximum is more than 1000, which is far too many. Let's investigate the outliers.
Here we can find some very important information about names:
- In the case of homonymous names, dblp aims to use the name version without any suffix (number after the name) as a person disambiguation page, and not as the bibliography of an actual person. Any publication listed on such a page has not been assigned to an actual author (i.e., someone with a homonym suffix number) yet.
- Since in the early days of dblp author name versions without a homonym suffix number had been used as normal bibliography pages, too, there can still be found examples of homonymous names where the entity without any suffix is still considered to model the bibliography of an actual person. Removing the last remnants of these examples by giving explicit suffix numbers is currently a work in progress at dblp.
- Currently, you could use the following rule of thumb to distinguish most cases adequately::
- If a person page has a homonymous name variant using the suffix "0001", and the page carries no additional person information like a home page link or an affiliation note => Drop it (disambiguation page)
- Otherwise :
- if the earliest homonym name starts with suffix "0002"
- there are additional pieces of author information given)
- The number of publication is between 0 and 50
=>then the page is likely to be intended to be the profile of an actual person
After investigation the notation of the data is quite messy. They are not consistent with the notification of the id: homepages/c/LeiChen0018, homepages/c/LeiChen-17, homepages/c/LeiChen18
So let's just try to apply the rule: if infos about the author ("note" or "url") keep it
Even by applying those rule it doesn't seems to remove all the disambiguation pages.
To keep it simple we will use a reasonable number of publications as threshold.
Observe the proportion of author above a limit:
End of explanation
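A sketch of the rule of thumb described above, assuming a hypothetical authorID_to_info mapping that kept the url/note fields of each person record (the notebook does not build such a mapping and instead falls back to the simpler publication-count threshold below):
# authorID_to_info is hypothetical: {author_id: {'url': ..., 'note': ...}}
kept = {author: v for author, v in authorID_to_titles.items()
        if authorID_to_info.get(author, {}).get('url') or authorID_to_info.get(author, {}).get('note')}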
authors_to_titles_clean= {author:v['titles'] for author, v in tqdm(authorID_to_titles.items()) if len(v['titles'])<=190}
authorID_to_count = {k:len(titles) for k,titles in tqdm(authors_to_titles_clean.items())}
fig = plt.figure()
ax = fig.add_subplot(111)
data = list(authorID_to_count.values())
binwidth = int((max(data)-min(data))/20)
ax.hist(data,
bins=range(min(data), max(data) + binwidth, binwidth))
plt.show()
print("Max {0}, min {1}".format(max(data), min(data)))
utils.pickle_data(authors_to_titles_clean, "../pmi_data/authorID_to_publications_clean.p")
Explanation: We will use 190 as a threshold. We won't remove too many authors (0.1%), and above 190 publications it seems too much.
End of explanation |
2,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: CAMS
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
2,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Text classification with preprocessed text
Step2: <a id="download"></a>
Download the IMDB dataset
The IMDB movie reviews dataset comes packaged in tfds. It has already been preprocessed so that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it)
Step3: <a id="encoder"></a>
Try the encoder
The dataset info includes the text encoder (a tfds.features.text.SubwordTextEncoder).
Step4: This text encoder will reversibly encode any string
Step5: The encoder encodes the string by breaking it into subwords or characters if the word is not in its dictionary. So the more a string resembles the dataset, the shorter the encoded representation will be.
Step6: Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed
Step7: The info structure contains the encoder/decoder. The encoder can be used to recover the original text
Step8: Prepare the data for training
You will want to create batches of training data for your model. The reviews are all different lengths, so use padded_batch to zero pad the sequences while batching
Step9: Each batch will have a shape of (batch_size, sequence_length) because the padding is dynamic each batch will have a different length
Step10: Build the model
The neural network is created by stacking layersโthis requires two main architectural decisions
Step11: The layers are stacked sequentially to build the classifier
Step12: Train the model
Train the model by passing the Dataset object to the model's fit function. Set the number of epochs.
Step13: Evaluate the model
And let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy.
Step14: This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training
Step15: There are four entries | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 Franรงois Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: <a href="https://colab.research.google.com/github/suresh/notebooks/blob/master/text_classification_gpu.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2018 The TensorFlow Authors.
End of explanation
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
import numpy as np
print(tf.__version__)
Explanation: Text classification with preprocessed text: Movie reviews
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binaryโor two-classโclassification, an important and widely applicable kind of machine learning problem.
We'll use the IMDB dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
This notebook uses tf.keras, a high-level API to build and train models in TensorFlow. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide.
Setup
End of explanation
(train_data, test_data), info = tfds.load(
# Use the version pre-encoded with an ~8k vocabulary.
'imdb_reviews/subwords8k',
# Return the train/test datasets as a tuple.
split = (tfds.Split.TRAIN, tfds.Split.TEST),
# Return (example, label) pairs from the dataset (instead of a dictionary).
as_supervised=True,
# Also return the `info` structure.
with_info=True)
Explanation: <a id="download"></a>
Download the IMDB dataset
The IMDB movie reviews dataset comes packaged in tfds. It has already been preprocessed so that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
To encode your own text see the Loading text tutorial
End of explanation
encoder = info.features['text'].encoder
print ('Vocabulary size: {}'.format(encoder.vocab_size))
Explanation: <a id="encoder"></a>
Try the encoder
The dataset info includes the text encoder (a tfds.features.text.SubwordTextEncoder).
End of explanation
sample_string = 'Hello TensorFlow.'
encoded_string = encoder.encode(sample_string)
print ('Encoded string is {}'.format(encoded_string))
original_string = encoder.decode(encoded_string)
print ('The original string: "{}"'.format(original_string))
assert original_string == sample_string
Explanation: This text encoder will reversibly encode any string:
End of explanation
for ts in encoded_string:
print ('{} ----> {}'.format(ts, encoder.decode([ts])))
Explanation: The encoder encodes the string by breaking it into subwords or characters if the word is not in its dictionary. So the more a string resembles the dataset, the shorter the encoded representation will be.
End of explanation
for train_example, train_label in train_data.take(1):
print('Encoded text:', train_example[:10].numpy())
print('Label:', train_label.numpy())
Explanation: Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review.
The text of reviews have been converted to integers, where each integer represents a specific word-piece in the dictionary.
Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
Here's what the first review looks like:
End of explanation
encoder.decode(train_example)
Explanation: The info structure contains the encoder/decoder. The encoder can be used to recover the original text:
End of explanation
BUFFER_SIZE = 1000
train_batches = (
train_data
.shuffle(BUFFER_SIZE)
.padded_batch(32, train_data.output_shapes))
test_batches = (
test_data
.padded_batch(32, train_data.output_shapes))
Explanation: Prepare the data for training
You will want to create batches of training data for your model. The reviews are all different lengths, so use padded_batch to zero pad the sequences while batching:
End of explanation
for example_batch, label_batch in train_batches.take(2):
print("Batch shape:", example_batch.shape)
print("label shape:", label_batch.shape)
Explanation: Each batch will have a shape of (batch_size, sequence_length) because the padding is dynamic each batch will have a different length:
End of explanation
model = keras.Sequential([
keras.layers.Embedding(encoder.vocab_size, 16),
keras.layers.GlobalAveragePooling1D(),
keras.layers.Dense(1, activation='sigmoid')])
model.summary()
Explanation: Build the model
The neural network is created by stacking layersโthis requires two main architectural decisions:
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1. Let's build a "Continuous bag of words" style model for this problem:
Caution: This model doesn't use masking, so the zero-padding is used as part of the input, so the padding length may affect the output. To fix this, see the masking and padding guide.
End of explanation
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is an Embedding layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding).
Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.
This fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node. Using the sigmoid activation function, this value is a float between 0 and 1, representing a probability, or confidence level.
Hidden units
The above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patternsโpatterns that improve performance on training data but not on the test data. This is called overfitting, and we'll explore it later.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the binary_crossentropy loss function.
This isn't the only choice for a loss function, you could, for instance, choose mean_squared_error. But, generally, binary_crossentropy is better for dealing with probabilitiesโit measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function:
End of explanation
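As a side note (not used in the rest of this tutorial), compiling with the alternative loss mentioned above would only change the loss argument:
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])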
history = model.fit(train_batches,
epochs=10,
validation_data=test_batches,
validation_steps=30)
Explanation: Train the model
Train the model by passing the Dataset object to the model's fit function. Set the number of epochs.
End of explanation
loss, accuracy = model.evaluate(test_batches)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
Explanation: Evaluate the model
And let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy.
End of explanation
history_dict = history.history
history_dict.keys()
Explanation: This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training:
End of explanation
import matplotlib.pyplot as plt
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
Explanation: There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation |
2,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bifurcation diagram for the logistic map $x \to a x (1-x)$
The logistic map is a remarkably simple iterative equation that shows surprisingly complex behaviour. Its properties have been the subject of serious mathematical work since the 1970s. Even so, many of its properties remain unexplored, and the behaviour of its solutions is accessible only through numerical analysis.
The example below uses pyCUDA to compute the so-called bifurcation diagram of the logistic map quickly. Obtaining such a diagram requires simulating many copies of the equation simultaneously, with different initial conditions and different parameters. This is an ideal task for a parallel computer.
Implementation approach
The first implementation of our algorithm uses the kernel template called
ElementwiseKernel
It is a simple way to perform the same operation on a large vector of data.
Step1: The Elementwise kernel
We define a kernel that, element by element, performs the logistic-map iteration on a vector of initial states. Since we want to run these iterations for different values of the parameter $a$, the kernel takes both a vector of $a$ values and a vector of initial values. Because the same value of $a$ is shared by many initial values, we will use a numpy function that is convenient in this case.
Step2: Algorithm with the loop inside the CUDA kernel
Let us now write an algorithm that iterates the equation Niter times inside a single CUDA kernel launch.
Step3: Comparison with the CPU version
For comparison we write a simple program that computes the logistic-map iterations on the CPU. We use Cython, which automatically compiles the function into efficient code whose performance is comparable to code written in C or a similar language.
W wyniku dziaลania programu widzimy, ลผe nasze jฤ
dro wykonuje obliczenia znacznie szybciej.
Step5: Wizualizacja wynikรณw | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pycuda.gpuarray as gpuarray
from pycuda.curandom import rand as curand
from pycuda.compiler import SourceModule
import pycuda.driver as cuda
try:
ctx.pop()
ctx.detach()
except:
print ("No CTX!")
cuda.init()
device = cuda.Device(0)
ctx = device.make_context()
print (device.name(), device.compute_capability(),device.total_memory()/1024.**3,"GB")
print ("a tak wogรณle to mamy tu:",cuda.Device.count(), " urzฤ
dzenia")
Explanation: Diagram bifurkacyjny dla rรณwnania logistycznego $x \to a x (1-x)$
Rรณwnanie logistyczne jest niezwykle prostym rรณwnaniem iteracyjnym wykazujฤ
cym zaskakujฤ
co zลoลผone zachowanie. Jego wลasnoลci sฤ
od lat siedemdziesiฤ
tych przedmiotem powaลผnych prac matematycznych. Pomimo tego wciฤ
ลผ wiele wลasnoลci jest niezbadanych i zachowanie siฤ rozwiฤ
zaล tego rรณwnania jest dostฤpne tylko do analizy numerycznej.
Poniลผszy przykลad wykorzystuje pyCUDA do szybkiego obliczenia tak zwanego diagramu bifurkacyjnego rรณwnania logistycznego. Uzyskanie takiego diagramu wymaga jednoczesnej symulacji wielu rรณwnaล z rรณลผnymi warunkami poczฤ
tkowymi i rรณลผnymi parametrami. Jest to idealne zadanie dla komputera rรณwnolegลego.
Sposรณb implementacji
Pierwszฤ
implementacja naszego algorytmu bฤdzie zastosowanie szablonu jฤ
dra zwanego
ElementwiseKernel
Jest to prosty sposรณb na wykonanie tej samej operacji na duลผym wektorze danych.
End of explanation
import numpy as np
Nx = 1024
Na = 1024
a = np.linspace(3.255,4,Na).astype(np.float32)
a = np.repeat(a,Nx)
a_gpu = gpuarray.to_gpu(a)
x_gpu = curand((Na*Nx,))
from pycuda.elementwise import ElementwiseKernel
iterate = ElementwiseKernel(
"float *a, float *x",
"x[i] = a[i]*x[i]*(1.0f-x[i])",
"iterate")
%%time
Niter = 1000
for i in range(Niter):
iterate(a_gpu,x_gpu)
ctx.synchronize()
a,x = a_gpu.get(),x_gpu.get()
plt.figure(num=1, figsize=(10, 6))
every = 10
plt.plot(a[::every],x[::every],'.',markersize=1)
plt.plot([3.83,3.83],[0,1])
Explanation: Jฤ
dro Elementwise
Zdefiniujemy sobie jฤ
dro, ktรณre dla wektora stanรณw poczฤ
tkowych, element po elementcie wykona iteracje rรณwania logistycznego. Poniewaลผ bฤdziemy chcieli wykonaฤ powyลผsze iteracje dla rรณลผnych parametrรณw $a$, zdefiniujemy nasze jฤ
dro tak by braลo zarรณwno wektor wartoลci paramteru $a$ jak i wektor wartoลci poczฤ
tkowych. Poniewaลผ bฤdziemy mieli tฤ
samฤ
wartoลฤ parametru $a$ dla wielu wartoลci poczฤ
tkowych to wykorzystamy uลผytecznฤ
w tym przypadku funkcjฤ numpy:
a = np.repeat(a,Nx)
End of explanation
import pycuda.gpuarray as gpuarray
from pycuda.curandom import rand as curand
from pycuda.compiler import SourceModule
import pycuda.driver as cuda
try:
ctx.pop()
ctx.detach()
except:
print( "No CTX!")
cuda.init()
device = cuda.Device(0)
ctx = device.make_context()
mod = SourceModule(
__global__ void logistic_iterations(float *a,float *x,int Niter)
{
int idx = threadIdx.x + blockDim.x*blockIdx.x;
float a_ = a[idx];
float x_ = x[idx];
int i;
for (i=0;i<Niter;i++){
x_ = a_*x_*(1-x_);
}
x[idx] = x_;
}
)
logistic_iterations = mod.get_function("logistic_iterations")
block_size=128
Nx = 10240
Na = 1024*2
blocks = Nx*Na//block_size
a = np.linspace(3.255,4,Na).astype(np.float32)
a = np.repeat(a,Nx)
a_gpu = gpuarray.to_gpu(a)
x_gpu = curand((Na*Nx,))
%%time
logistic_iterations(a_gpu,x_gpu, np.int32(10000),block=(block_size,1,1), grid=(blocks,1,1))
ctx.synchronize()
a,x = a_gpu.get(),x_gpu.get()
plt.figure(num=1, figsize=(9, 8))
every = 100
plt.plot(a[::every],x[::every],'.',markersize=1,alpha=1)
plt.plot([3.83,3.83],[0,1])
H, xedges, yedges = np.histogram2d(a,x,bins=(1024,1024))
plt.figure(num=1, figsize=(10,10))
plt.imshow(1-np.log(H.T+5e-1),origin='lower',cmap='gray')
Explanation: Algorytm z pฤtlฤ
wewnฤ
trz jฤ
dra CUDA
Napiszmy teraz algorytm, ktรณry bฤdzie iterowaล rรณwnanie Niter razy wewnฤ
trz jednego wywoลania jฤ
dra CUDA.
End of explanation
%load_ext Cython
%%cython
def logistic_cpu(double a = 3.56994):
cdef double x
cdef int i
x = 0.1
for i in range(1000*1024*1024):
x = a*x*(1.0-x)
return x
%%time
logistic_cpu(1.235)
print("OK")
Explanation: Porรณwnanie z wersjฤ
CPU
Dla porรณwnania napiszemy prosty program, ktรณry oblicza iteracje rรณwnania logistycznego na CPU. Zatosujemy jฤzyk cython, ktรณry umoลผliwia automatyczne skompilowanie funkcji do wydajnego kodu, ktรณrego wydajnoลฤ jest porรณwnywalna z kodem napisanym w jฤzyku C lub podobnym.
W wyniku dziaลania programu widzimy, ลผe nasze jฤ
dro wykonuje obliczenia znacznie szybciej.
End of explanation
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
a1,a2 = 3,3.56994567
Nx = 1024
Na = 1024
a = np.linspace(a1,a2,Na).astype(np.float32)
a = np.repeat(a,Nx)
a_gpu = gpuarray.to_gpu(a)
x_gpu = curand((Na*Nx,))
x = x_gpu.get()
fig = plt.figure()
every = 1
Niter = 10000
for i in range(Niter):
if i%every==0:
plt.cla()
plt.xlim(a1,a2)
plt.ylim(0,1)
fig.suptitle("iteracja: %05d"%i)
plt.plot(a,x,'.',markersize=1)
plt.savefig("/tmp/%05d.png"%i)
if i>10:
every=2
if i>30:
every=10
if i>100:
every=50
if i>1000:
every=500
iterate(a_gpu,x_gpu)
ctx.synchronize()
a,x = a_gpu.get(),x_gpu.get()
%%sh
cd /tmp
time convert -delay 20 -loop 0 *.png anim_double.gif && rm *.png
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
block_size=128
Nx = 1024*5
Na = 1024*3
blocks = Nx*Na//block_size
nframes = 22
for i,(a1,a2) in enumerate(zip(np.linspace(3,3.77,nframes),np.linspace(4,3.83,nframes))):
a = np.linspace(a1,a2,Na).astype(np.float32)
a = np.repeat(a,Nx)
a_gpu = gpuarray.to_gpu(a)
x_gpu = curand((Na*Nx,))
x = x_gpu.get()
logistic_iterations(a_gpu,x_gpu, np.int32(10000),block=(block_size,1,1), grid=(blocks,1,1))
ctx.synchronize()
a,x = a_gpu.get(),x_gpu.get()
H, xedges, yedges = np.histogram2d(a,x,bins=(np.linspace(a1,a2,1024),np.linspace(0,1,1024)))
fig, ax = plt.subplots(figsize=[10,7])
ax.imshow(1-np.log(H.T+5e-1),origin='lower',cmap='gray',extent=[a1,a2,0,1])
#plt.xlim(a1,a2)
#plt.ylim(0,1)
ax.set_aspect(7/10*(a2-a1))
#fig.set_size_inches(8, 5)
fig.savefig("/tmp/zoom%05d.png"%i)
plt.close(fig)
%%sh
cd /tmp
time convert -delay 30 -loop 0 *.png anim_zoom.gif && rm *.png
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
block_size=128
Nx = 1024*5
Na = 1024*3
blocks = Nx*Na//block_size
a1,a2 = 1,4
x1,x2 = 0., 1
a = np.linspace(a1,a2,Na).astype(np.float32)
a = np.repeat(a,Nx)
a_gpu = gpuarray.to_gpu(a)
x_gpu = curand((Na*Nx,))
x = x_gpu.get()
logistic_iterations(a_gpu,x_gpu, np.int32(10000),block=(block_size,1,1), grid=(blocks,1,1))
ctx.synchronize()
a,x = a_gpu.get(),x_gpu.get()
H, xedges, yedges = np.histogram2d(a,x,bins=(np.linspace(a1,a2,1024),np.linspace(x1,x2,1024)))
fig, ax = plt.subplots(figsize=[10,7])
ax.imshow(1-np.log(H.T+5e-1),origin='lower',cmap='gray',extent=[a1,a2,x1,x2])
#plt.xlim(a1,a2)
#plt.ylim(0,1)
ax.set_aspect(7/10*(a2-a1)/(x2-x1))
#fig.set_size_inches(8, 5)
fig.savefig("/tmp/zoom.png")
plt.close(fig)
Explanation: Wizualizacja wynikรณw
End of explanation |
2,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Corpus Visualizers on Yellowbrick
Step1: UMAP vs T-SNE
Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data
The data is uniformly distributed on a Riemannian manifold;
The Riemannian metric is locally constant (or can be approximated as such);
The manifold is locally connected.
From these assumptions it is possible to model the manifold with a fuzzy topological structure. The embedding is found by searching for a low dimensional projection of the data that has the closest possible equivalent fuzzy topological structure.
Step2: Writing a Function to quickly Visualize Corpus
Which can then be used for rapid comparison
Step3: Quickly Comparing Plots by Controlling
The Dimensionality Reduction technique used
The Encoding Technique used
The dataset to be visualized
Whether to differentiate Labels or not
Set the alpha parameter
Set the metric for UMAP | Python Code:
##### Import all the necessary Libraries
from yellowbrick.text import TSNEVisualizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from yellowbrick.text import UMAPVisualizer
from yellowbrick.datasets import load_hobbies
Explanation: Comparing Corpus Visualizers on Yellowbrick
End of explanation
corpus = load_hobbies()
Explanation: UMAP vs T-SNE
Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data
The data is uniformly distributed on a Riemannian manifold;
The Riemannian metric is locally constant (or can be approximated as such);
The manifold is locally connected.
From these assumptions it is possible to model the manifold with a fuzzy topological structure. The embedding is found by searching for a low dimensional projection of the data that has the closest possible equivalent fuzzy topological structure.
End of explanation
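As an aside (not part of the original notebook), the underlying Yellowbrick pattern that the wrapper function below builds on is simply: vectorize the corpus, then fit a visualizer on the documents and labels. A minimal sketch using the imports and corpus from above:
# Minimal direct-use sketch (an illustrative aside)
docs = TfidfVectorizer().fit_transform(corpus.data)
umap_viz = UMAPVisualizer()
umap_viz.fit(docs, corpus.target)
umap_viz.show()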
def visualize(dim_reduction,encoding,corpus,labels = True,alpha=0.7,metric=None):
if 'tfidf' in encoding.lower():
encode = TfidfVectorizer()
if 'count' in encoding.lower():
encode = CountVectorizer()
docs = encode.fit_transform(corpus.data)
if labels is True:
labels = corpus.target
else:
labels = None
if 'umap' in dim_reduction.lower():
if metric is None:
viz = UMAPVisualizer()
else:
viz = UMAPVisualizer(metric=metric)
if 't-sne' in dim_reduction.lower():
viz = TSNEVisualizer(alpha = alpha)
viz.fit(docs,labels)
viz.show()
Explanation: Writing a Function to quickly Visualize Corpus
Which can then be used for rapid comparison
End of explanation
visualize('t-sne','tfidf',corpus)
visualize('t-sne','count',corpus,alpha = 0.5)
visualize('t-sne','tfidf',corpus,labels =False)
visualize('umap','tfidf',corpus)
visualize('umap','tfidf',corpus,labels = False)
visualize('umap','count',corpus,metric= 'cosine')
Explanation: Quickly Comparing Plots by Controlling
The Dimensionality Reduction technique used
The Encoding Technique used
The dataset to be visualized
Whether to differentiate Labels or not
Set the alpha parameter
Set the metric for UMAP
End of explanation |
2,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Running faster your code
[Discrete signal energy](https
Step2: Now, using Numpy's array multiplication and sum
Step4: Another example to see that vectorization not only involves pure computation
Step5: Another example extracted from this HPC tutorial
Step6: 2 Use in-place operations
Step7: 3 Maximize locality in memory access
Step8: 4 Delegate in C
When you want to speed-up your code or simply when you need to reuse C code, it is possible to use it from Python. There are several alternatives
Step9: 4.1 Cython
Python with C data types. Another interesting link.
Working flow
Step10: Defining C types
Step11: 4.2 Python-C
Python-C-API is the most flexible and efficient alternative, but also the hardest to code.
The C code to reuse in Python
Step12: The module
Step13: Module compilation
Step14: However, remember | Python Code:
import numpy as np
def non_vectorized_dot_product(x, y):
    """Return the sum of x[i] * y[i] over all indices i.

    Example:
    >>> non_vectorized_dot_product(np.arange(20), np.arange(20))
    2470
    """
    result = 0
for i in range(len(x)):
result += x[i] * y[i]
return result
signal = np.random.random(1000)
#print(signal)
%timeit non_vectorized_dot_product(signal, signal)
non_vectorized_dot_product(signal, signal)
Explanation: Running faster your code
[Discrete signal energy](https://en.wikipedia.org/wiki/Energy_(signal_processing) can be computed as a particular case of the dot product when both signals are the same:
$$ ~\ E_{s} \ \ = \ \ \langle x(n), x(n)\rangle \ \ = \sum_{n}{|x(n)|^2} = \sum_{n}{x(n)y(n)}$$
Let's see how to speed up this operation.
1 Vectorize
End of explanation
%timeit np.sum(signal*signal)
np.sum(signal*signal)
Explanation: Now, using Numpy's array multiplication and sum:
End of explanation
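As an aside (not in the original notebook), the same sum of products can also be computed with np.dot, which is typically at least as fast as multiplying and then summing:
%timeit np.dot(signal, signal)   # equivalent vectorized dot product
np.dot(signal, signal)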
# https://softwareengineering.stackexchange.com/questions/254475/how-do-i-move-away-from-the-for-loop-school-of-thought
def cleanup(x, missing=-1, value=0):
    """Return an array that's the same as x, except that where x ==
    missing, it has value instead.

    >>> cleanup(np.arange(-3, 3), value=10)
    ... # doctest: +NORMALIZE_WHITESPACE
    array([-3, -2, 10, 0, 1, 2])
    """
    result = []
for i in range(len(x)):
if x[i] == missing:
result.append(value)
else:
result.append(x[i])
return np.array(result)
array = np.arange(-8,8)
print(array)
print(cleanup(array, value=10, missing=0))
array = np.arange(-1000,1000)
%timeit cleanup(array, value=10, missing=0)
print(array[995:1006])
print(cleanup(array, value=10, missing=0)[995:1006])
# http://www.secnetix.de/olli/Python/list_comprehensions.hawk
# https://docs.python.org/3/library/functions.html#zip
value = [10]*2000
%timeit [xv if c else yv for (c,xv,yv) in zip(array == 0, value, array)]
print([xv if c else yv for (c,xv,yv) in zip(array == 0, value, array)][995:1006])
# https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.where.html
%timeit np.where(array == 0, 10, array)
print(np.where(array == 0, 10, array)[995:1006])
Explanation: Another example to see that vectorization not only involves pure computation:
End of explanation
from math import sin
import numpy as np
arr = np.arange(1000000)
%timeit [sin(i)**2 for i in arr]
%timeit np.sin(arr)**2
Explanation: Another example extracted from this HPC tutorial:
End of explanation
a = np.random.random(500000)
print(a[0:10])
b = np.copy(a)
%timeit global a; a = 10*a
a = 10*a
print(a[0:10])
a = np.copy(b)
print(a[0:10])
%timeit global a ; a *= 10
a *= 10
print(a[0:10])
Explanation: 2 Use in-place operations
End of explanation
a = np.random.rand(100,50)
b = np.copy(a)
def mult(x, val):
for i in range(x.shape[0]):
for j in range(x.shape[1]):
x[i][j] /= val
%timeit -n 1 -r 1 mult(a, 10)
a = np.copy(b)
def mult2(x, val):
for j in range(x.shape[1]):
for i in range(x.shape[0]):
x[i][j] /= val
%timeit -n 1 -r 1 mult2(a, 10)
# http://www.scipy-lectures.org/advanced/optimizing/
# https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html
c = np.zeros((1000, 1000), order='C')
%timeit c.sum(axis=0)
c.sum(axis=0).shape
%timeit c.sum(axis=1)
c.sum(axis=1).shape
Explanation: 3 Maximize locality in memory access
End of explanation
!cat sum_array_lib.py
# Please, restart the kernel to ensure that the module sum_array_lib is re-loaded
!rm -f sum_array_lib.cpython*.so
import sum_array_lib
import array as arr
a = arr.array('d', [i for i in range(100000)])
#a = [1 for i in range(100000)]
%timeit sum_array_lib.sum_array(a, len(a))
sum = sum_array_lib.sum_array(a, len(a))
print(sum)
Explanation: 4 Delegate in C
When you want to speed-up your code or simply when you need to reuse C code, it is possible to use it from Python. There are several alternatives:
Cython: A superset of Python that allows you to call C functions and load Python variables with C ones.
SWIG (Simplified Wrapper Interface Generator): A software development tool to connect C/C++ programs with other languages (including Python).
Ctypes: A Python package that can be used to call shared libraries (.dll/.so/.dylib) from Python (a minimal sketch follows below).
Python-C-API: A low-level interface between (compiled) C code and Python.
A function to optimize:
End of explanation
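A minimal ctypes sketch (not part of the original notebook): the file name, function name and C signature below are assumptions for illustration only; adapt them to the shared library you actually build.
# Hypothetical ctypes example: assumes a C function
#     double sum_array(double *a, long n);
# compiled into a shared library, e.g.
#     gcc -O3 -shared -fPIC my_sum.c -o libmysum.so
import ctypes
import numpy as np
lib = ctypes.CDLL('./libmysum.so')
lib.sum_array.restype = ctypes.c_double
lib.sum_array.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_long]
a = np.arange(100000, dtype=np.float64)
print(lib.sum_array(a.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), len(a)))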
!cp sum_array_lib.py sum_array_lib.pyx
!cat sum_array_lib.pyx
!cat Cython/basic/setup.py
!rm -f sum_array_lib.cpython*.so
!python Cython/basic/setup.py build_ext --inplace
# Please, restart the kernel to ensure that the module sum_array_lib is re-loaded
import sum_array_lib
import array as arr
a = arr.array('d', [i for i in range(100000)])
#a = [1.1 for i in range(100000)]
%timeit sum_array_lib.sum_array(a, len(a))
sum = sum_array_lib.sum_array(a, len(a))
print(sum)
Explanation: 4.1 Cython
Python with C data types. Another interesting link.
Working flow:
.pyx --(Cython compiler)--> .c --(C compiler)--> .so
Installation
$ pip install Cython
Compilation of pure Python code:
End of explanation
!cat Cython/cdef/sum_array_lib.pyx
!cat Cython/cdef/setup.py
# Please, restart the kernel to ensure that the module sum_array_lib is re-loaded
!rm sum_array_lib.cpython*.so
!python Cython/cdef/setup.py build_ext --inplace
# Please, restart the kernel to ensure that the module sum_array_lib is re-loaded
import array as arr
import sum_array_lib
#import numpy as np
#a = np.arange(100000)
a = arr.array('d', [i for i in range(100000)])
%timeit sum_array_lib.sum_array(a, len(a))
print(sum)
Explanation: Defining C types:
End of explanation
!cat sum_array_lib.c
!cat sum_array.c
!gcc -O3 sum_array.c -o sum_array
!./sum_array
Explanation: 4.2 Python-C
Python-C-API is the most flexible and efficient alternative, but also the hardest to code.
The C code to reuse in Python
End of explanation
!cat sum_array_module.c
Explanation: The module
End of explanation
!cat setup.py
!python setup.py build_ext --inplace
import sum_array_module
import numpy as np
a = np.arange(100000)
%timeit sum_array_module.sumArray(a)
print(sum)
Explanation: Module compilation
End of explanation
%timeit np.sum(a)
print(sum)
Explanation: However, remember: vectorize when possible!
End of explanation |
2,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$\newcommand{\xv}{\mathbf{x}}
\newcommand{\Xv}{\mathbf{X}}
\newcommand{\piv}{\mathbf{\pi}}
\newcommand{\yv}{\mathbf{y}}
\newcommand{\Yv}{\mathbf{Y}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\av}{\mathbf{a}}
\newcommand{\Wv}{\mathbf{W}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\gv}{\mathbf{g}}
\newcommand{\Hv}{\mathbf{H}}
\newcommand{\dv}{\mathbf{d}}
\newcommand{\Vv}{\mathbf{V}}
\newcommand{\vv}{\mathbf{v}}
\newcommand{\tv}{\mathbf{t}}
\newcommand{\Tv}{\mathbf{T}}
\newcommand{\Sv}{\mathbf{S}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\Zv}{\mathbf{Z}}
\newcommand{\Norm}{\mathcal{N}}
\newcommand{\muv}{\boldsymbol{\mu}}
\newcommand{\sigmav}{\boldsymbol{\sigma}}
\newcommand{\phiv}{\boldsymbol{\phi}}
\newcommand{\Phiv}{\boldsymbol{\Phi}}
\newcommand{\Sigmav}{\boldsymbol{\Sigma}}
\newcommand{\Lambdav}{\boldsymbol{\Lambda}}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\argmax}[1]{\underset{#1}{\operatorname{argmax}}}
\newcommand{\argmin}[1]{\underset{#1}{\operatorname{argmin}}}
\newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}}
$
Gaussian or Normal Distributions
First, Why Gaussians?
How would you like to model the probability distribution of a typical cluster of your data?
If, and that's a big if, you believe
the data samples from a particular class have attribute values that
tend to be close to a particular value, that is, that the samples
cluster about a central point in the sample space, then pick a
probabilistic model that has a peak over that central point and falls
towards zero as you move away from that point.
How do we construct such a model? Well, let's try for two
characteristics
Step1: The red dotted line is at $\mu = 5.5$.
Humm...meets our criteria, but has problems---goes to infinity at the
center and we cannot control the width of the central area where samples
may appear.
Can take care of first issue by using the distance as an exponent, so
that when it is zero, the result is 1. Let's try a base of 2.
$$
p(\xv) = \frac{1}{2^{||\xv - \muv||}}
$$
Now, let's see...how do we do a calculation with a scalar base and vector exponent? For example, we want
$$
2^{[2,3,4]} = [2^2, 2^3, 2^4]
$$
Step2: Nope. Maybe we have to use a numpy array.
Step3: Hey! That's it.
Step4: Solves the infinity problem, but it still falls off too fast. Want to
change the distance to a function that changes more slowly at first,
when you are close to the center. How about the square function?
$$
p(\xv) = \frac{1}{2^{||\xv - \muv||^2}}
$$
Step5: Yeah. That's a nice shape. Now we can vary the width by scaling the
squared distance.
$$
p(\xv) = \frac{1}{2^{0.1\,||\xv - \muv||^2}}
$$
Step6: There. That's good enough. We could be happy with this. Just pick
the center and scale factor that best matches the sample
distributions. But, let's make one more change that won't affect the
shape of our model, but will simplify later calculations. We will
soon see that logarithms come into play when we try to fit our model
to a bunch of samples. What is the logarithm of $2^{0.1\,|\xv -
\muv|^2}$, or, more simply, the logarithm of $2^z$? If we are talking
base 10 logs, $\log 2^z = z \log 2$. Since we are free to pick the
base...hey, how about using $e$ and using natural logarithms? Then
$\ln e^z = z \ln e = z$. So much simpler!
Step7: The scale factor 0.1 is a bit counterintuitive. The smaller the
value, the more spread out our model is. So, let's divide by the
scale factor rather than multiply by it, and let's call it $\sigma$.
Let's also put it inside the square function, so $\sigma$ is directly
scaling the distance, rather than the squared distance.
$$
p(\xv) = e^{-\left (\frac{||\xv - \muv||}{\sigma}\right )^2}
$$
or
$$
p(\xv) = e^{-\frac{||\xv - \muv||^2}{\sigma^2}}
$$
Speaking of dividing, and this won't surprise you, since we will be
taking derivatives of this function with respect to parameters like
$\mu$, let's multiply by $\frac{1}{2}$ so that when we bring the
exponent 2 down it will cancel with $\frac{1}{2}$.
$$
p(\xv) = e^{-\frac{1}{2}\frac{||\xv - \muv||^2}{\sigma^2}}
$$
One remaining problem we have with our "probabilistic" model is that
it is not a true probability distribution, which must
- have values between 0 and 1, $0 \le p(x) \le 1$, and
- have values that sum to 1 over the range of possible $x$ values, $\int_{-\infty}^{+\infty} p(x) dx = 1$.
We have satisfied the first requirement, but not the second. We can fix
this by calculating the value of the integral and dividing by that
value, which is called the normalizing constant. The value of the
integral turns out to be $\sqrt{2\pi\sigma^2}$. See Evolution of the Normal Distribution.
So, finally, we have the definition
$$
p(\xv) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2}\frac{||\xv - \muv||^2}{\sigma^2}}
$$
and, TA DA..., we have arrived at the Normal, or Gaussian, probability
distribution (technically the density function) with mean $\muv$ and
standard deviation $\sigma$, and thus variance $\sigma^2$. Check out
the Wikipedia entry.
Now you know a bit about why the Normal distribution is so prevalent.
For additional insight and history, read Chapter 7
Step9: Now how would you check our definition of $p(x)$ in python? First, we need a function to calculate $p(x)$ given $\mu$ and $\Sigma$, or $p(x|\mu, \Sigma)$.
Step10: Let's check the shapes of matrices in that last calculation.
diffv = X - mu.T
# shapes: (N x D) - (1 x D) broadcasts to N x D; the full expression below yields N x 1
normConstant * np.exp(-0.5 * np.sum(np.dot(diffv, sigmaI) * diffv, axis=1))[ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
xs = np.linspace(-5,10,1000)
mu = 5.5
plt.plot(xs, 1/np.sqrt((xs-mu)**2))
plt.ylim(0,20)
plt.plot([mu, mu], [0, 20], 'r--',lw=2)
plt.xlabel('$x$')
plt.ylabel('$p(x)$');
Explanation: $\newcommand{\xv}{\mathbf{x}}
\newcommand{\Xv}{\mathbf{X}}
\newcommand{\piv}{\mathbf{\pi}}
\newcommand{\yv}{\mathbf{y}}
\newcommand{\Yv}{\mathbf{Y}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\av}{\mathbf{a}}
\newcommand{\Wv}{\mathbf{W}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\gv}{\mathbf{g}}
\newcommand{\Hv}{\mathbf{H}}
\newcommand{\dv}{\mathbf{d}}
\newcommand{\Vv}{\mathbf{V}}
\newcommand{\vv}{\mathbf{v}}
\newcommand{\tv}{\mathbf{t}}
\newcommand{\Tv}{\mathbf{T}}
\newcommand{\Sv}{\mathbf{S}}
\newcommand{\zv}{\mathbf{z}}
\newcommand{\Zv}{\mathbf{Z}}
\newcommand{\Norm}{\mathcal{N}}
\newcommand{\muv}{\boldsymbol{\mu}}
\newcommand{\sigmav}{\boldsymbol{\sigma}}
\newcommand{\phiv}{\boldsymbol{\phi}}
\newcommand{\Phiv}{\boldsymbol{\Phi}}
\newcommand{\Sigmav}{\boldsymbol{\Sigma}}
\newcommand{\Lambdav}{\boldsymbol{\Lambda}}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\argmax}[1]{\underset{#1}{\operatorname{argmax}}}
\newcommand{\argmin}[1]{\underset{#1}{\operatorname{argmin}}}
\newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}}
$
Gaussian or Normal Distributions
First, Why Gaussians?
How would you like to model the probability distribution of a typical cluster of your data?
If, and that's a big if, you believe
the data samples from a particular class have attribute values that
tend to be close to a particular value, that is, that the samples
cluster about a central point in the sample space, then pick a
probabilistic model that has a peak over that central point and falls
towards zero as you move away from that point.
How do we construct such a model? Well, let's try for two
characteristics:
- The model's value will decrease with the distance from the central point, and
- its value will always be greater than 0.
If $\xv$ is a sample and $\muv$ is the central point, we can achieve this with
$$
p(\xv) = \frac{1}{||\xv - \muv||}
$$
where $||\xv - \muv||$ is the distance between $\xv$ and $\muv$.
Let's try making a plot of this for $\mu = 5.5$.
End of explanation
2**[2,3,4]
Explanation: The red dotted line is at $\mu = 5.5$.
Humm...meets our criteria, but has problems---goes to infinity at the
center and we cannot control the width of the central area where samples
may appear.
Can take care of first issue by using the distance as an exponent, so
that when it is zero, the result is 1. Let's try a base of 2.
$$
p(\xv) = \frac{1}{2^{||\xv - \muv||}}
$$
Now, let's see...how do we do a calculation with a scalar base and vector exponent? For example, we want
$$
2^{[2,3,4]} = [2^2, 2^3, 2^4]
$$
End of explanation
2**np.array([2,3,4])
Explanation: Nope. Maybe we have to use a numpy array.
End of explanation
plt.plot(xs, 1/2**np.sqrt((xs-mu)**2))
plt.plot([mu, mu], [0, 1], 'r--',lw=3)
plt.xlabel('$x$')
plt.ylabel('$p(x)$');
Explanation: Hey! That's it.
End of explanation
plt.plot(xs, 1/2**(xs-mu)**2)
plt.plot([mu, mu], [0, 1], 'r--',lw=3)
plt.xlabel('$x$')
plt.ylabel('$p(x)$');
Explanation: Solves the infinity problem, but it still falls off too fast. Want to
change the distance to a function that changes more slowly at first,
when you are close to the center. How about the square function?
$$
p(\xv) = \frac{1}{2^{||\xv - \muv||^2}}
$$
End of explanation
plt.plot(xs, 1/2**(0.1 * (xs-mu)**2))
plt.plot([mu, mu], [0, 1], 'r--',lw=3)
plt.xlabel('$x$')
plt.ylabel('$p(x)$');
Explanation: Yeah. That's a nice shape. Now we can vary the width by scaling the
squared distance.
$$
p(\xv) = \frac{1}{2^{0.1\,||\xv - \muv||^2}}
$$
End of explanation
plt.plot(xs, np.exp(-0.1 * (xs-mu)**2))
plt.plot([mu, mu], [0, 1], 'r--',lw=3)
plt.xlabel('$x$')
plt.ylabel('$p(x)$');
Explanation: There. That's good enough. We could be happy with this. Just pick
the center and scale factor that best matches the sample
distributions. But, let's make one more change that won't affect the
shape of our model, but will simplify later calculations. We will
soon see that logarithms come into play when we try to fit our model
to a bunch of samples. What is the logarithm of $2^{0.1\,|\xv -
\muv|^2}$, or, more simply, the logarithm of $2^z$? If we are talking
base 10 logs, $\log 2^z = z \log 2$. Since we are free to pick the
base...hey, how about using $e$ and using natural logarithms? Then
$\ln e^z = z \ln e = z$. So much simpler! :-)
So, our model is now
$$
p(\xv) = \frac{1}{e^{0.1\,||\xv - \muv||^2}}
$$
which can also be written as
$$
p(\xv) = e^{-0.1\,||\xv - \muv||^2}
$$
End of explanation
from ipywidgets import interact
maxSamples = 100
nSets = 10000
values = np.random.uniform(0,1,(maxSamples,nSets))
@interact(nSamples=(1,maxSamples))
def sumOfN(nSamples=1):
sums = np.sum(values[:nSamples,:],axis=0)
plt.hist(sums, 20, facecolor='green')
Explanation: The scale factor 0.1 is a bit counterintuitive. The smaller the
value, the more spread out our model is. So, let's divide by the
scale factor rather than multiply by it, and let's call it $\sigma$.
Let's also put it inside the square function, so $\sigma$ is directly
scaling the distance, rather than the squared distance.
$$
p(\xv) = e^{-\left (\frac{||\xv - \muv||}{\sigma}\right )^2}
$$
or
$$
p(\xv) = e^{-\frac{||\xv - \muv||^2}{\sigma^2}}
$$
Speaking of dividing, and this won't surprise you, since we will be
taking derivatives of this function with respect to parameters like
$\mu$, let's multiply by $\frac{1}{2}$ so that when we bring the
exponent 2 down it will cancel with $\frac{1}{2}$.
$$
p(\xv) = e^{-\frac{1}{2}\frac{||\xv - \muv||^2}{\sigma^2}}
$$
One remaining problem we have with our "probabilistic" model is that
it is not a true probability distribution, which must
- have values between 0 and 1, $0 \le p(x) \le 1$, and
- have values that sum to 1 over the range of possible $x$ values, $\int_{-\infty}^{+\infty} p(x) dx = 1$.
We have satisfied the first requirement, but not the second. We can fix
this by calculating the value of the integral and dividing by that
value, which is called the normalizing constant. The value of the
integral turns out to be $\sqrt{2\pi\sigma^2}$. See Evolution of the Normal Distribution.
So, finally, we have the definition
$$
p(\xv) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2}\frac{||\xv - \muv||^2}{\sigma^2}}
$$
and, TA DA..., we have arrived at the Normal, or Gaussian, probability
distribution (technically the density function) with mean $\muv$ and
standard deviation $\sigma$, and thus variance $\sigma^2$. Check out
the Wikipedia entry.
Now you know a bit about why the Normal distribution is so prevalent.
For additional insight and history, read Chapter 7: The Central
Gaussian, or Normal, Distribution of Probability Theory:
The Logic of Science by E.T. Jaynes, 1993. It starts with this
quotation from Augustus de Morgan (yes, that de Morgan) from 1838:
"My own impression...is that the mathematical results have outrun
their interpretation and that some simple explanation of the force and meaning of the
celebrated integral...will one day be found...which will at once render useless
all the works hitherto written."
Before wrestling with python, we need to define the multivariate
Normal distribution. Let's go to two dimensions, to make sure we develop code to handle multidimensional data, not just scalars. Now our hill we
have been drawing will be a mound up above a two-dimensional base
plane. We will define $\xv$ and $\muv$
to be two-dimensional column vectors. What will $\sigma$ be? Well, we
need scale factors for the two dimensions to stretch or shrink the
mound in the directions of the two base-plane axes. We also need
another scale factor to allow the mound to be stretched in directions
not parallel to an axis.
Remember, the Normal distribution is all about squared distance from
the mean. In two dimensions, the difference vector is $\dv = \xv -
\muv = (d_1,d_2)$, and its squared length is $||\dv||^2 = d_1^2 + d_2^2$.
To allow stretching in directions not parallel to the axes, we generalize
this to the quadratic form $s_1 d_1^2 + 2 s_2 d_1 d_2 + s_3 d_2^2$, which is
where the three scale factors go. This can be written in
matrix form if we collect the scale factors in the matrix
$$
\Sigmav = \begin{bmatrix}
s_1 & s_2\\
s_2 & s_3
\end{bmatrix}
$$
so that
$$
s_1 d_1^2 + 2 s_2 d_1 d_2 + s_3 d_2^2 =
\dv^T \Sigmav \dv
$$
because
$$
\begin{align}
\dv^T \Sigmav \dv
& =
\begin{bmatrix}
d_1 & d_2
\end{bmatrix}
\begin{bmatrix}
s_1 & s_2\\
s_2 & s_3
\end{bmatrix}
\begin{bmatrix}
d_1\\
d_2
\end{bmatrix}\\
& =
\begin{bmatrix}
d_1 s_1 + d_2 s_2 & d_1 s_2 + d_2 s_3
\end{bmatrix}
\begin{bmatrix}
d_1\\
d_2
\end{bmatrix}\\
&=
(d_1 s_1 + d_2 s_2) d_1 + (d_1 s_2 + d_2 s_3) d_2 \\
&=
s_1 d_1^2 + 2 s_2 d_1 d_2 + s_3 d_2^2
\end{align}
$$
Again, it is more intuitive to use scale factors that divide the
distance components rather than multiply them. In the
multidimensional world, this means that instead of multiplying by
$\Sigmav$ we will multiply by $\Sigmav^{-1}$.
The normalizing constant is a bit more complicated. It involves the
determinant of $\Sigmav$, which is the product of its eigenvalues and can
be thought of as a generalized scale factor. Skim through
the Wikipedia entry on determinants. The multivariate $D$-dimensional Normal distribution is
$$
p(\xv) = \frac{1}{(2\pi)^{d/2} |\Sigmav |^{1/2}}
e^{-\frac{1}{2} (\xv-\muv)^T \Sigmav^{-1} (\xv - \muv)}
$$
where mean $\muv$ is a $D$-dimensional column vector and covariance
matrix $\Sigmav$ is a $D\times D$ symmetric matrix.
The Normal distribution is also called the Gaussian distribution. (When did Gauss live?)
In addition to the above reasons for concocting this distribution, it has a number of interesting analytical properties. One is the Central Limit Theorem, which states that the sum of many choices of $N$ random variables tends to a Normal distribution as $N \rightarrow \infty$.
Let's play with this theorem with some fancy shmansy python using the new ipython notebook interact feature to explore the distribution of sums as the number of samples varies.
End of explanation
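Before coding the density, here is a quick numeric check (an aside, not part of the original notebook) of the quadratic-form identity $\dv^T \Sigmav \dv = s_1 d_1^2 + 2 s_2 d_1 d_2 + s_3 d_2^2$ derived above, using arbitrary illustrative values:
d = np.array([1.5, -0.5])
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])
print(d @ S @ d)                                              # quadratic form
print(S[0,0]*d[0]**2 + 2*S[0,1]*d[0]*d[1] + S[1,1]*d[1]**2)   # expanded form, same value (4.3)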
def normald(X, mu, sigma):
    """normald:
    X contains samples, one per row, N x D.
    mu is mean vector, D x 1.
    sigma is covariance matrix, D x D.
    """
    D = X.shape[1]
detSigma = sigma if D == 1 else np.linalg.det(sigma)
if detSigma == 0:
raise np.linalg.LinAlgError('normald(): Singular covariance matrix')
sigmaI = 1.0/sigma if D == 1 else np.linalg.inv(sigma)
normConstant = 1.0 / np.sqrt((2*np.pi)**D * detSigma)
diffv = X - mu.T # change column vector mu to be row vector
return normConstant * np.exp(-0.5 * np.sum(np.dot(diffv, sigmaI) * diffv, axis=1))[:,np.newaxis]
normald?
Explanation: Now how would you check our definition of $p(x)$ in python? First, we need a function to calculate $p(x)$ given $\mu$ and $\Sigma$, or $p(x|\mu, \Sigma)$.
End of explanation
np.array([[1,2,3]]).shape
X = np.array([[1,2],[3,5],[2.1,1.9]])
mu = np.array([[2],[2]])
Sigma = np.array([[1,0],[0,1]])
print(X)
print(mu)
print(Sigma)
normald(X, mu, Sigma)
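As a further sanity check (not in the original notebook) that normald really is a normalized density, we can approximate its integral on a grid; with the mean and identity covariance used above it should come out very close to 1:
# Riemann-sum approximation of the 2-D integral of normald (reuses mu and Sigma from above)
grid = np.linspace(-6, 10, 200)
dx = grid[1] - grid[0]
xx, yy = np.meshgrid(grid, grid)
points = np.column_stack([xx.ravel(), yy.ravel()])
print(normald(points, mu, Sigma).sum() * dx * dx)   # approximately 1.0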
Explanation: Let's check the shapes of matrices in that last calculation.
diffv = X - mu.T                            # (N x D) - (1 x D) broadcasts to N x D
np.dot(diffv, sigmaI)                       # (N x D) @ (D x D)  ->  N x D
np.dot(diffv, sigmaI) * diffv               # elementwise product  ->  N x D
np.sum(..., axis=1)                         # sum across columns  ->  N
normConstant * np.exp(...)[:, np.newaxis]   # scalar times N values, add a column axis  ->  N x 1
So we get $N$ answers, one for each sample.
End of explanation |
2,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6.86x - Introduction to ML Packages (Part 2)
This tutorial is designed to provide a short introduction to deep learning with PyTorch.
You can start studying this tutorial as you work through unit 3 of the course.
For more resources, check out the PyTorch tutorials! There are many more in-depth examples available there.
Source code for this notebook hosted at
Step1: Tensors
Tensors are PyTorch's equivalent of NumPy ndarrays.
Step2: PyTorch tensors and NumPy ndarrays even share the same memory handles, so you can switch between the two types essentially for free
Step3: Like NumPy, there are a zillion different operations you can do with tensors. Best thing to do is to go to https
Step4: In-place operations exist too, generally denoted by a trailing '_' (e.g. my_tensor.my_inplace_function_()).
Step5: Manipulate dimensions...
Step6: If you have a GPU...
Step7: And many more!
A Quick Note about Batching
In most ML applications we do mini-batch stochastic gradient descent instead of pure stochastic gradient descent.
Mini-batch SGD is a step between full gradient descent and stochastic gradient descent by computing the average gradient over a small number of examples.
In a nutshell, given n examples
Step8: Autograd
Step9: At first the 'grad' parameter is None
Step10: Let's do an operation. Take y = e^x.
Step11: To run the gradient computing magic, call '.backward()' on a variable.
Step12: For all dependent variables {x_1, ..., x_n} that were used to compute y, dy/dx_i is computed and stored in the x_i.grad field.
Here dy/dx = e^x = y. Let's see!
Step13: Important! Remember to zero gradients before subsequent calls to backwards.
Step14: Also important! Under the hood PyTorch stores all the stuff required to compute gradients (call stack, cached values, etc). If you want to save a variable just to keep it around (say for logging or plotting) remember to call .item() to get the python value and free the PyTorch machinery memory.
You can stop auto-grad from running in the background by using the torch.no_grad() context manager.
python
with torch.no_grad()
Step15: We'll train a one layer neural net to classify this dataset. Let's define the parameter sizes
Step16: And now run a few steps of SGD!
Step17: torch.nn
The nn package is where all of the cool neural network stuff is. Layers, loss functions, etc.
Let's dive in.
Layers
Before, we manually defined our linear layers. PyTorch has them for you as sub-classes of nn.Module.
Step18: A note on convolution sizes
Step19: Updating is now as easy as
Step20: Testing loops are similar.
Step21: MNIST
Just going to run mnist! | Python Code:
# Start by importing torch
import torch
Explanation: 6.86x - Introduction to ML Packages (Part 2)
This tutorial is designed to provide a short introduction to deep learning with PyTorch.
You can start studying this tutorial as you work through unit 3 of the course.
For more resources, check out the PyTorch tutorials! There are many more in-depth examples available there.
Source code for this notebook hosted at: https://github.com/varal7/ml-tutorial
PyTorch
PyTorch is a flexible scientific computing package targetted towards gradient-based deep learning. Its low-level API closely follows NumPy. However, there are a several key additions:
GPU support!
Automatic differentiation!
Deep learning modules!
Data loading!
And other generally useful goodies.
If you don't have GPU enabled hardward, don't worry. Like NumPy, PyTorch runs pre-compiled, highly efficient C code to handle all intensive backend functions.
Go to pytorch.org to download the correct package for your computing environment.
End of explanation
# Construct a bunch of ones
some_ones = torch.ones(2, 2)
print(some_ones)
# Construct a bunch of zeros
some_zeros = torch.zeros(2, 2)
print(some_zeros)
# Construct some normally distributed values
some_normals = torch.randn(2, 2)
print(some_normals)
Explanation: Tensors
Tensors are PyTorch's equivalent of NumPy ndarrays.
End of explanation
torch_tensor = torch.randn(5, 5)
numpy_ndarray = torch_tensor.numpy()
back_to_torch = torch.from_numpy(numpy_ndarray)
Explanation: PyTorch tensors and NumPy ndarrays even share the same memory handles, so you can switch between the two types essentially for free:
End of explanation
# Create two tensors
a = torch.randn(5, 5)
b = torch.randn(5, 5)
print(a)
print(b)
# Indexing by i,j
another_tensor = a[2, 2]
print(another_tensor)
# The above returns a tensor type! To get the python value:
python_value = a[2, 2].item()
print(python_value)
# Getting a whole row or column or range
first_row = a[0, :]
first_column = a[:, 0]
combo = a[2:4, 2:4]
print(combo)
# Addition
c = a + b
# Elementwise multiplication: c_ij = a_ij * b_ij
c = a * b
# Matrix multiplication: c_ik = a_ij * b_jk
c = a.mm(b)
# Matrix vector multiplication
c = a.matmul(b[:, 0])
a = torch.randn(5, 5)
print(a.size())
vec = a[:, 0]
print(vec.size())
# Matrix multiple 5x5 * 5x5 --> 5x5
aa = a.mm(a)
# matrix vector 5x5 * 5 --> 5
v1 = a.matmul(vec)
print(v1)
vec_as_matrix = vec.view(5, 1)
v2 = a.mm(vec_as_matrix)
print(v2)
Explanation: Like NumPy, there are a zillion different operations you can do with tensors. Best thing to do is to go to https://pytorch.org/docs/stable/tensors.html if you know you want to do something to a tensor but don't know how!
We can cover a few major ones here:
In the Numpy tutorial, we have covered the basics of Numpy, numpy arrays, element-wise operations, matrices operations and generating random matrices.
In this section, we'll cover indexing, slicing and broadcasting, which are useful concepts that will be reused in Pandas and PyTorch.
End of explanation
# Add one to all elements
a.add_(1)
# Divide all elements by 2
a.div_(2)
# Set all elements to 0
a.zero_()
Explanation: In-place operations exist too, generally denoted by a trailing '_' (e.g. my_tensor.my_inplace_function_()).
End of explanation
# Add a dummy dimension, e.g. (n, m) --> (n, m, 1)
a = torch.randn(10, 10)
# At the end
print(a.unsqueeze(-1).size())
# At the beginning
print(a.unsqueeze(0).size())
# In the middle
print(a.unsqueeze(1).size())
# What you give you can take away
print(a.unsqueeze(0).squeeze(0).size())
# View things differently, i.e. flat
print(a.view(100, 1).size())
# Or not flat
print(a.view(50, 2).size())
# Copy data across a new dummy dimension!
a = torch.randn(2)
a = a.unsqueeze(-1)
print(a)
print(a.expand(2, 3))
Explanation: Manipulate dimensions...
End of explanation
# Check if you have it
do_i_have_cuda = torch.cuda.is_available()
if do_i_have_cuda:
print('Using fancy GPUs')
# One way
a = a.cuda()
a = a.cpu()
# Another way
device = torch.device('cuda')
a = a.to(device)
device = torch.device('cpu')
a = a.to(device)
else:
print('CPU it is!')
Explanation: If you have a GPU...
End of explanation
# Batched matrix multiply
a = torch.randn(10, 5, 5)
b = torch.randn(10, 5, 5)
# The same as for i in 1 ... 10, c_i = a[i].mm(b[i])
c = a.bmm(b)
print(c.size())
Explanation: And many more!
A Quick Note about Batching
In most ML applications we do mini-batch stochastic gradient descent instead of pure stochastic gradient descent.
Mini-batch SGD is a step between full gradient descent and stochastic gradient descent by computing the average gradient over a small number of examples.
In a nutshell, given n examples:
- Full GD: dL/dw = average over all n examples. One step per n examples.
- SGD: dL/dw = point estimate over a single example. n steps per n examples.
- Mini-batch SGD: dL/dw = average over m << n examples. n / m steps per n examples.
Advantages of mini-batch SGD include a more stable gradient estimate and computational efficiency on modern hardware (exploiting parallelism gives sub-linear to constant time complexity, especially on GPU).
In PyTorch, batched tensors are represented as just another dimension. Most of the deep learning modules assume batched tensors as input (even if the batch size is just 1).
End of explanation
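A tiny illustration of that convention (a sketch, not in the original notebook): dimension 0 is the batch dimension, so most nn modules map a (batch, features) tensor to a (batch, outputs) tensor.
import torch.nn as nn
batch = torch.randn(32, 10)    # 32 examples, 10 features each
layer = nn.Linear(10, 4)
print(layer(batch).shape)      # torch.Size([32, 4]): one output row per example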
# A tensor that will remember gradients
x = torch.randn(1, requires_grad=True)
print(x)
Explanation: Autograd: Automatic Differentiation!
Along with the flexible deep learning modules (to follow), this is the best part of using a package like PyTorch.
What is autograd? It automatically computes gradients. All those complicated functions you might be using for your model need gradients for back-propagation. Autograd does this auto-magically! (Sorry, you still need to do this by hand for homework 4.)
Let's warmup.
End of explanation
print(x.grad)
Explanation: At first the 'grad' parameter is None:
End of explanation
y = x.exp()
Explanation: Let's do an operation. Take y = e^x.
End of explanation
y.backward()
Explanation: To run the gradient computing magic, call '.backward()' on a variable.
End of explanation
print(x.grad, y)
Explanation: For all dependent variables {x_1, ..., x_n} that were used to compute y, dy/dx_i is computed and stored in the x_i.grad field.
Here dy/dx = e^x = y. Let's see!
End of explanation
# Compute another thingy with x.
z = x * 2
z.backward()
# Should be 2! But it will be 2 + e^x.
print(x.grad)
x_a = torch.randn(1, requires_grad=True)
x_b = torch.randn(1, requires_grad=True)
x = x_a * x_b
x1 = x ** 2
x2 = 1 / x1
x3 = x2.exp()
x4 = 1 + x3
x5 = x4.log()
x6 = x5 ** (1/3)
x6.backward()
print(x_a.grad)
print(x_b.grad)
x = torch.randn(1, requires_grad=True)
y = torch.tanh(x)
y.backward()
print(x.grad)
Explanation: Important! Remember to zero gradients before subsequent calls to backwards.
End of explanation
# Set our random seeds
import random
import numpy as np
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
# Get ourselves a simple dataset
from sklearn.datasets import make_classification
set_seed(7)
X, Y = make_classification(n_features=2, n_redundant=0, n_informative=1, n_clusters_per_class=1)
print('Number of examples: %d' % X.shape[0])
print('Number of features: %d' % X.shape[1])
# Take a peak
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(X[:, 0], X[:, 1], marker='o', c=Y, s=25, edgecolor='k')
plt.show()
# Convert data to PyTorch
X, Y = torch.from_numpy(X), torch.from_numpy(Y)
# Gotcha: "Expected object of scalar type Float but got scalar type Double"
# If you see this it's because numpy defaults to Doubles whereas pytorch has floats.
X, Y = X.float(), Y.float()
Explanation: Also important! Under the hood PyTorch stores all the stuff required to compute gradients (call stack, cached values, etc). If you want to save a variable just to keep it around (say for logging or plotting) remember to call .item() to get the python value and free the PyTorch machinery memory.
You can stop auto-grad from running in the background by using the torch.no_grad() context manager.
python
with torch.no_grad():
do_all_my_things()
Manual Neural Net + Autograd SGD Example (read this while studying unit 3)
Before we move on to the full PyTorch wrapper library, let's do a simple NN SGD example by hand.
We'll train a one hidden layer feed forward NN on a toy dataset.
End of explanation
# Define dimensions
num_feats = 2
hidden_size = 100
num_outputs = 1
# Learning rate
eta = 0.1
num_steps = 1000
Explanation: We'll train a one layer neural net to classify this dataset. Let's define the parameter sizes:
End of explanation
# Input to hidden weights
W1 = torch.randn(hidden_size, num_feats, requires_grad=True)
b1 = torch.zeros(hidden_size, requires_grad=True)
# Hidden to output
W2 = torch.randn(num_outputs, hidden_size, requires_grad=True)
b2 = torch.zeros(num_outputs, requires_grad=True)
# Group parameters
parameters = [W1, b1, W2, b2]
# Get random order
indices = torch.randperm(X.size(0))
# Keep running average losses for a learning curve?
avg_loss = []
# Run!
for step in range(num_steps):
# Get example
i = indices[step % indices.size(0)]
x_i, y_i = X[i], Y[i]
# Run example
hidden = torch.relu(W1.matmul(x_i) + b1)
y_hat = torch.sigmoid(W2.matmul(hidden) + b2)
# Compute loss binary cross entropy: -(y_i * log(y_hat) + (1 - y_i) * log(1 - y_hat))
# Epsilon for numerical stability
eps = 1e-6
loss = -(y_i * (y_hat + eps).log() + (1 - y_i) * (1 - y_hat + eps).log())
# Add to our running average learning curve. Don't forget .item()!
if step == 0:
avg_loss.append(loss.item())
else:
old_avg = avg_loss[-1]
new_avg = (loss.item() + old_avg * len(avg_loss)) / (len(avg_loss) + 1)
avg_loss.append(new_avg)
# Zero out all previous gradients
for param in parameters:
# It might start out as None
if param.grad is not None:
# In place
param.grad.zero_()
# Backward pass
loss.backward()
# Update parameters
for param in parameters:
# In place!
param.data = param.data - eta * param.grad
plt.plot(range(num_steps), avg_loss)
plt.ylabel('Loss')
plt.xlabel('Step')
plt.show()
Explanation: And now run a few steps of SGD!
End of explanation
import torch.nn as nn
import torch.nn.functional as F  # used by Net.forward below
# Linear layer: in_features, out_features
linear = nn.Linear(10, 10)
print(linear)
# Convolution layer: in_channels, out_channels, kernel_size, stride
conv = nn.Conv2d(1, 20, 5, 1)
print(conv)
# RNN: num_inputs, num_hidden, num_layers
rnn = nn.RNN(10, 10, 1)
print(rnn)
print(linear.weight)
print([k for k,v in conv.named_parameters()])
# Make our own model!
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input channel to 20 feature maps of 5x5 kernel. Stride 1.
self.conv1 = nn.Conv2d(1, 20, 5, 1)
# 20 input channels to 50 feature maps of 5x5 kernel. Stride 1.
self.conv2 = nn.Conv2d(20, 50, 5, 1)
# Fully connected: final 4x4x50 feature maps to 500 features
self.fc1 = nn.Linear(4*4*50, 500)
# From 500 to 10 classes
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
# Initialize it
model = Net()
Explanation: torch.nn
The nn package is where all of the cool neural network stuff is. Layers, loss functions, etc.
Let's dive in.
Layers
Before, we manually defined our linear layers. PyTorch has them for you as sub-classes of nn.Module.
End of explanation
import torch.optim as optim
# Initialize with model parameters
optimizer = optim.SGD(model.parameters(), lr=0.01)
Explanation: A note on convolution sizes:
Running a kernel over the image reduces the image height/width by kernel_size - 1.
Running a max pooling over the image reduces the image height/width by a factor of the kernel size.
So starting from a 28 x 28 image:
Run 5x5 conv --> 24 x 24
Apply 2x2 max pool --> 12 x 12
Run 5x5 conv --> 8 x 8
Apply 2x2 max pool --> 4 x 4
Optimizers
PyTorch handles all the optimizing too. There are several algorithms you can learn about later. Here's SGD:
End of explanation
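A quick check of the size arithmetic in the note above (an aside, not part of the original notebook); it assumes the Net instance `model` created earlier and just pushes a dummy image through the two conv/pool stages:
import torch.nn.functional as F
dummy = torch.randn(1, 1, 28, 28)                     # a batch of one 1-channel 28x28 image
h = F.max_pool2d(F.relu(model.conv1(dummy)), 2, 2)
print(h.shape)                                        # torch.Size([1, 20, 12, 12])
h = F.max_pool2d(F.relu(model.conv2(h)), 2, 2)
print(h.shape)                                        # torch.Size([1, 50, 4, 4])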
import tqdm
import torch.nn.functional as F
def train(model, train_loader, optimizer, epoch):
# For things like dropout
model.train()
# Avg loss
total_loss = 0
# Iterate through dataset
for data, target in tqdm.tqdm(train_loader):
# Zero grad
optimizer.zero_grad()
# Forward pass
output = model(data)
# Negative log likelihood loss function
loss = F.nll_loss(output, target)
# Backward pass
loss.backward()
total_loss += loss.item()
# Update
optimizer.step()
# Print average loss
print("Train Epoch: {}\t Loss: {:.6f}".format(epoch, total_loss / len(train_loader)))
Explanation: Updating is now as easy as:
python
loss = loss_fn()
optimizer.zero_grad()
loss.backward()
optimizer.step()
Full train and test loops
Let's look at a full train loop now.
End of explanation
def test(model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
Explanation: Testing loops are similar.
End of explanation
from torchvision import datasets, transforms
# See the torch DataLoader for more details.
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=32, shuffle=True)
for epoch in range(1, 10 + 1):
train(model, train_loader, optimizer, epoch)
test(model, test_loader)
Explanation: MNIST
Just going to run mnist!
End of explanation |
2,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python --> IPython --> (IPython) Notebook
What, Why ?
Python = language interpreter
IPython = enhanced interaction, shortcuts, ..
IPython notebook = html frontend + protocol
Jupyter notebook = language independent interface
multiple backends
Step1: This is a demo cell
$ \int_{a}^{b} f(x) dx $
Step2: computing backend (e.g. IPython) + Notebook
--> Importance for us (and scientists) ?
Jim Gray et al. "Scientific Data Management in the coming Decade" (2005)
"The goal is a smart notebook that empowers scientists to explore the worldโs data. Science data centers with computational resources to explore huge data archives will be central to enabling such notebooks. Because data is so large, and IO bandwidth is not keeping pace, moving code to data will be essential to performance. Consequently, science centers will remain the core vehicle and federations will likely be secondary. Science centers will provide both the archives and the institutional infrastructure to develop these peta-scale archives and the algorithms and tools to analyze them."
"Smart notebooks" + scientific toolboxes + (meta-)data --> open science
The Notebook Interface
Step4: Remark
Step6: From his Blog post (https
Step7: Rich syntactic cells
Step8: Interactive widgets | Python Code:
## Let's have a short look at the IPython web site:
from IPython.display import display, Image, HTML
HTML('<iframe src=http://ipython.org width=1000 height=400> </iframe>')
Explanation: Python --> IPython --> (IPython) Notebook
What, Why ?
Python = language interpreter
IPython = enhanced interaction, shortcuts, ..
IPython notebook = html frontend + protocol
Jupyter notebook = language independent interface
multiple backends: python, julia, R, Haskell, Ruby, bash, ....
End of explanation
40 + 90
Explanation: This is a demo cell
$ \int_{a}^{b} f(x) dx $
End of explanation
HTML('<iframe src=http://nbviewer.ipython.org/github/ipython/ipython-in-depth/blob/master/examples/Notebook/What%20is%20the%20IPython%20Notebook.ipynb width=1000 height=600></iframe>')
Explanation: computing backend (e.g. IPython) + Notebook
--> Importance for us (and scientists) ?
Jim Gray et al. "Scientific Data Management in the coming Decade" (2005)
"The goal is a smart notebook that empowers scientists to explore the worldโs data. Science data centers with computational resources to explore huge data archives will be central to enabling such notebooks. Because data is so large, and IO bandwidth is not keeping pace, moving code to data will be essential to performance. Consequently, science centers will remain the core vehicle and federations will likely be secondary. Science centers will provide both the archives and the institutional infrastructure to develop these peta-scale archives and the algorithms and tools to analyze them."
"Smart notebooks" + scientific toolboxes + (meta-)data --> open science
The Notebook Interface
End of explanation
HTML('''<iframe
src=http://www.nature.com/news/my-digital-toolbox-climate-scientist-damien-irving-on-python-libraries-1.16805
width=1000 height=600>
</iframe>''')
Explanation: Remark: This presentation is an interactive notebook document !!Generation of presentations from notebooks:
this notebook:
- !jupyter nbconvert --to slides/html/pdf (via latex) ..
Researchers View: "Toolbox"
End of explanation
HTML('''<iframe
src=http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261
width=1000 height=400>
</iframe>''')
Explanation: From his Blog post (https://drclimate.wordpress.com/2014/10/30/software-installation-explained/):
...
The software installation problem is a source of frustration for all of us and is a key roadblock
on the path to open science, so it's great that solutions like Binstar are starting to pop up.
...
--> see the TGIF talks by Carsten on conda and binstar ..
Generating Publications (or addons containing reproducible code) from Notebooks:
Nature has a news item about IPython notebook
http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261 accompanied by live sample notebook: http://www.nature.com/news/ipython-interactive-demo-7.21492
- reproducible academic publications:
https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks#reproducible-academic-publications
End of explanation
%lsmagic
from IPython.display import HTML, SVG
HTML('''
<table style="border: 2px solid black;">
''' +
''.join(['<tr>' +
''.join(['<td>{row},{col}</td>'.format(
row=row, col=col
) for col in range(5)]) +
'</tr>' for row in range(5)]) +
'''
</table>
''')
Explanation: Rich syntactic cells:
This is rich text with links,
equations:
$$\hat{f}(\xi) = \int_{-\infty}^{+\infty} f(x)\,
\mathrm{e}^{-i \xi x}$$
code with syntax highlighting:
python
print("Hello world!")
and images:
Some example cells
End of explanation
from ipywidgets import interact
import matplotlib.pyplot as plt
import networkx as nx
%matplotlib inline
# wrap a few graph generation functions so they have the same signature
def random_lobster(n, m, k, p):
return nx.random_lobster(n, p, p / m)
def powerlaw_cluster(n, m, k, p):
return nx.powerlaw_cluster_graph(n, m, p)
def erdos_renyi(n, m, k, p):
return nx.erdos_renyi_graph(n, p)
def newman_watts_strogatz(n, m, k, p):
return nx.newman_watts_strogatz_graph(n, k, p)
def plot_random_graph(n, m, k, p, generator):
g = generator(n, m, k, p)
nx.draw(g)
plt.show()
interact(plot_random_graph, n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),
generator={'lobster': random_lobster,'power law': powerlaw_cluster,
'Newman-Watts-Strogatz': newman_watts_strogatz, u'Erdős-Rényi': erdos_renyi,});
Explanation: Interactive widgets: Example
End of explanation |
2,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal-Based Data Collection
A practical approach to robot reinforcement learning is to first collect a large batch of real or simulated robot interaction data,
using some data collection policy, and then learn from this data to perform various tasks, using offline learning algorithms.
In this notebook, we will demonstrate how to collect a diverse dataset for a simple robotics manipulation task
using the algorithms detailed in the following paper
Step1: Then, we define the training schedule for the agent. improve_steps dictates the number of samples in the final data-set.
Step2: In this example, we will be using the goal-based algorithm for data-collection. Therefore, we populate
the TD3GoalBasedAgentParameters class with our desired algorithm specific parameters.
The goal-based data collection is based on TD3; using this class you can change the TD3-specific parameters as well.
A detailed description of the goal-based and TD3 algorithm specific parameters can be found in
agents/td3_exp_agent.py and agents/td3_agent.py respectively.
Step3: Next, we'll define the networks' architecture and parameters as they appear in the paper.
Step4: The last thing we need to define is the environment parameters for the manipulation task.
This environment is a 7DoF Franka Panda robotic arm with a closed gripper and cartesian
position control of the end-effector. The robot is positioned on a table, and a cube object with colored sides is placed in
front of it.
Step5: Finally, we create the graph manager and call graph_manager.improve() in order to start the data collection.
Step6: Once the data collection is complete, the data-set will be saved to the path specified by agent_params.algorithm.replay_buffer_save_path.
At this point, the data can be used to learn any downstream task you define on that environment.
The script below shows a visualization of the data-set. The dots represent positions of the cube on the table as seen in the data-set, and the color corresponds to the color of the face at the top. The number at the top of each panel is the number of dots it contains for that color.
First we load the data-set from disk. Note that this can take several minutes to complete.
Step7: Now we can run the visualization script | Python Code:
from rl_coach.agents.td3_exp_agent import TD3GoalBasedAgentParameters
from rl_coach.architectures.embedder_parameters import InputEmbedderParameters
from rl_coach.architectures.layers import Dense, Conv2d, BatchnormActivationDropout, Flatten
from rl_coach.base_parameters import EmbedderScheme
from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps
from rl_coach.environments.robosuite_environment import RobosuiteGoalBasedExpEnvironmentParameters, \
OptionalObservations
from rl_coach.filters.filter import NoInputFilter, NoOutputFilter
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters
from rl_coach.architectures.head_parameters import RNDHeadParameters
from rl_coach.schedules import LinearSchedule
Explanation: Goal-Based Data Collection
A practical approach to robot reinforcement learning is to first collect a large batch of real or simulated robot interaction data,
using some data collection policy, and then learn from this data to perform various tasks, using offline learning algorithms.
In this notebook, we will demonstrate how to collect a diverse dataset for a simple robotics manipulation task
using the algorithms detailed in the following paper:
Efficient Self-Supervised Data Collection for Offline Robot Learning.
The implementation is based on the Robosuite simulator, which should be installed before running this notebook. Follow the instructions in the Coach readme here.
Presets with predefined parameters for all three algorithms shown in the paper can be found here:
Random Agent: presets/RoboSuite_CubeExp_Random.py
Intrinsic Reward Agent: presets/RoboSuite_CubeExp_TD3_Intrinsic_Reward.py
Goal-Based Agent: presets/RoboSuite_CubeExp_TD3_Goal_Based.py
You can run those presets using the command line:
coach -p RoboSuite_CubeExp_TD3_Goal_Based
Preliminaries
First, get the required imports and other general settings we need for this notebook.
End of explanation
####################
# Graph Scheduling #
####################
schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps(300000)
schedule_params.steps_between_evaluation_periods = TrainingSteps(300000)
schedule_params.evaluation_steps = EnvironmentEpisodes(0)
schedule_params.heatup_steps = EnvironmentSteps(1000)
Explanation: Then, we define the training schedule for the agent. improve_steps dictates the number of samples in the final data-set.
End of explanation
#########
# Agent #
#########
agent_params = TD3GoalBasedAgentParameters()
agent_params.algorithm.use_non_zero_discount_for_terminal_states = False
agent_params.algorithm.identity_goal_sample_rate = 0.04
agent_params.exploration.noise_schedule = LinearSchedule(1.5, 0.5, 300000)
agent_params.algorithm.rnd_sample_size = 2000
agent_params.algorithm.rnd_batch_size = 500
agent_params.algorithm.rnd_optimization_epochs = 4
agent_params.algorithm.td3_training_ratio = 1.0
agent_params.algorithm.identity_goal_sample_rate = 0.0
agent_params.algorithm.env_obs_key = 'camera'
agent_params.algorithm.agent_obs_key = 'obs-goal'
agent_params.algorithm.replay_buffer_save_steps = 25000
agent_params.algorithm.replay_buffer_save_path = './Resources'
agent_params.input_filter = NoInputFilter()
agent_params.output_filter = NoOutputFilter()
Explanation: In this example, we will be using the goal-based algorithm for data-collection. Therefore, we populate
the TD3GoalBasedAgentParameters class with our desired algorithm specific parameters.
The goal-based data collection is based on TD3; using this class you can change the TD3-specific parameters as well.
A detailed description of the goal-based and TD3 algorithm specific parameters can be found in
agents/td3_exp_agent.py and agents/td3_agent.py respectively.
End of explanation
# Camera observation pre-processing network scheme
camera_obs_scheme = [
Conv2d(32, 8, 4),
BatchnormActivationDropout(activation_function='relu'),
Conv2d(64, 4, 2),
BatchnormActivationDropout(activation_function='relu'),
Conv2d(64, 3, 1),
BatchnormActivationDropout(activation_function='relu'),
Flatten(),
Dense(256),
BatchnormActivationDropout(activation_function='relu')
]
# Actor
actor_network = agent_params.network_wrappers['actor']
actor_network.input_embedders_parameters = {
'measurements': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
agent_params.algorithm.agent_obs_key: InputEmbedderParameters(scheme=camera_obs_scheme, activation_function='none')
}
actor_network.middleware_parameters.scheme = [Dense(300), Dense(200)]
actor_network.learning_rate = 1e-4
# Critic
critic_network = agent_params.network_wrappers['critic']
critic_network.input_embedders_parameters = {
'action': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
'measurements': InputEmbedderParameters(scheme=EmbedderScheme.Empty),
agent_params.algorithm.agent_obs_key: InputEmbedderParameters(scheme=camera_obs_scheme, activation_function='none')
}
critic_network.middleware_parameters.scheme = [Dense(400), Dense(300)]
critic_network.learning_rate = 1e-4
# RND
agent_params.network_wrappers['predictor'].input_embedders_parameters = \
{agent_params.algorithm.env_obs_key: InputEmbedderParameters(scheme=EmbedderScheme.Empty,
input_rescaling={'image': 1.0},
flatten=False)}
agent_params.network_wrappers['constant'].input_embedders_parameters = \
{agent_params.algorithm.env_obs_key: InputEmbedderParameters(scheme=EmbedderScheme.Empty,
input_rescaling={'image': 1.0},
flatten=False)}
agent_params.network_wrappers['predictor'].heads_parameters = [RNDHeadParameters(is_predictor=True)]
Explanation: Next, we'll define the networks' architecture and parameters as they appear in the paper.
End of explanation
###############
# Environment #
###############
env_params = RobosuiteGoalBasedExpEnvironmentParameters(level='CubeExp')
env_params.robot = 'Panda'
env_params.custom_controller_config_fpath = '../rl_coach/environments/robosuite/osc_pose.json'
env_params.base_parameters.optional_observations = OptionalObservations.CAMERA
env_params.base_parameters.render_camera = 'frontview'
env_params.base_parameters.camera_names = 'agentview'
env_params.base_parameters.camera_depths = False
env_params.base_parameters.horizon = 200
env_params.base_parameters.ignore_done = False
env_params.base_parameters.use_object_obs = True
env_params.frame_skip = 1
env_params.base_parameters.control_freq = 2
env_params.base_parameters.camera_heights = 84
env_params.base_parameters.camera_widths = 84
env_params.extra_parameters = {'hard_reset': False}
Explanation: The last thing we need to define is the environment parameters for the manipulation task.
This environment is a 7DoF Franka Panda robotic arm with a closed gripper and Cartesian
position control of the end-effector. The robot is positioned on a table, and a cube object with colored sides is placed in
front of it.
End of explanation
graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params, schedule_params=schedule_params)
graph_manager.improve()
Explanation: Finally, we create the graph manager and call graph_manager.improve() in order to start the data collection.
End of explanation
import joblib
print('Loading data-set (this can take several minutes)...')
rb_path = os.path.join('./Resources', 'RB_TD3GoalBasedAgent.joblib.bz2')
episodes = joblib.load(rb_path)
print('Done')
Explanation: Once the data collection is complete, the data-set will be saved to the path specified by agent_params.algorithm.replay_buffer_save_path.
At this point, the data can be used to learn any downstream task you define on that environment.
The script below shows a visualization of the data-set. Each dot represents a position of the cube on the table as seen in the data-set, and its color corresponds to the color of the cube face at the top. The number above each panel gives the number of dots that panel contains for that color.
First we load the data-set from disk. Note that this can take several minutes to complete.
End of explanation
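# Quick sanity check (a sketch; it assumes the saved object behaves like a list of
# episodes, each iterable over transitions, which is what the visualization below expects)
print(len(episodes), 'episodes loaded')
print(sum(sum(1 for _ in episode) for episode in episodes), 'transitions in total')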
import os
import numpy as np
from collections import OrderedDict
from enum import IntEnum
from pylab import subplot
from gym.envs.robotics.rotations import quat2euler, mat2euler, quat2mat
import matplotlib.pyplot as plt
class CubeColor(IntEnum):
YELLOW = 0
CYAN = 1
WHITE = 2
RED = 3
GREEN = 4
BLUE = 5
UNKNOWN = 6
x_range = [-0.3, 0.3]
y_range = [-0.3, 0.3]
COLOR_MAP = OrderedDict([
(int(CubeColor.YELLOW), 'yellow'),
(int(CubeColor.CYAN), 'cyan'),
(int(CubeColor.WHITE), 'white'),
(int(CubeColor.RED), 'red'),
(int(CubeColor.GREEN), 'green'),
(int(CubeColor.BLUE), 'blue'),
(int(CubeColor.UNKNOWN), 'black'),
])
# Mapping between (subset of) euler angles to top face color, based on the initial cube rotation
COLOR_ROTATION_MAP = OrderedDict([
(CubeColor.YELLOW, (0, 2, [np.array([0, 0]),
np.array([np.pi, np.pi]), np.array([-np.pi, -np.pi]),
np.array([-np.pi, np.pi]), np.array([np.pi, -np.pi])])),
(CubeColor.CYAN, (0, 2, [np.array([0, np.pi]), np.array([0, -np.pi]),
np.array([np.pi, 0]), np.array([-np.pi, 0])])),
(CubeColor.WHITE, (1, 2, [np.array([-np.pi / 2])])),
(CubeColor.RED, (1, 2, [np.array([np.pi / 2])])),
(CubeColor.GREEN, (0, 2, [np.array([np.pi / 2, 0])])),
(CubeColor.BLUE, (0, 2, [np.array([-np.pi / 2, 0])])),
])
def get_cube_top_color(cube_quat, atol):
euler = mat2euler(quat2mat(cube_quat))
for color, (start_dim, end_dim, xy_rotations) in COLOR_ROTATION_MAP.items():
if any(list(np.allclose(euler[start_dim:end_dim], xy_rotation, atol=atol) for xy_rotation in xy_rotations)):
return color
return CubeColor.UNKNOWN
def pos2cord(x, y):
x = max(min(x, x_range[1]), x_range[0])
y = max(min(y, y_range[1]), y_range[0])
x = int(((x - x_range[0])/(x_range[1] - x_range[0]))*99)
y = int(((y - y_range[0])/(y_range[1] - y_range[0]))*99)
return x, y
pos_idx = 25
quat_idx = 28
positions = []
colors = []
print('Extracting cube positions and colors...')
for episode in episodes:
for transition in episode:
x, y = transition.state['measurements'][pos_idx:pos_idx+2]
positions.append([x, y])
angle = quat2euler(transition.state['measurements'][quat_idx:quat_idx+4])
colors.append(int(get_cube_top_color(transition.state['measurements'][quat_idx:quat_idx+4], np.pi / 4)))
x_cord, y_cord = pos2cord(x, y)
x, y = episode[-1].next_state['measurements'][pos_idx:pos_idx+2]
positions.append([x, y])
colors.append(int(get_cube_top_color(episode[-1].next_state['measurements'][quat_idx:quat_idx+4], np.pi / 4)))
x_cord, y_cord = pos2cord(x, y)
print('Done')
fig = plt.figure(figsize=(15.0, 5.0))
axes = []
for j in range(6):
axes.append(subplot(1, 6, j + 1))
xy = np.array(positions)[np.array(colors) == list(COLOR_MAP.keys())[j]]
axes[-1].scatter(xy[:, 1], xy[:, 0], c=COLOR_MAP[j], alpha=0.01, edgecolors='black')
plt.xlim(y_range)
plt.ylim(x_range)
plt.xticks([])
plt.yticks([])
axes[-1].set_aspect('equal', adjustable='box')
title = 'N=' + str(xy.shape[0])
plt.title(title)
for ax in axes:
ax.invert_yaxis()
plt.show()
Explanation: Now we can run the visualization script:
End of explanation |
2,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Obtain the SELU parameters for arbitrary fixed points
Author
Step2: Function to obtain the parameters for the SELU with arbitrary fixed point (mean variance)
Step4: Adjust the SELU function and Dropout to your new parameters
Step5: For completeness | Python Code:
import numpy as np
from scipy.special import erf,erfc
from sympy import Symbol, solve, nsolve
Explanation: Obtain the SELU parameters for arbitrary fixed points
Author: Guenter Klambauer, 2017
tested under Python 3.5
End of explanation
def getSeluParameters(fixedpointMean=0,fixedpointVar=1):
Finding the parameters of the SELU activation function. The function returns alpha and lambda for the desired fixed point.
import sympy
from sympy import Symbol, solve, nsolve
aa = Symbol('aa')
ll = Symbol('ll')
nu = fixedpointMean
tau = fixedpointVar
mean = 0.5*ll*(nu + np.exp(-nu**2/(2*tau))*np.sqrt(2/np.pi)*np.sqrt(tau) + \
nu*erf(nu/(np.sqrt(2*tau))) - aa*erfc(nu/(np.sqrt(2*tau))) + \
np.exp(nu+tau/2)*aa*erfc((nu+tau)/(np.sqrt(2*tau))))
var = 0.5*ll**2*(np.exp(-nu**2/(2*tau))*np.sqrt(2/np.pi*tau)*nu + (nu**2+tau)* \
(1+erf(nu/(np.sqrt(2*tau)))) + aa**2 *erfc(nu/(np.sqrt(2*tau))) \
- aa**2 * 2 *np.exp(nu+tau/2)*erfc((nu+tau)/(np.sqrt(2*tau)))+ \
aa**2*np.exp(2*(nu+tau))*erfc((nu+2*tau)/(np.sqrt(2*tau))) ) - mean**2
eq1 = mean - nu
eq2 = var - tau
res = nsolve( (eq2, eq1), (aa,ll), (1.67,1.05))
return float(res[0]),float(res[1])
### To recover the parameters of the SELU with mean zero and unit variance
getSeluParameters(0,1)
### To obtain new parameters for mean zero and variance 2
myFixedPointMean = -0.1
myFixedPointVar = 2.0
myAlpha, myLambda = getSeluParameters(myFixedPointMean,myFixedPointVar)
getSeluParameters(myFixedPointMean,myFixedPointVar)
Explanation: Function to obtain the parameters for the SELU with arbitrary fixed point (mean variance)
End of explanation
def selu(x):
with ops.name_scope('elu') as scope:
alpha = myAlpha
scale = myLambda
return scale*tf.where(x>=0.0, x, alpha*tf.nn.elu(x))
def dropout_selu(x, rate, alpha= -myAlpha*myLambda, fixedPointMean=myFixedPointMean, fixedPointVar=myFixedPointVar,
noise_shape=None, seed=None, name=None, training=False):
Dropout to a value with rescaling.
def dropout_selu_impl(x, rate, alpha, noise_shape, seed, name):
keep_prob = 1.0 - rate
x = ops.convert_to_tensor(x, name="x")
if isinstance(keep_prob, numbers.Real) and not 0 < keep_prob <= 1:
raise ValueError("keep_prob must be a scalar tensor or a float in the "
"range (0, 1], got %g" % keep_prob)
keep_prob = ops.convert_to_tensor(keep_prob, dtype=x.dtype, name="keep_prob")
keep_prob.get_shape().assert_is_compatible_with(tensor_shape.scalar())
alpha = ops.convert_to_tensor(alpha, dtype=x.dtype, name="alpha")
keep_prob.get_shape().assert_is_compatible_with(tensor_shape.scalar())
if tensor_util.constant_value(keep_prob) == 1:
return x
noise_shape = noise_shape if noise_shape is not None else array_ops.shape(x)
random_tensor = keep_prob
random_tensor += random_ops.random_uniform(noise_shape, seed=seed, dtype=x.dtype)
binary_tensor = math_ops.floor(random_tensor)
ret = x * binary_tensor + alpha * (1-binary_tensor)
a = tf.sqrt(fixedPointVar / (keep_prob *((1-keep_prob) * tf.pow(alpha-fixedPointMean,2) + fixedPointVar)))
b = fixedPointMean - a * (keep_prob * fixedPointMean + (1 - keep_prob) * alpha)
ret = a * ret + b
ret.set_shape(x.get_shape())
return ret
with ops.name_scope(name, "dropout", [x]) as name:
return utils.smart_cond(training,
lambda: dropout_selu_impl(x, rate, alpha, noise_shape, seed, name),
lambda: array_ops.identity(x))
import tensorflow as tf
import numpy as np
from __future__ import absolute_import, division, print_function
import numbers
from tensorflow.contrib import layers
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import array_ops
from tensorflow.python.layers import utils
x = tf.Variable(tf.random_normal([10000],mean=myFixedPointMean, stddev=np.sqrt(myFixedPointVar)))
w = selu(x)
y = dropout_selu(w,0.2,training=True)
init = tf.global_variables_initializer()
gpu_options = tf.GPUOptions(allow_growth=True)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
sess.run(init)
z,zz, zzz = sess.run([x, w, y])
#print(z)
#print(zz)
print("mean/var should be at:", myFixedPointMean, "/", myFixedPointVar)
print("Input data mean/var: ", "{:.12f}".format(np.mean(z)), "/", "{:.12f}".format(np.var(z)))
print("After selu: ", "{:.12f}".format(np.mean(zz)), "/", "{:.12f}".format(np.var(zz)))
print("After dropout mean/var", "{:.12f}".format(np.mean(zzz)), "/", "{:.12f}".format(np.var(zzz)))
Explanation: Adjust the SELU function and Dropout to your new parameters
End of explanation
myAlpha = -np.sqrt(2/np.pi) / (np.exp(0.5) * erfc(1/np.sqrt(2))-1 )
myLambda = (1-np.sqrt(np.exp(1))*erfc(1/np.sqrt(2))) * \
np.sqrt( 2*np.pi/ (2 + np.pi -2*np.sqrt(np.exp(1))*(2+np.pi)*erfc(1/np.sqrt(2)) + \
np.exp(1)*np.pi*erfc(1/np.sqrt(2))**2 + 2*np.exp(2)*erfc(np.sqrt(2))))
print("Alpha parameter of the SELU: ", myAlpha)
print("Lambda parameter of the SELU: ", myLambda)
Explanation: For completeness: These are the correct expressions for mean zero and unit variance
End of explanation |
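# Quick cross-check (sketch): the closed-form alpha/lambda above should agree with what
# the numerical solver returns for the standard zero-mean, unit-variance fixed point,
# i.e. alpha ~ 1.6733 and lambda ~ 1.0507.
print(np.allclose(getSeluParameters(0, 1), (myAlpha, myLambda), atol=1e-4))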
2,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook Title
Notebook description.
Question 1
Question 1 text.
Step1: Question 2
Question 2 text. | Python Code:
print("Student 1 answers question 1.")
print("Student 2 answers question 1.")
print("Student 3 answers question 1.")
Explanation: Notebook Title
Notebook description.
Question 1
Question 1 text.
End of explanation
print("Student 1 answers question 2.")
print("Student 3 answers question 2.")
print("Student 4 answers question 2.")
Explanation: Question 2
Question 2 text.
End of explanation |
2,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 1
Step1: So we have a general idea of what things look like. Let's convert the readings to numpy arrays.
Step2: And for our purposes we have focused and non-focused. Let's pool all the math tasks together. And have relax as the other category.
Step3: Ok, now let's try out an SVM on one of our subjects.
Step4: Split each power reading into it's own column
Step5: So this is pretty poor performance. To be fair we haven't tried very hard to optimize things. But we already know working with the raw spectrum and processing that is how the literature proceeds. Don't even know how one gets these power readings (something about the main frequency bands, blah blah blah)
So lets fast fourier transform the raw signal and get our own poewr spectrum. And as per the literature review team's findings, we should average them out and log bin them.
Step6: Wow. Excellent score. Not bad for only one real input
Step7: As expected though, we don't do well when combining all our test subjects. We're going to need to train the app on individuals.
Step8: LOL
that settles it then...
Step9: def cross_val_svm (X,y,n,kern='rbf') | Python Code:
import json
import pandas as pd
import tensorflow as tf
import numpy as np
df = pd.read_csv("kaggle_data/eeg-data.csv")
df.head()
df.loc[df.label=='math1']
Explanation: Step 1: Pull in Data, preprocess it.
Here I'm using example data from the BioSENSE research group at UC Berekely, collected using a Neurosky device. Simpler than our device, as it only has one sensor at fp2, in comparison to our four. But the essence of the code should be the same.
End of explanation
df.raw_values = df.raw_values.map(json.loads)
df.eeg_power = df.eeg_power.map(json.loads)
Explanation: So we have a general idea of what things look like. Let's convert the readings to numpy arrays.
End of explanation
df.label.unique()
relaxed = df[df.label == 'relax']
focused = df[(df.label == 'math1') |
(df.label == 'math2') |
(df.label == 'math3') |
(df.label == 'math4') |
(df.label == 'math5') |
(df.label == 'math6') |
(df.label == 'math7') |
(df.label == 'math8') |
(df.label == 'math9') |
(df.label == 'math10') |
(df.label == 'math11') |
(df.label == 'math12')]
print(len(relaxed))
print(len(focused))
Explanation: And for our purposes we have focused and non-focused. Let's pool all the math tasks together. And have relax as the other category.
End of explanation
df_grouped = pd.concat([relaxed,focused])
len(df_grouped)
df_grouped[df_grouped['id']==24]
df_clean = df_grouped[['id','eeg_power', 'raw_values', 'label']]
df_clean.loc[:,'label'][df_clean.label != 'relax'] = 'focus'
df_clean
df_one_subject = df_clean[df_clean['id']==1]
len(df_one_subject)
X = df_one_subject.drop(['label','raw_values'],1)
y = df_one_subject['label']
len(X)
Explanation: Ok, now let's try out an SVM on one of our subjects.
End of explanation
eegpower_series = pd.Series(X['eeg_power'])
eeg_cols=pd.DataFrame(eegpower_series.tolist())
eeg_cols['id'] = X['id'].values
eeg_cols = eeg_cols.drop('id',1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(eeg_cols,y,test_size=0.1)
X_train
from sklearn.model_selection import cross_val_score
from sklearn import svm
def cross_val_svm (X,y,n,kern='rbf'):
clf = svm.SVC(kernel=kern)
scores = cross_val_score(clf, X, y, cv=n)
return scores
cross_val_svm(X_train,y_train,4)
Explanation: Split each power reading into its own column
End of explanation
from scipy import stats
from scipy.interpolate import interp1d
import itertools
def spectrum (vector):
#get power spectrum from array of raw EEG reading
fourier = np.fft.fft(vector)
pow_spec = np.abs(fourier)**2
pow_spec = pow_spec[:len(pow_spec)//2] #look this up j.i.c.
return pow_spec
def binned (pspectra, n):
    #compress an array of power spectra into vectors of length n
l = len(pspectra)
array = np.zeros([l,n])
for i,ps in enumerate(pspectra):
x = np.arange(1,len(ps)+1)
f = interp1d(x,ps)#/np.sum(ps))
array[i] = f(np.arange(1, n+1))
index = np.argwhere(array[:,0]==-1)
array = np.delete(array,index,0)
return array
def feature_vector (readings, bins=100): # A function we apply to each group of power spectra
bins = binned(list(map(spectrum, readings)), bins)
return np.log10(np.mean(bins, 0))
def grouper(n, iterable, fillvalue=None):
#"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
def vectors (df):
return [feature_vector(group) for group in list(grouper(3, df.raw_values.tolist()))[:-1]]
raw_reads = df_one_subject.raw_values[:3]
raw_reads
df_one_subject
data = vectors(df_one_subject[df_one_subject.label=='relax'])
data
data2 = vectors(df_one_subject[df_one_subject.label=='focus'])
data2
def vectors_labels (list1, list2):
def label (l):
return lambda x: l
X = list1 + list2
y = list(map(label(0), list1)) + list(map(label(1), list2))
return X, y
X,y =vectors_labels(data,data2)
y
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.1)
np.mean(cross_val_svm(X_train,y_train,7))
Explanation: So this is pretty poor performance. To be fair we haven't tried very hard to optimize things. But we already know working with the raw spectrum and processing that is how the literature proceeds. Don't even know how one gets these power readings (something about the main frequency bands, blah blah blah)
So let's fast-Fourier-transform the raw signal and get our own power spectrum. And as per the literature review team's findings, we should average them out and log-bin them.
End of explanation
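# A tiny hypothetical check of the helpers defined above: build three fake "raw
# readings" (noisy sine waves), turn them into a single log-binned feature vector
# and confirm its length matches the requested number of bins.
demo_readings = [np.sin(2 * np.pi * 10 * np.linspace(0, 1, 512)) +
                 0.1 * np.random.randn(512) for _ in range(3)]
print(feature_vector(demo_readings, bins=100).shape)  # expected: (100,)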
from sklearn import preprocessing
X_train = preprocessing.scale(X_train)
cross_val_svm(X_train,y_train,7).mean()
data = vectors(df_clean[df_clean.label=='relax'])
data2 = vectors(df_clean[df_clean.label=='focus'])
X,y = vectors_labels(data,data2)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
np.mean(cross_val_svm(X_train,y_train,5))
Explanation: Wow. Excellent score. Not bad for only one real input
End of explanation
X_train = preprocessing.scale(X_train)
cross_val_svm(X_train,y_train,5).mean()
Explanation: As expected though, we don't do well when combining all our test subjects. We're going to need to train the app on individuals.
End of explanation
from sklearn.ensemble import RandomForestClassifier
rf= RandomForestClassifier(n_estimators = 200,max_depth = 10)
data = vectors(df_clean[df_clean.label=='relax'])
data2 = vectors(df_clean[df_clean.label=='focus'])
X,y = vectors_labels(data,data2)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
np.mean(cross_val_score(rf,X_train,y_train))
Explanation: LOL
that settles it then...
End of explanation
def subject_scores (subject,kern):
f = focused[focused['id']==subject]
r = relaxed[relaxed['id']==subject]
X,y = vectors_labels(vectors(f),vectors(r))
X=preprocessing.scale(X)
return cross_val_svm(X,y,7,kern).mean()
for s in range(1,31):
print("Subject ",s, " score is:", subject_scores(s,'linear'))
import matplotlib.pyplot as plt
scores = []
for s in range(1,31):
scores.append(subject_scores(s,'linear'))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(scores)
print("Average score is: ", np.mean(scores))
print("Standard deviation is: ", np.std(scores))
scores = []
for s in range(1,31):
scores.append(subject_scores(s,'rbf'))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(scores)
print("Average score is: ", np.mean(scores))
print("Standard deviation is: ", np.std(scores))
scores = []
for s in range(1,31):
scores.append(subject_scores(s,'poly'))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(scores)
print("Average score is: ", np.mean(scores))
print("Standard deviation is: ", np.std(scores))
scores = []
for s in range(1,31):
scores.append(subject_scores(s,'sigmoid'))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(scores)
print("Average score is: ", np.mean(scores))
print("Standard deviation is: ", np.std(scores))
Explanation: def cross_val_svm (X,y,n,kern='rbf'):
End of explanation |
2,991 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have the following torch tensor: | Problem:
import numpy as np
import pandas as pd
import torch
t, idx = load_data()
assert type(t) == torch.Tensor
assert type(idx) == np.ndarray
idx = 1 - idx
idxs = torch.from_numpy(idx).long().unsqueeze(1)
# or torch.from_numpy(idxs).long().view(-1,1)
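# gather(1, idxs) picks, for each row i of t, the element in the column given by
# idxs[i]; squeeze(1) then drops the singleton dimension to leave a 1-D result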
result = t.gather(1, idxs).squeeze(1) |
2,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we duplicate the neural network created in the TensorFlow example in keras. I found the model in keras to be conceptually cleaner.
Step1: Set up the parameters for the model. Nothing too exciting here.
Step2: Below we build the neural network. This is the same network used in the TensorFlow example.
Step3: Now we just run the neural network. Note that one epoch here is a run through the entire set of training data, so it takes a while. | Python Code:
from __future__ import print_function
from __future__ import division
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
Explanation: In this notebook, we duplicate the neural network created in the TensorFlow example in keras. I found the model in keras to be conceptually cleaner.
End of explanation
batch_size = 500
nb_classes = 10
nb_epoch = 1
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
nb2_filters = 64
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 5
# the data, shuffled and split between tran and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255 # normalize the data
X_test /= 255 # normalize the data
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
Explanation: Set up the parameters for the model. Nothing too exciting here.
End of explanation
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb2_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
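# Optionally inspect the layer shapes and parameter counts (assuming your Keras
# version provides summary()):
model.summary()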
Explanation: Below we build the neural network. This is the same network used in the TensorFlow example.
End of explanation
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
show_accuracy=True, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, show_accuracy=True, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
Explanation: Now we just run the neural network. Note that one epoch here is a run through the entire set of training data, so it takes a while.
End of explanation |
2,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 4 - stripy gradients on the sphere
SSRFPACK is a Fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points on the surface of a sphere. It employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients.
Notebook contents
Analytic function and derivatives
Evaluating accuracy
The next example is Ex5-Smoothing
Define a computational mesh
Use the (usual) icosahedron with face points included.
Step1: Analytic function
Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse
Step2: Derivatives of solution compared to analytic values
The gradient_lonlat method of the sTriangulation takes a data array reprenting values on the mesh vertices and returns the lon and lat derivatives. There is an equivalent gradient_xyz method which returns the raw derivatives in Cartesian form. Although this is generally less useful, if you are computing the slope (for example) that can be computed in either coordinate system and may cross the pole, consider using the Cartesian form. | Python Code:
import stripy as stripy
mesh = stripy.spherical_meshes.icosahedral_mesh(refinement_levels=4, include_face_points=True)
print(mesh.npoints)
Explanation: Example 4 - stripy gradients on the sphere
SSRFPACK is a Fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points on the surface of a sphere. It employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients.
Notebook contents
Analytic function and derivatives
Evaluating accuracy
The next example is Ex5-Smoothing
Define a computational mesh
Use the (usual) icosahedron with face points included.
End of explanation
import numpy as np
def analytic(lons, lats, k1, k2):
return np.cos(k1*lons) * np.sin(k2*lats)
def analytic_ddlon(lons, lats, k1, k2):
return -k1 * np.sin(k1*lons) * np.sin(k2*lats) / np.cos(lats)
def analytic_ddlat(lons, lats, k1, k2):
return k2 * np.cos(k1*lons) * np.cos(k2*lats)
analytic_sol = analytic(mesh.lons, mesh.lats, 5.0, 2.0)
analytic_sol_ddlon = analytic_ddlon(mesh.lons, mesh.lats, 5.0, 2.0)
analytic_sol_ddlat = analytic_ddlat(mesh.lons, mesh.lats, 5.0, 2.0)
%matplotlib inline
import gdal
import cartopy
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10, 10), facecolor="none")
ax = plt.subplot(111, projection=ccrs.Orthographic(central_longitude=0.0, central_latitude=0.0, globe=None))
ax.coastlines(color="lightgrey")
ax.set_global()
lons0 = np.degrees(mesh.lons)
lats0 = np.degrees(mesh.lats)
ax.scatter(lons0, lats0,
marker="o", s=10.0, transform=ccrs.Geodetic(), c=analytic_sol, cmap=plt.cm.RdBu)
pass
Explanation: Analytic function
Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse
End of explanation
stripy_ddlon, stripy_ddlat = mesh.gradient_lonlat(analytic_sol)
import lavavu
lv = lavavu.Viewer(border=False, background="#FFFFFF", resolution=[1000,600], near=-10.0)
nodes = lv.points("nodes", pointsize=3.0, pointtype="shiny", colour="#448080", opacity=0.75)
nodes.vertices(mesh.points)
tris = lv.triangles("triangles", wireframe=False, colour="#77ff88", opacity=1.0)
tris.vertices(mesh.points)
tris.indices(mesh.simplices)
tris.values(analytic_sol, label="original")
tris.values(stripy_ddlon, label="ddlon")
tris.values(stripy_ddlat, label="ddlat")
tris.values(stripy_ddlon-analytic_sol_ddlon, label="ddlonerr")
tris.values(stripy_ddlat-analytic_sol_ddlat, label="ddlaterr")
tris.colourmap("#990000 #FFFFFF #000099")
cb = tris.colourbar()
# view the pole
lv.translation(0.0, 0.0, -3.0)
lv.rotation(-20, 0.0, 0.0)
lv.control.Panel()
lv.control.Range('specular', range=(0,1), step=0.1, value=0.4)
lv.control.Checkbox(property='axis')
lv.control.ObjectList()
tris.control.List(["original", "ddlon", "ddlat", "ddlonerr", "ddlaterr"], property="colourby", value="original", command="redraw", label="Display:")
lv.control.show()
Explanation: Derivatives of solution compared to analytic values
The gradient_lonlat method of the sTriangulation takes a data array representing values on the mesh vertices and returns the lon and lat derivatives. There is an equivalent gradient_xyz method which returns the raw derivatives in Cartesian form. Although this is generally less useful, if you are computing the slope (for example), which can be computed in either coordinate system and may cross the pole, consider using the Cartesian form.
End of explanation |
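# Sketch (assumption): the gradient_xyz method mentioned above is assumed to accept
# the same nodal data array as gradient_lonlat and to return the three Cartesian
# derivative components.
dfdx, dfdy, dfdz = mesh.gradient_xyz(analytic_sol)
print(dfdx.shape, dfdy.shape, dfdz.shape)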
2,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook explores the effect of using optimization routine and data extrapolation to zero for S(Q)
We are going to load a data pattern and background Pattern of $Mg_2SiO_4$. The data is not optimal since it was not corrected for self absorption or oblique x-ray incidence on the detector. A way to try to correct for this artifially is using an optimization method described in Eggert et al. (2002). This is very useful for the data analysis of total scattering experiments from a sample loaded in a diamond anvil cell were the background might change with compression and therefore almost never is perfect.
Step1: 1. Effect on S(Q)
We are going to compare three different S(Q) pattern
Step2: The two plots clearly show that the optimization on a not extrapolated S(Q) results in an artificial lower intensity of the first sharp diffraction peak. Pointing to that extrapolation is needed for a sensible data analysis.
2. Effect on F(r) and g(r)
In this section we going to compare F(r) and g(r) for 4 different data analysis methods | Python Code:
%matplotlib inline
import os
import sys
import matplotlib.pyplot as plt
sys.path.insert(1, os.path.join(os.getcwd(), '../../'))
from glassure.core.calc import calculate_fr, calculate_sq, optimize_sq, calculate_gr
from glassure.core.utility import extrapolate_to_zero_poly, convert_density_to_atoms_per_cubic_angstrom
from glassure.core import Pattern
import numpy as np
Explanation: This notebook explores the effect of using optimization routine and data extrapolation to zero for S(Q)
We are going to load a data pattern and a background pattern of $Mg_2SiO_4$. The data is not optimal since it was not corrected for self absorption or oblique x-ray incidence on the detector. A way to try to correct for this artificially is using an optimization method described in Eggert et al. (2002). This is very useful for the data analysis of total scattering experiments from a sample loaded in a diamond anvil cell, where the background might change with compression and therefore is almost never perfect.
End of explanation
data_pattern = Pattern.from_file('../tests/data/Mg2SiO4_ambient.xy')
bkg_pattern = Pattern.from_file('../tests/data/Mg2SiO4_ambient_bkg.xy')
sample_pattern = data_pattern - bkg_pattern
composition = {'Mg': 2, 'Si':1, 'O':4}
density = 2.9
atomic_density = convert_density_to_atoms_per_cubic_angstrom(composition, density)
sq = calculate_sq(sample_pattern.limit(0,20), density, composition)
sq_opt = optimize_sq(sq, 1.4, 10, atomic_density)
sq_extr= extrapolate_to_zero_poly(sq, 1.5, replace=True)
sq_extr_opt = optimize_sq(sq_extr, 1.4, 10, atomic_density)
plt.figure(figsize=(12, 15))
plt.subplot(2,1,1)
plt.plot(*sq.data, label='raw')
plt.plot(*sq_opt.data, label='opt')
plt.plot(*sq_extr_opt.data, label='extra_opt')
plt.xlabel('Q $(\AA^{-1})$')
plt.ylabel('S(Q)')
plt.legend()
plt.subplot(2,1,2)
plt.plot(*sq.data, label='raw')
plt.plot(*sq_opt.data, label='opt')
plt.plot(*sq_extr_opt.data, label='extra_opt')
plt.xlabel('Q $(\AA^{-1})$')
plt.ylabel('S(Q)')
plt.xlim(0, 7)
plt.legend(loc='best')
Explanation: 1. Effect on S(Q)
We are going to compare three different S(Q) patterns:
- "raw": pattern which is just the collected diffraction data subtracted by its background
- "opt": pattern optimized for an $r_{cutoff}$ of 1.5 and using 10 iterations
- "extr_opt": raw pattern which was extrapolated to zero using a polynomial function and then optimized by the same parameters as "opt"
End of explanation
fr = calculate_fr(sq, use_modification_fcn=True)
fr_extr = calculate_fr(sq_extr, use_modification_fcn=True)
fr_opt = calculate_fr(sq_opt, use_modification_fcn=True)
fr_extr_opt = calculate_fr(sq_extr_opt, use_modification_fcn=True)
gr = calculate_gr(fr, density, composition)
gr_extr = calculate_gr(fr_extr, density, composition)
gr_opt = calculate_gr(fr_opt, density, composition)
gr_extr_opt = calculate_gr(fr_extr_opt, density, composition)
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.plot(*fr.data, label='raw', color='k', ls='-')
plt.plot(*fr_extr.data, label='raw_extr', color='r', ls='-')
plt.plot(*fr_opt.data, label='opt', color='k', ls='--')
plt.plot(*fr_extr_opt.data, label='extr_opt', color='r', ls='--')
plt.xlim(0,5)
plt.legend(loc='best')
plt.xlabel('r $(\AA)$')
plt.ylabel('F(r)')
plt.subplot(1, 2, 2)
plt.plot(*gr.data, label='raw', color='k', ls='-')
plt.plot(*gr_extr.data, label='raw_extr', color='r', ls='-')
plt.plot(*gr_opt.data, label='opt', color='k', ls='--')
plt.plot(*gr_extr_opt.data, label='extr_opt', color='r', ls='--')
plt.ylim(-0.2, 2)
plt.xlim(0, 5)
plt.legend(loc='best')
plt.xlabel('r $(\AA)$')
plt.ylabel('g(r)')
Explanation: The two plots clearly show that the optimization on a not extrapolated S(Q) results in an artificial lower intensity of the first sharp diffraction peak. Pointing to that extrapolation is needed for a sensible data analysis.
2. Effect on F(r) and g(r)
In this section we going to compare F(r) and g(r) for 4 different data analysis methods:
"raw": using S(Q) from the original data without any modification
"raw_extr": using "raw" S(Q) which was extrapolated to zero Q using a polynomial function
"opt": using S(Q) optimized for an $r_{cutoff}$ of 1.5 and using 10 iterations
"extr_opt": using "opt" S(Q) which additionally was extrapolated to zero Q using a polynomial function and then optimized by the same parameters as "opt"
End of explanation |
2,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
image processing
Average Hash
์ด๋ฏธ์ง๋ฅผ ๋น๊ต ๊ฐ๋ฅํ ํด์ ๊ฐ์ผ๋ก ๋ํ๋ธ ๊ฒ
ํด์ ํจ์ MD5, SHA256 ๋ฑ์ ์ด์ฉํด ๋ฐ์ดํฐ ๊ฐ์ ๊ฐ๋จํ ํด์ ๊ฐ์ผ๋ก ๋ณํํ ์ ์์
์ด๋ฏธ์ง๊ฐ ๋น์ทํ์ง ๋ฑ์ ๊ฒ์ถํ ๋๋ ํด์ํจ์๋ฅผ ์ฌ์ฉํ๋ฉด ์๋จ. ํด์๋ ํฌ๊ธฐ ์กฐ์ , ์์กฐ ๋ณด์ , ์์ถ ํ์ ๋ณ๊ฒฝ ๋ฑ์ผ๋ก ํด์๊ฐ์ด ๋ฌ๋ผ์ง
Step2: Caltech 101 data
ํด๋ฐ ๊ฑฐ๋ฆฌ
Step3: CNN
์ผ์ ํ ํฌ๊ธฐ๋ก ๋ฆฌ์ฌ์ด์ฆํ ํ, 24๋นํธ RGB ํ์์ผ๋ก ๋ณํ -> Numpy ๋ฐฐ์ด๋ก ์ ์ฅ | Python Code:
from PIL import Image
import numpy as np
def average_hash(fname, size = 16):
img = Image.open(fname)
    img = img.convert('L') # 'L' is grayscale; passing '1' would binarize, and RGB, RGBA, CMYK and other modes are also supported
img = img.resize((size, size), Image.ANTIALIAS)
pixel_data = img.getdata()
pixels = np.array(pixel_data)
pixels = pixels.reshape((size, size))
avg = pixels.mean()
diff = 1 * (pixels > avg)
return diff
def np2hash(ahash):
bhash = []
for nl in ahash.tolist():
s1 = [str(i)for i in nl]
s2 = ''.join(s1)
i = int(s2, 2)
bhash.append('%04x' % i)
return ''.join(bhash)
ahash = average_hash('eiffel_tower.jpeg')
print(ahash)
print(np2hash(ahash))
Explanation: image processing
Average Hash
์ด๋ฏธ์ง๋ฅผ ๋น๊ต ๊ฐ๋ฅํ ํด์ ๊ฐ์ผ๋ก ๋ํ๋ธ ๊ฒ
ํด์ ํจ์ MD5, SHA256 ๋ฑ์ ์ด์ฉํด ๋ฐ์ดํฐ ๊ฐ์ ๊ฐ๋จํ ํด์ ๊ฐ์ผ๋ก ๋ณํํ ์ ์์
์ด๋ฏธ์ง๊ฐ ๋น์ทํ์ง ๋ฑ์ ๊ฒ์ถํ ๋๋ ํด์ํจ์๋ฅผ ์ฌ์ฉํ๋ฉด ์๋จ. ํด์๋ ํฌ๊ธฐ ์กฐ์ , ์์กฐ ๋ณด์ , ์์ถ ํ์ ๋ณ๊ฒฝ ๋ฑ์ผ๋ก ํด์๊ฐ์ด ๋ฌ๋ผ์ง
End of explanation
import os, re
search_dir = "./image/101_ObjectCategories/"
cache_dir = "./image/cache_avhash"
if not os.path.exists(cache_dir):
os.mkdir(cache_dir)
def average_hash(fname, size = 16):
fname2 = fname[len(search_dir):]
# image cache
cache_file = cache_dir + "/" + fname2.replace('/','_') + '.csv'
if not os.path.exists(cache_file):
img = Image.open(fname)
img = img.convert('L').resize((size, size), Image.ANTIALIAS)
pixels = np.array(img.getdata()).reshape((size,size))
avg = pixels.mean()
px = 1 * (pixels > avg)
np.savetxt(cache_file, px, fmt="%.0f", delimiter=",")
else:
px = np.loadtxt(cache_file, delimiter=",")
return px
def hamming_dist(a, b):
aa = a.reshape(1, -1)
ab = b.reshape(1, -1)
dist = (aa != ab).sum()
return dist
def enum_all_files(path):
for root, dirs, files in os.walk(path):
for f in files:
fname = os.path.join(root, f)
if re.search(r'\.(jpg|jpeg|pnp)$', fname):
yield fname
def find_image(fname, rate):
src = average_hash(fname)
for fname in enum_all_files(search_dir):
dst = average_hash(fname)
diff_r = hamming_dist(src, dst) / 256
if diff_r < rate:
yield (diff_r, fname)
srcfile = search_dir + "/chair/image_0016.jpg"
html = ""
sim = list(find_image(srcfile, 0.25))
sim = sorted(sim, key = lambda x: x[0])
for r, f in sim:
print(r, ">", f)
    s = '<div style="float:left;"><h3>[ difference : ' + str(r) + '-' + os.path.basename(f) + ']</h3>' + \
        '<p><a href="' + f + '"><img src="' + f + '" width=400>' + '</a></p></div>'
html += s
html = """<html><head><meta charset="utf8"></head>
<body><h3> original image </h3><p>
<img src = '{0}' width=400></p>{1}</body></html>""".format(srcfile, html)
with open("./avgash-search-output.html", "w", encoding="utf-8") as f:
f.write(html)
print("ok")
Explanation: Caltech 101 data
Hamming distance: the number of positions at which the corresponding characters of two equal-length strings differ
We count how many of the 256 hash characters differ and use that to judge how different two images are
End of explanation
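# A tiny illustration of the Hamming-distance idea on short bit arrays before
# applying it to the 16x16 image hashes (hypothetical values, just for intuition):
a = np.array([0, 1, 1, 0])
b = np.array([0, 1, 0, 1])
print((a != b).sum())  # 2 positions differ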
from PIL import Image
import os, glob
import numpy as np
from sklearn.model_selection import train_test_split
# ๋ถ๋ฅ ๋์ ์นดํ
๊ณ ๋ฆฌ ์ ํ
caltech_dir
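# Hedged sketch: the original cell stops short here, so this is only a minimal
# illustration of the step described below -- resize each image to a fixed size,
# convert it to 24-bit RGB and store it as a NumPy array. The target size is an assumption.
image_size = 64
def image_to_array(fname, size=image_size):
    img = Image.open(fname)
    img = img.convert("RGB")                      # 24-bit RGB
    img = img.resize((size, size), Image.ANTIALIAS)
    return np.asarray(img)                        # shape (size, size, 3)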
Explanation: CNN
์ผ์ ํ ํฌ๊ธฐ๋ก ๋ฆฌ์ฌ์ด์ฆํ ํ, 24๋นํธ RGB ํ์์ผ๋ก ๋ณํ -> Numpy ๋ฐฐ์ด๋ก ์ ์ฅ
End of explanation |
2,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook shows how to plot and analyze a phase diagram.
Written using
Step1: Generating the phase diagram
To generate a phase diagram, we obtain entries from the Materials Project and call the PhaseDiagram class in pymatgen.
Step2: Plotting the phase diagram
To plot a phase diagram, we send our phase diagram object into the PDPlotter class.
Step3: Calculating energy above hull and other phase equilibria properties | Python Code:
from pymatgen.ext.matproj import MPRester
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDPlotter
%matplotlib inline
Explanation: Introduction
This notebook shows how to plot and analyze a phase diagram.
Written using:
- pymatgen==2021.2.8
End of explanation
#This initializes the REST adaptor. You may need to put your own API key in as an arg.
a = MPRester()
#Entries are the basic unit for thermodynamic and other analyses in pymatgen.
#This gets all entries belonging to the Ca-C-O system.
entries = a.get_entries_in_chemsys(['Ca', 'C', 'O'])
#With entries, you can do many sophisticated analyses, like creating phase diagrams.
pd = PhaseDiagram(entries)
Explanation: Generating the phase diagram
To generate a phase diagram, we obtain entries from the Materials Project and call the PhaseDiagram class in pymatgen.
End of explanation
#Let's show all phases, including unstable ones
plotter = PDPlotter(pd, show_unstable=0.2, backend="matplotlib")
plotter.show()
Explanation: Plotting the phase diagram
To plot a phase diagram, we send our phase diagram object into the PDPlotter class.
End of explanation
import collections
data = collections.defaultdict(list)
for e in entries:
decomp, ehull = pd.get_decomp_and_e_above_hull(e)
data["Materials ID"].append(e.entry_id)
data["Composition"].append(e.composition.reduced_formula)
data["Ehull"].append(ehull)
data["Decomposition"].append(" + ".join(["%.2f %s" % (v, k.composition.formula) for k, v in decomp.items()]))
from pandas import DataFrame
df = DataFrame(data, columns=["Materials ID", "Composition", "Ehull", "Decomposition"])
print(df.head(30))
Explanation: Calculating energy above hull and other phase equilibria properties
End of explanation |
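# Possible follow-up (sketch): keep only the entries that lie on the convex hull,
# i.e. the thermodynamically stable phases (Ehull == 0 within numerical tolerance).
stable = df[df["Ehull"] < 1e-8]
print(stable[["Materials ID", "Composition"]])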
2,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decoding in time-frequency space data using the Common Spatial Pattern (CSP)
The time-frequency decomposition is estimated by iterating over raw data that
has been band-passed at different frequencies. This is used to compute a
covariance matrix over each epoch or a rolling time-window and extract the CSP
filtered signals. A linear discriminant classifier is then applied to these
signals.
Step1: Set parameters and read data
Step2: Loop through frequencies, apply classifier and save scores
Step3: Plot frequency results
Step4: Loop through frequencies and time, apply classifier and save scores
Step5: Plot time-frequency results | Python Code:
# Authors: Laura Gwilliams <[email protected]>
# Jean-Remi King <[email protected]>
# Alex Barachant <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne import Epochs, create_info, events_from_annotations
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
from mne.time_frequency import AverageTFR
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
Explanation: Decoding in time-frequency space data using the Common Spatial Pattern (CSP)
The time-frequency decomposition is estimated by iterating over raw data that
has been band-passed at different frequencies. This is used to compute a
covariance matrix over each epoch or a rolling time-window and extract the CSP
filtered signals. A linear discriminant classifier is then applied to these
signals.
End of explanation
event_id = dict(hands=2, feet=3) # motor imagery: hands vs feet
subject = 1
runs = [6, 10, 14]
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
# Extract information from the raw file
sfreq = raw.info['sfreq']
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
raw.pick_types(meg=False, eeg=True, stim=False, eog=False, exclude='bads')
# Assemble the classifier using scikit-learn pipeline
clf = make_pipeline(CSP(n_components=4, reg=None, log=True, norm_trace=False),
LinearDiscriminantAnalysis())
n_splits = 5 # how many folds to use for cross-validation
cv = StratifiedKFold(n_splits=n_splits, shuffle=True)
# Classification & Time-frequency parameters
tmin, tmax = -.200, 2.000
n_cycles = 10. # how many complete cycles: used to define window size
min_freq = 5.
max_freq = 25.
n_freqs = 8 # how many frequency bins to use
# Assemble list of frequency range tuples
freqs = np.linspace(min_freq, max_freq, n_freqs) # assemble frequencies
freq_ranges = list(zip(freqs[:-1], freqs[1:])) # make freqs list of tuples
# Infer window spacing from the max freq and number of cycles to avoid gaps
window_spacing = (n_cycles / np.max(freqs) / 2.)
centered_w_times = np.arange(tmin, tmax, window_spacing)[1:]
n_windows = len(centered_w_times)
# Instantiate label encoder
le = LabelEncoder()
Explanation: Set parameters and read data
End of explanation
# init scores
freq_scores = np.zeros((n_freqs - 1,))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
X = epochs.get_data()
# Save mean scores over folds for each frequency and time window
freq_scores[freq] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
Explanation: Loop through frequencies, apply classifier and save scores
End of explanation
plt.bar(freqs[:-1], freq_scores, width=np.diff(freqs)[0],
align='edge', edgecolor='black')
plt.xticks(freqs)
plt.ylim([0, 1])
plt.axhline(len(epochs['feet']) / len(epochs), color='k', linestyle='--',
label='chance level')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('Decoding Scores')
plt.title('Frequency Decoding Scores')
Explanation: Plot frequency results
End of explanation
# init scores
tf_scores = np.zeros((n_freqs - 1, n_windows))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
# Roll covariance, csp and lda over time
for t, w_time in enumerate(centered_w_times):
# Center the min and max of the window
w_tmin = w_time - w_size / 2.
w_tmax = w_time + w_size / 2.
# Crop data into time-window of interest
X = epochs.copy().crop(w_tmin, w_tmax).get_data()
# Save mean scores over folds for each frequency and time window
tf_scores[freq, t] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
Explanation: Loop through frequencies and time, apply classifier and save scores
End of explanation
# Set up time frequency object
av_tfr = AverageTFR(create_info(['freq'], sfreq), tf_scores[np.newaxis, :],
centered_w_times, freqs[1:], 1)
chance = np.mean(y) # set chance level to white in the plot
av_tfr.plot([0], vmin=chance, title="Time-Frequency Decoding Scores",
cmap=plt.cm.Reds)
Explanation: Plot time-frequency results
End of explanation |
2,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DKRZ CMIP6 submission form for ESGF data publication
General Information (to be completed based on official CMIP6 references)
Data to be submitted for ESGF data publication must follow the rules outlined in the CMIP6 Archive Design <br /> (https
Step1: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate
evaluation of cells is done by selecting the cell and then press the keys "Shift" + "Enter"
<br /> please evaluate the following cell to initialize your form
Step2: please provide information on the contact person for this CORDEX data submission request
Step3: Type of submission
please specify the type of this data submission
Step4: Requested general information
... to be finalized as soon as CMIP6 specification is finalized ....
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
Step5: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
Step6: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
Step7: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
Step8: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
Step9: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
Step10: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
Step11: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http
Step12: Terms of use
Please give the terms of use that shall be asigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http
Step13: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
Step14: Give the path where the data reside, for example
Step15: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
Step16: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that that this is not the case ?
Reply 'yes'; otherwise adapt the new file versions.
Step17: Variable list
list of variables submitted -- please remove the ones you do not provide
Step18: Check your submission before submission
Step19: Save your form
your form will be stored (the form name consists of your last name plut your keyword)
Step20: officially submit your form
the form will be submitted to the DKRZ team to process
you also receive a confirmation email with a reference to your online form for future modifications | Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-submission')
Explanation: DKRZ CMIP6 submission form for ESGF data publication
General Information (to be completed based on official CMIP6 references)
Data to be submitted for ESGF data publication must follow the rules outlined in the CMIP6 Archive Design <br /> (https://...)
Thus file names have to follow the pattern:<br />
VariableName_Domain_GCMModelName_CMIP6ExperimentName_CMIP5EnsembleMember_RCMModelName_RCMVersionID_Frequency[_StartTime-EndTime].nc <br />
Example: tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
The directory structure in which these files are stored follow the pattern:<br />
activity/product/Domain/Institution/
GCMModelName/CMIP5ExperimentName/CMIP5EnsembleMember/
RCMModelName/RCMVersionID/Frequency/VariableName <br />
Example: CORDEX/output/AFR-44/MPI-CSC/MPI-M-MPI-ESM-LR/rcp26/r1i1p1/MPI-CSC-REMO2009/v1/mon/tas/tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
Notice: If your model is not yet registered, please contact contact ....
This 'data submission form' is used to improve initial information exchange between data providers and the data center. The form has to be filled before the publication process can be started. In case you have questions pleas contact the individual data center:
o DKRZ: [email protected]
End of explanation
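# Illustration only (not part of the official form workflow): split the example file
# name from the pattern above into its components to see how the pieces map.
example = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
fields = ["VariableName", "Domain", "GCMModelName", "ExperimentName", "EnsembleMember",
          "RCMModelName", "RCMVersionID", "Frequency", "StartTime-EndTime"]
for field, value in zip(fields, example[:-3].split("_")):
    print(field.ljust(20), value)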
# initialize your CORDEX submission form template
from dkrz_forms import form_handler
from dkrz_forms import checks
Explanation: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate
evaluation of cells is done by selecting the cell and then press the keys "Shift" + "Enter"
<br /> please evaluate the following cell to initialize your form
End of explanation
my_email = "..." # example: sf.email = "[email protected]"
my_first_name = "..." # example: sf.first_name = "Harold"
my_last_name = "..." # example: sf.last_name = "Mitty"
my_keyword = "..." # example: sf.keyword = "mymodel_myrunid"
sf = form_handler.init_form("CORDEX",my_first_name,my_last_name,my_email,my_keyword)
Explanation: please provide information on the contact person for this CORDEX data submission request
End of explanation
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
Explanation: Type of submission
please specify the type of this data submission:
- "initial_version" for first submission of data
- "new _version" for a re-submission of previousliy submitted data
- "retract" for the request to retract previously submitted data
End of explanation
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
Explanation: Requested general information
... to be finalized as soon as CMIP6 specification is finalized ....
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
End of explanation
sf.institute_id = "..." # example: sf.institute_id = "AWI"
Explanation: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
End of explanation
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
Explanation: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed of the 'institute_id' followed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
End of explanation
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
Explanation: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
End of explanation
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
Explanation: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
End of explanation
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
Explanation: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
End of explanation
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
Explanation: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
End of explanation
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
Explanation: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http://cordex.dmi.dk.
'QC2' refers to the quality checker developed at DKRZ.
If your answer is 'other', give some information.
End of explanation
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
Explanation: Terms of use
Please give the terms of use that shall be assigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http://cordex.dmi.dk/joomla/images/CORDEX/cordex_terms_of_use.pdf
End of explanation
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
Explanation: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
End of explanation
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... "
Explanation: Give the path where the data reside, for example:
blizzard.dkrz.de:/scratch/b/b364034/. If not applicable write N/A and give data access information in the data_information string
End of explanation
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
Explanation: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (the target variable). To facilitate publication, all non-target variables are put into a list used by the publisher to exclude them from publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon, rlat, x, y, z, height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the list above if applicable (e.g. grid description variables), otherwise write 'N/A'.
End of explanation
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
Explanation: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that this is not the case?
Reply 'yes'; otherwise adapt the new file versions.
End of explanation
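A minimal sketch of how such a check could look, assuming the netCDF4 package and a hypothetical list of file paths (the attribute names are the global attributes mentioned above):
from collections import Counter
from netCDF4 import Dataset
files = ["/path/to/replacement_file.nc", "/path/to/published_file.nc"]  # hypothetical paths
pairs = []
for fn in files:
    with Dataset(fn) as ds:  # open the NetCDF file read-only
        pairs.append((ds.getncattr("tracking_id"), ds.getncattr("creation_date")))
# any (tracking_id, creation_date) pair occurring more than once indicates a problem
duplicates = [p for p, n in Counter(pairs).items() if n > 1]
print("duplicate (tracking_id, creation_date) pairs:", duplicates)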
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
Explanation: Variable list
list of variables submitted -- please remove the ones you do not provide:
End of explanation
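A sketch of one way to trim these lists automatically; 'data_dir' is a hypothetical path, and the assumption is that CORDEX file names begin with the variable name followed by an underscore, as in the example file name above:
from pathlib import Path
data_dir = Path("/path/to/your/cordex/archive")  # hypothetical location of the data files
# collect the variable names that actually occur in the archive
available = {f.name.split("_")[0] for f in data_dir.rglob("*.nc")}
sf.variable_list_day = [v for v in sf.variable_list_day if v in available]
sf.variable_list_mon = [v for v in sf.variable_list_mon if v in available]
sf.variable_list_sem = [v for v in sf.variable_list_sem if v in available]
sf.variable_list_fx = [v for v in sf.variable_list_fx if v in available]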
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub['status_flag_validity'] = res['valid_submission']
form_handler.DictTable(res)
Explanation: Check your submission form before submitting it
End of explanation
form_handler.form_save(sf)
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
Explanation: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
End of explanation
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
Explanation: Officially submit your form
The form will be submitted to the DKRZ team for processing.
You will also receive a confirmation email with a reference to your online form for future modifications.
End of explanation |
2,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
01 SEP 2017
Step1: Finally starting to understand this problem. So ResourceExhaustedError isn't system memory (or at least not only) but graphics memory. The card (obviously) cannot handle a batch size of 64. But batch size must be a multiple of chunk length, which here is 64, so I have to find a way to reduce the chunk length down to something my system can handle
Step2: That looks successful, now to redo the whole thing with the _c8 versions | Python Code:
%matplotlib inline
import importlib
import os, sys; sys.path.insert(1, os.path.join('../utils'))
import utils2; importlib.reload(utils2)
from utils2 import *
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave
from keras import metrics
from vgg16_avg import VGG16_Avg
from bcolz_array_iterator import BcolzArrayIterator
limit_mem()
path = '../data/'
dpath = path
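# VGG-style preprocessing: subtract the ImageNet channel means and flip RGB -> BGR; deproc reverses this and clips to valid pixel values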
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)
preproc = lambda x: (x - rn_mean)[:, :, :, ::-1]
deproc = lambda x,s: np.clip(x.reshape(s)[:, :, :, ::-1] + rn_mean, 0, 255)
arr_lr = bcolz.open(dpath+'trn_resized_72.bc')
arr_hr = bcolz.open(path+'trn_resized_288.bc')
parms = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]}
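# Building blocks of the upsampling (super-resolution) network: conv + batchnorm + ReLU blocks, residual blocks, and upsampling blocks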
def conv_block(x, filters, size, stride=(2,2), mode='same', act=True):
x = Convolution2D(filters, size, size, subsample=stride, border_mode=mode)(x)
x = BatchNormalization(mode=2)(x)
return Activation('relu')(x) if act else x
def res_block(ip, nf=64):
x = conv_block(ip, nf, 3, (1,1))
x = conv_block(x, nf, 3, (1,1), act=False)
return merge([x, ip], mode='sum')
def up_block(x, filters, size):
x = keras.layers.UpSampling2D()(x)
x = Convolution2D(filters, size, size, border_mode='same')(x)
x = BatchNormalization(mode=2)(x)
return Activation('relu')(x)
def get_model(arr):
inp=Input(arr.shape[1:])
x=conv_block(inp, 64, 9, (1,1))
for i in range(4): x=res_block(x)
x=up_block(x, 64, 3)
x=up_block(x, 64, 3)
x=Convolution2D(3, 9, 9, activation='tanh', border_mode='same')(x)
outp=Lambda(lambda x: (x+1)*127.5)(x)
return inp,outp
inp,outp=get_model(arr_lr)
shp = arr_hr.shape[1:]
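# Content (perceptual) loss network: a frozen VGG16 whose feature activations are compared between the model output and the high-res target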
vgg_inp=Input(shp)
vgg= VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp))
for l in vgg.layers: l.trainable=False
def get_outp(m, ln): return m.get_layer(f'block{ln}_conv2').output
vgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [1,2,3]])
vgg1 = vgg_content(vgg_inp)
vgg2 = vgg_content(outp)
def mean_sqr_b(diff):
dims = list(range(1,K.ndim(diff)))
return K.expand_dims(K.sqrt(K.mean(diff**2, dims)), 0)
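# content_fn: weighted sum of per-sample RMS differences between the VGG feature maps at the three chosen blocks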
w=[0.1, 0.8, 0.1]
def content_fn(x):
res = 0; n=len(w)
for i in range(n): res += mean_sqr_b(x[i]-x[i+n]) * w[i]
return res
m_sr = Model([inp, vgg_inp], Lambda(content_fn)(vgg1+vgg2))
m_sr.compile('adam', 'mae')
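# The model's output is already the content loss, so training with MAE against an all-zero target minimises it directly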
def train(bs, niter=10):
targ = np.zeros((bs, 1))
bc = BcolzArrayIterator(arr_hr, arr_lr, batch_size=bs)
for i in range(niter):
hr,lr = next(bc)
m_sr.train_on_batch([lr[:bs], hr[:bs]], targ)
its = len(arr_hr)//16; its
arr_lr.chunklen, arr_hr.chunklen
%time train(64, 18000)
Explanation: 01 SEP 2017
End of explanation
arr_lr_c8 = bcolz.carray(arr_lr, chunklen=8, rootdir=path+'trn_resized_72_c8.bc')
arr_lr_c8.flush()
arr_hr_c8 = bcolz.carray(arr_hr, chunklen=8, rootdir=path+'trn_resized_288_c8.bc')
arr_hr_c8.flush()
arr_lr_c8.chunklen, arr_hr_c8.chunklen
Explanation: Finally starting to understand this problem. So ResourceExhaustedError isn't system memory (or at least not only) but graphics memory. The card (obviously) cannot handle a batch size of 64. But batch size must be a multiple of chunk length, which here is 64, so I have to find a way to reduce the chunk length down to something my system can handle: no more than 8.
End of explanation
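A quick sanity check before retraining (a sketch only, using the _c8 arrays created above): the batch size passed to train() should be a multiple of the new chunklen, per the note above.
bs = 8
# batch size must be a multiple of the carray chunklen for the iterator to work (see explanation above)
assert bs % arr_lr_c8.chunklen == 0 and bs % arr_hr_c8.chunklen == 0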
arr_lr_c8 = bcolz.open(path+'trn_resized_72_c8.bc')
arr_hr_c8 = bcolz.open(path+'trn_resized_288_c8.bc')
inp,outp=get_model(arr_lr_c8)
shp = arr_hr_c8.shape[1:]
vgg_inp=Input(shp)
vgg= VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp))
for l in vgg.layers: l.trainable=False
vgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [1,2,3]])
vgg1 = vgg_content(vgg_inp)
vgg2 = vgg_content(outp)
m_sr = Model([inp, vgg_inp], Lambda(content_fn)(vgg1+vgg2))
m_sr.compile('adam', 'mae')
def train(bs, niter=10):
targ = np.zeros((bs, 1))
bc = BcolzArrayIterator(arr_hr_c8, arr_lr_c8, batch_size=bs)
for i in range(niter):
hr,lr = next(bc)
m_sr.train_on_batch([lr[:bs], hr[:bs]], targ)
%time train(8, 18000) # not sure what exactly the '18000' is for
arr_lr.shape, arr_hr.shape, arr_lr_c8.shape, arr_hr_c8.shape
# 19439//8 = 2429 full batches, so 2430 iterations cover the whole set
%time train(8, 2430)
Explanation: That looks successful, now to redo the whole thing with the _c8 versions:
End of explanation |