Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
6,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to data analytics with pandas
Quentin Caudron
<br />
<img src="images/pydata.png" width="200px" />
<br />@QuentinCaudron
Notebooks
Step1: Systems check
Do you have a working Python installation, with the pandas package ?
Step2: Note
Step3: Note
Step4: Note
Step5: We have an index, and three columns
Step6: Definitely a string. We'll note this as something to fix after we finish looking around.
Note
Step7: Note
Step8: What else can we find out ?
Step9: Looks like we also have some missing data - we have 671 rows, but the coffees column only has 658 entries.
Note
Step10: Note
Step11: The contributor column makes sense as object, because we expect strings there; but surely the timestamp should be a timestamp-type, and coffees should be numerical ?
Let's inspect what's in the timestamp column.
Step12: It looks like the timestamp field was read from CSV as a string. That makes sense - CSV files are very basic. We'll have pandas interpret these strings as datetimes for us automatically.
Note
Step13: #### The coffees column contains NaNs.
Step14: The coffees column is of type float.
Step15: Let's have pandas parse the timestamp strings to datetime objects.
Step16: So where do we stand ?
Step17: Note
Step18: pandas is plotting the coffees against the index, which is just a series of integers.
Note
Step19: We have some very uneven spacing in places. We might start by cutting off the last few points of this time-series, which is missing a lot of data.
We'll inspect the last few points of this time-series.
Step20: After mid-March, things start getting spaced rather erratically.
Let's cut off the tail of the time-series, anything after 2013-03-01.
Step21: Note
Step22: 1. Contributions to the time-series
Who are our main contributors ?
Step23: Note
Step24: On which weekdays were contributions made ?
Step25: Can we replace these integers with actual weekdays ?
Step26: Let's group by these weekdays.
Step27: Note
Step28: 2. Weekday trends
First, we'll set our timestamps to the dataframe's index
Step29: Let's add some rows at midnight on every day.
Step30: Note
Step31: Note
Step32: Note
Step33: We're now ready to resample the time-series at a daily frequency.
Step34: Let's begin by figuring out how many coffees are made on any given day.
Step35: Note
Step36: Let's order this series and then plot it.
Step37: Wednesdays was seminar day...
3. Coffee per person
We can now pull in data on how many people were in the department.
Step38: Let's join the datasets.
Step39: Note
Step40: We can now plot this column.
Step41: Those are strange plateaus. We'll pull in another dataset, telling us when the machine was broken.
Step42: Note
Step43: A quick trick to plot this as a time-series...
Step44: Note
Step45: We'll bring in this numerical representation of status column into our dataframe too.
Step46: Let's plot both the coffees per person and the numerical status.
Step47: We see a strong weekday-weekend effect. Resampling weekly will fix that. | Python Code:
%%HTML
<style>
.rendered_html {
font-size: 0.7em;
}
.CodeMirror-scroll {
font-size: 1.2em;
}
.rendered_html table, .rendered_html th, .rendered_html tr, .rendered_html td, .rendered_html h2, .rendered_html h4 {
font-size: 100%;
}
</style>
Explanation: Introduction to data analytics with pandas
Quentin Caudron
<br />
<img src="images/pydata.png" width="200px" />
<br />@QuentinCaudron
Notebooks : https://github.com/QCaudron/pydata_pandas
You can ignore this next cell, it's only for presentations !
End of explanation
import pandas as pd
Explanation: Systems check
Do you have a working Python installation, with the pandas package ?
End of explanation
import pandas as pd
%matplotlib inline
Explanation: Note : This cell should run without raising a traceback. Assuming it runs, you can also try printing the value of pd.__version__ to see what version of pandas you have installed.
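For instance, a quick version check could look like this ( illustrative only ):
import pandas as pd
print(pd.__version__)   # prints the installed pandas version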
A little about me
Lapsed computational physicist
PhD computational neuroscience, postdoc statistical epidemiology
Data Scientist at CBRE - www.cbredev.com
ATOM in Seattle
A little about the hero of this story
<center><img src="images/coffee_machine.jpg" width="400px" /></center>
We'll be analysing a real-world dataset together. It's about my favourite thing in the world : coffee. This dataset was collected at the Mathematics Institute at the University of Warwick. It's a time-series dataset, describing the total number of coffees made by our espresso machine by a certain date.
A little about this workshop
We'll be running through an analysis of this dataset as a way to expose you to the pandas API. The aim is to develop a little familiarity with how to work with pandas.
Slides are available at https://github.com/QCaudron/pydata_pandas. One notebook contains solutions; beware of spoilers.
The notebooks contain notes about what we're doing that I'll skip during this workshop, but try to explain on the way.
The pandas API is enormous. The documentation is excellent, don't hesitate to look things up.
Key questions
Who are the main contributors to this dataset, and when are contributions generally made ?
What are the department's weekday coffee habits ?
How much coffee are people drinking ?
Let's begin
End of explanation
# Read data from data/coffees.csv
data =
Explanation: Note : The second line here tells matplotlib to plot directly under the cell where any plotting code is called. pandas uses matplotlib to generate graphs, and without this, the graphs would appear outside the Jupyter notebook when you called plt.show() - but we just want them to appear without having to do this.
http://ipython.readthedocs.io/en/stable/interactive/plotting.html#id1
Importing the data
Let's import the coffee data from CSV.
End of explanation
# .head()
Explanation: Note : pandas can read from many data formats : CSV, JSON, Excel, HDF5, SQL, and more.
http://pandas.pydata.org/pandas-docs/version/0.20/io.html
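As an illustrative sketch ( not the workshop's solution cell ), a CSV read might look like:
import pandas as pd
data = pd.read_csv("data/coffees.csv")   # path taken from the comment in the exercise cell above
data.head()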
What does this data look like ?
Let's just look at the first few rows.
End of explanation
# .loc or .iloc
Explanation: We have an index, and three columns : timestamp, coffees, and contributor.
Uh-oh. Why is there a string of text, testing, in our coffee numbers ? What's going on in the coffees column in the row after that ?
Note : df.head(n=10) would show the first ten rows. The default is n=5.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html
Let's look at that string in the third row.
End of explanation
# [] indexing on a series
Explanation: Definitely a string. We'll note this as something to fix after we finish looking around.
Note : .loc uses a label-based lookup, which means that the value you pass into the square brackets must be in the index. Another method, .iloc, is integer-location-based, so .iloc[2] would return the third row. In this case, they're the same, but had we changed our index, as we'll see later, things would work differently.
Indexing a dataframe with [] directly returns a pd.Series or pd.DataFrame by searching over columns, not rows. Indexing a pd.Series with [] is like indexing a dataframe with .iloc.
https://pandas.pydata.org/pandas-docs/stable/indexing.html
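A rough sketch of the difference, assuming the default integer index used here:
data.loc[2]        # label-based : the row whose index label is 2
data.iloc[2]       # position-based : the third row
data["coffees"]    # [] on the dataframe selects a column and returns a pd.Series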
We should also take a look at that NaN. In fact, let's look at the first five values in coffees.
End of explanation
print("Dataset length :")
# len()
print()
Explanation: Note : here, we're indexing a series ( a pd.Series object ). From a pd.DataFrame ( here, data ), when you access a single column ( data.coffees or data["coffees"] ), the object returned is a pd.Series. From that, indexing directly with [] works in an integer-location-based manner, and like with numpy arrays, you can take slices ( [:5] ).
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html
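For example, such a slice might look like:
data.coffees[:5]   # first five values of the coffees column, sliced like a numpy array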
How long is the dataset ?
End of explanation
# .describe()
Explanation: What else can we find out ?
End of explanation
# .isnull() and boolean indexing with []
Explanation: Looks like we also have some missing data - we have 671 rows, but the coffees column only has 658 entries.
Note : .describe() returns different things based on what's in the dataframe, as we'll see later. For numerical columns, it will return things like the mean, standard deviation, and percentiles. For object columns ( strings or datetimes ), it will return the most frequent entry and the first and last items. For all columns, .describe() will return the count of objects in that column ( not counting NaNs ) and the unique number of entries. You can determine what's returned using .describe()'s keyword arguments.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html
Let's look at the dataframe where coffees is null.
End of explanation
# .dtypes
Explanation: Note : .isnull() returns a boolean array ( an array of Trues and Falses ), that you can then use to index the dataframe directly. Here, our boolean array tells us which entries in the coffees column are null, and we use that to index against the full dataframe - so we get back every column in the dataframe, but only those rows where coffees is null.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html
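One possible shape of that boolean indexing, as a sketch:
data[ data.coffees.isnull() ]   # every column, but only the rows where coffees is NaN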
What type of Python objects are the columns ?
End of explanation
# print the first element of the series with [] indexing
print()
# print its type()
print()
Explanation: The contributor column makes sense as object, because we expect strings there; but surely the timestamp should be a timestamp-type, and coffees should be numerical ?
Let's inspect what's in the timestamp column.
End of explanation
# cast the coffees column using pd.to_numeric, and coerce errors
data.coffees =
data.head()
Explanation: It looks like the timestamp field was read from CSV as a string. That makes sense - CSV files are very basic. We'll have pandas interpret these strings as datetimes for us automatically.
Note : here's an example of using direct [] indexing on a pd.Series. We're accessing the first entry, just to see what type of object we have there.
On our first pass, what problems did we find ?
The timestamp column contains strings; these need to be datetimes
The coffees column contains some null values and at least one string
Cleaning the data
The coffees column should only contain numerical data.
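One possible approach ( a sketch, not necessarily the solution notebook's exact code ) is to coerce the column, so anything non-numeric becomes NaN:
data.coffees = pd.to_numeric(data.coffees, errors="coerce")   # the "testing" string becomes NaN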
End of explanation
# Use .dropna() using a subset, and pass inplace
data.head()
Explanation: #### The coffees column contains NaNs.
End of explanation
# Cast to int using .astype()
data.coffees =
data.head()
Explanation: The coffees column is of type float.
End of explanation
# pd.to_datetime()
data.timestamp =
# Confirm dtypes
data.dtypes
Explanation: Let's have pandas parse the timestamp strings to datetime objects.
End of explanation
# .describe(), passing the include kwarg to see all information
# What do the first few rows look like ?
Explanation: So where do we stand ?
End of explanation
# .plot() on the coffees series
Explanation: Note : .describe(include="all") is describing all attributes of all columns, but some don't make sense based on the column's dtype. For example, the contributor column has no first and last attributes, because those describe the first and last entries in an ordered series. That makes sense for the timestamp - those have an intuitive definition of sorting - but not so much for strings ( alphabetical order doesn't really matter when they're arbitrary strings ). Similarly, the timestamp column has no mean or other numerical traits. What does it mean to calculate the mean timestamp ?
The time-series at a glance
Let's begin by visualising the coffee counts.
End of explanation
# .plot() on the dataframe,
# pass x kwarg to plot against the timestamp
# use a dot-dash style
Explanation: pandas is plotting the coffees against the index, which is just a series of integers.
Note : .plot() on a pd.Series will plot the data against the index. On a pd.DataFrame, the .plot() method allows plotting of one column against another.
By default, .plot() renders a line graph, but you can specify which type of plot you'd like - bar, line, histogram, area, scatter, etc..
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
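A sketch of what such a call might look like ( column names taken from this dataset ):
data.plot(x="timestamp", y="coffees", style=".-")   # line-plus-marker plot of coffees against time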
Let's use the dataframe's plot() method rather than that of the series.
End of explanation
# .tail() with ten rows
Explanation: We have some very uneven spacing in places. We might start by cutting off the last few points of this time-series, which is missing a lot of data.
We'll inspect the last few points of this time-series.
End of explanation
# Use conditional indexing against the timestamp
data =
data.tail()
Explanation: After mid-March, things start getting spaced rather erratically.
Let's cut off the tail of the time-series, anything after 2013-03-01.
End of explanation
# Once again, plot the data against the timestamp
Explanation: Note : this is another example of boolean indexing. data.timestamp < "2013-03-01" is a boolean array, and can be passed into the dataframe immediately in [], much like with a np.ndarray.
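As a sketch, the cutoff might be applied like this:
data = data[ data.timestamp < "2013-03-01" ]   # keep only rows before the cutoff date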
One final look.
End of explanation
# .value_counts()
Explanation: 1. Contributions to the time-series
Who are our main contributors ?
End of explanation
# .plot() a bar chart of the value counts
Explanation: Note : .value_counts() counts the unique values in a series. It's similar to doing a .groupby() followed by a .count(), as we'll see soon.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html
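For instance, a sketch of counting contributions per contributor:
data.contributor.value_counts()   # unique contributors and their counts, most frequent first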
Let's plot this.
End of explanation
# Create a series of the weekdays
# for each entry using .dt.weekday
weekdays =
# .assign() it to our dataframe
data =
data.head()
Explanation: On which weekdays were contributions made ?
End of explanation
weekday_names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
weekday_dict = {key: weekday_names[key] for key in range(7)}
def day_of_week(idx):
return weekday_dict[idx]
# Use .apply() to apply a custom function to the weekdays column
data.weekdays =
data.head()
Explanation: Can we replace these integers with actual weekdays ?
End of explanation
# .groupby() the weekdays and then .count() rows in each group
weekday_counts =
# We can reorder this dataframe by our weekday_names
# list using .loc, indexing with the names
weekday_counts =
weekday_counts
Explanation: Let's group by these weekdays.
End of explanation
# Plot a bar chart of the coffees data in weekday_counts
# Title : "Datapoints added on each weekday"
Explanation: Note : this first line could be replaced by weekday_counts = data.weekdays.value_counts(), with the only difference being that that would return a series to us, and here, we got back a dataframe.
We can now visualise these weekday counts.
End of explanation
# Set the dataframe's .index property
data.index =
# Let's drop the timestamp column, as we no longer need it
data.head()
Explanation: 2. Weekday trends
First, we'll set our timestamps to the dataframe's index
End of explanation
# pd.date_range, with daily frequency, and normalisation
midnights =
midnights
Explanation: Let's add some rows at midnight on every day.
End of explanation
# Take the union of the existing and new indices
new_index =
new_index
Explanation: Note : pd.date_range creates a fixed-frequency DatetimeIndex. normalize=True ensures these datetimes are at midnight, and not at whatever time the starting point is.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html
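A sketch of such a range, assuming we span the dataframe's own first and last timestamps:
midnights = pd.date_range(data.index[0], data.index[-1], freq="D", normalize=True)   # one midnight per day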
Let's take the union of this index and our dataset's index.
End of explanation
# .reindex() the dataframe to get an upsampled dataframe
upsampled_data =
upsampled_data.head(10)
Explanation: Note : the union of these indices is just a new index where entries from both indices are present. It's sorted by time.
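As a sketch, the union itself is a one-liner:
new_index = midnights.union(data.index)   # every midnight plus every original timestamp, sorted by time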
Now we can reindex our dataframe with this new index.
End of explanation
# .interpolate the upsampled_data using the time method
upsampled_data =
upsampled_data.head(10)
Explanation: Note : .reindex() keeps any values that conform to the new index, and inserts NaNs where we have no values.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
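A sketch of the reindex-then-interpolate pattern described here:
upsampled_data = data.reindex(new_index)                       # NaNs appear at the new midnight rows
upsampled_data = upsampled_data.interpolate(method="time")     # fill them by time-weighted interpolation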
We can fill in these NaNs using interpolation.
End of explanation
# .resample() the upsampled dataframe,
# using .asfreq() to get only exactly daily values
daily_data =
# Drop the contributor column, we no longer need it
daily_data =
# Generate a column of weekday_names
daily_data["weekdays"] =
daily_data.head()
# Let's plot the data once more, to see how we're doing
Explanation: We're now ready to resample the time-series at a daily frequency.
End of explanation
# Use .diff() on the coffees column; follow up with .shift()
coffees_made =
# Add this as a column to the dataframe
daily_data["coffees_made_today"] =
daily_data.head(n=10)
Explanation: Let's begin by figuring out how many coffees are made on any given day.
End of explanation
# .groupby weekdays, take the mean, and
# grab the coffees_made_today column
coffees_by_day =
coffees_by_day
Explanation: Note : we use .shift() here because if we look at the .diff() between a Monday and a Tuesday, those coffees are attributed to the Tuesday. However, what we want to say is "this many coffees were made at some point on the Monday", so we shift the entire series up one.
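A sketch of that diff-and-shift, following the comments in the exercise cell:
coffees_made = daily_data.coffees.diff().shift(-1)   # day-to-day change, attributed to the day the coffees were made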
Now we can group this by weekday.
End of explanation
# Sort coffees_by_day by our list of weekday names
coffees_by_day =
# Plot a bar chart
Explanation: Let's order this series and then plot it.
End of explanation
# Bring in data/department_members.csv;
# have the first column be the index, and parse the dates
people =
people.head()
Explanation: Wednesdays was seminar day...
3. Coffee per person
We can now pull in data on how many people were in the department.
End of explanation
# Use an outer join, then interpolate over
# missing values using nearest values
daily_data =
daily_data.head(n=15)
Explanation: Let's join the datasets.
End of explanation
# New column is the ratio of coffees made on a
# given day to number of members in the department
daily_data["coffees_per_person"] =
# Let's drop those remaining NaNs while we're at it
daily_data.head(n=10)
Explanation: Note : by default, inner joins are performed. That is, if a row from one of the datasets has an index that isn't in the other dataset, that row is dropped. You can specify whether you want outer, left, or right joins, as well as plenty of other useful options. The pandas API for joining or merging datasets is very developed.
https://pandas.pydata.org/pandas-docs/stable/merging.html
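One possible reading of the comments in the join cell above, shown only as a sketch:
daily_data = daily_data.join(people, how="outer").interpolate(method="nearest")   # outer join, then fill gaps from nearest values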
Let's create a column for the number of coffees consumed per person.
End of explanation
# Plot the coffees_per_person column
Explanation: We can now plot this column.
End of explanation
# read data/coffee_status.csv
# parse_dates as kwarg; also pass index_col
machine_status =
machine_status.head()
Explanation: Those are strange plateaus. We'll pull in another dataset, telling us when the machine was broken.
End of explanation
# .value_counts()
Explanation: Note : the parse_dates keyword argument takes several values. By passing in a list of strings, we're telling pandas to attempt to parse the dates in columns with those names.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
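A sketch of such a read; the column name "date" here is an assumption about the CSV, not something stated in this notebook:
machine_status = pd.read_csv("data/coffee_status.csv", parse_dates=["date"], index_col="date")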
What values are in the status column ?
End of explanation
# Make a pd.Series from the status series where things are OK
numerical_status =
numerical_status.plot()
Explanation: A quick trick to plot this as a time-series...
End of explanation
# .join() daily_data with machine_status
daily_data =
daily_data.head()
Explanation: Note : the first line here creates a boolean pd.Series, holding the value True when machine_status.status is "OK", and False otherwise. Because it's a pd.Series, its index stays the same as that of machine_status, which was a DatetimeIndex. Then, we can plot the boolean series ( True appearing as 1, and False appearing as 0 ), and just quickly scan to see that there are long areas where the coffee machine was operational, with short bouts ( thankfully ! ) of the machine being broken.
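A sketch of that first line and the quick plot:
numerical_status = ( machine_status.status == "OK" )   # boolean series indexed by date
numerical_status.plot()                                 # True plots as 1, False as 0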
Let's join the datasets on the date field !
End of explanation
# Column depicting when the status was "OK"
# Cast the series to ints before as you create a new column in the dataframe
daily_data["numerical_status"] =
daily_data.head()
Explanation: We'll bring in this numerical representation of status column into our dataframe too.
End of explanation
# Plot both columns on the same graph, using default args
Explanation: Let's plot both the coffees per person and the numerical status.
End of explanation
# Resample weekly, taking the mean
# of each week to get a weekly value
weekly_data =
# Plot the coffees per person and the machine's status
Explanation: We see a strong weekday-weekend effect. Resampling weekly will fix that.
End of explanation |
6,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: We'll start off by trying to find out if the string "phone" is inside the text string. Now we could quickly do this with
Step2: But let's show the format for regular expressions, because later on we will be searching for patterns that won't have such a simple solution.
Step3: Now we've seen that re.search() will take the pattern, scan the text, and then returns a Match object. If no pattern is found, a None is returned (in Jupyter Notebook this just means that nothing is output below the cell).
Let's take a closer look at this Match object.
Step4: Notice the span, there is also a start and end index information.
Step5: But what if the pattern occurs more than once?
Step6: Notice it only matches the first instance. If we wanted a list of all matches, we can use .findall() method
Step7: To get actual match objects, use the iterator
Step8: If you wanted the actual text that matched, you can use the .group() method.
Step9: Patterns
So far we've learned how to search for a basic string. What about more complex examples? Such as trying to find a telephone number in a large string of text? Or an email address?
We could just use the search method if we know the exact phone or email, but what if we don't know it? We may know the general format, and we can use that along with regular expressions to search the document for strings that match a particular pattern.
This is where the syntax may appear strange at first, but take your time with this; often it's just a matter of looking up the pattern code.
Let's begin!
Identifiers for Characters in Patterns
Characters such as a digit or a single string have different codes that represent them. You can use these to build up a pattern string. Notice how these make heavy use of the backwards slash \ . Because of this, when defining a pattern string for a regular expression, we use the format
Step10: Notice the repetition of \d. That is a bit of an annoyance, especially if we are looking for very long strings of numbers. Let's explore the possible quantifiers.
Quantifiers
Now that we know the special character designations, we can use them along with quantifiers to define how many we expect.
<table ><tr><th>Character</th><th>Description</th><th>Example Pattern Code</th><th >Example Match</th></tr>
<tr ><td><span >+</span></td><td>Occurs one or more times</td><td> Version \w-\w+</td><td>Version A-b1_1</td></tr>
<tr ><td><span >{3}</span></td><td>Occurs exactly 3 times</td><td>\D{3}</td><td>abc</td></tr>
<tr ><td><span >{2,4}</span></td><td>Occurs 2 to 4 times</td><td>\d{2,4}</td><td>123</td></tr>
<tr ><td><span >{3,}</span></td><td>Occurs 3 or more</td><td>\w{3,}</td><td>anycharacters</td></tr>
<tr ><td><span >\*</span></td><td>Occurs zero or more times</td><td>A\*B\*C*</td><td>AAACC</td></tr>
<tr ><td><span >?</span></td><td>Once or none</td><td>plurals?</td><td>plural</td></tr></table>
Let's rewrite our pattern using these quantifiers
Step11: Groups
What if we wanted to do two tasks, find phone numbers, but also be able to quickly extract their area code (the first three digits). We can use groups for any general task that involves grouping together regular expressions (so that we can later break them down).
Using the phone number example, we can separate groups of regular expressions using parentheses
Step12: Additional Regex Syntax
Or operator |
Use the pipe operator to have an or statement. For example
Step13: The Wildcard Character
Use a "wildcard" as a placement that will match any character placed there. You can use a simple period . for this. For example
Step14: Notice how we only matched the first 3 letters, that is because we need a . for each wildcard letter. Or use the quantifiers described above to set its own rules.
Step15: However this still leads the problem to grabbing more beforehand. Really we only want words that end with "at".
Step16: Starts With and Ends With
We can use the ^ to signal starts with, and the $ to signal ends with
Step17: Note that this is for the entire string, not individual words!
Exclusion
To exclude characters, we can use the ^ symbol in conjunction with a set of brackets []. Anything inside the brackets is excluded. For example
Step18: To get the words back together, use a + sign
Step19: We can use this to remove punctuation from a sentence.
Step20: Brackets for Grouping
As we showed above we can use brackets to group together options, for example if we wanted to find hyphenated words
Step21: Parentheses for Multiple Options
If we have multiple options for matching, we can use parentheses to list out these options. For Example | Python Code:
text = "The agent's phone number is 408-555-1234. Call soon!"
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Regular Expressions
Regular Expressions (sometimes called regex for short) allow a user to search for strings using almost any sort of rule they can come up with. For example, finding all capital letters in a string, or finding a phone number in a document.
Regular expressions are notorious for their seemingly strange syntax. This strange syntax is a byproduct of their flexibility. Regular expressions have to be able to filter out any string pattern you can imagine, which is why they have a complex string pattern format.
Regular expressions are handled using Python's built-in re library. See the docs for more information.
Let's begin by explaining how to search for basic patterns in a string!
Searching for Basic Patterns
Let's imagine that we have the following string:
End of explanation
'phone' in text
Explanation: We'll start off by trying to find out if the string "phone" is inside the text string. Now we could quickly do this with:
End of explanation
import re
pattern = 'phone'
re.search(pattern,text)
pattern = "NOT IN TEXT"
re.search(pattern,text)
Explanation: But let's show the format for regular expressions, because later on we will be searching for patterns that won't have such a simple solution.
End of explanation
pattern = 'phone'
match = re.search(pattern,text)
match
Explanation: Now we've seen that re.search() will take the pattern, scan the text, and then returns a Match object. If no pattern is found, a None is returned (in Jupyter Notebook this just means that nothing is output below the cell).
Let's take a closer look at this Match object.
End of explanation
match.span()
match.start()
match.end()
Explanation: Notice the span, there is also a start and end index information.
End of explanation
text = "my phone is a new phone"
match = re.search("phone",text)
match.span()
Explanation: But what if the pattern occurs more than once?
End of explanation
matches = re.findall("phone",text)
matches
len(matches)
Explanation: Notice it only matches the first instance. If we wanted a list of all matches, we can use .findall() method:
End of explanation
for match in re.finditer("phone",text):
print(match.span())
Explanation: To get actual match objects, use the iterator:
End of explanation
match.group()
Explanation: If you wanted the actual text that matched, you can use the .group() method.
End of explanation
text = "My telephone number is 408-555-1234"
phone = re.search(r'\d\d\d-\d\d\d-\d\d\d\d',text)
phone.group()
Explanation: Patterns
So far we've learned how to search for a basic string. What about more complex examples? Such as trying to find a telephone number in a large string of text? Or an email address?
We could just use the search method if we know the exact phone or email, but what if we don't know it? We may know the general format, and we can use that along with regular expressions to search the document for strings that match a particular pattern.
This is where the syntax may appear strange at first, but take your time with this; often it's just a matter of looking up the pattern code.
Let's begin!
Identifiers for Characters in Patterns
Characters such as a digit or a single string have different codes that represent them. You can use these to build up a pattern string. Notice how these make heavy use of the backwards slash \ . Because of this, when defining a pattern string for a regular expression, we use the format:
r'mypattern'
placing the r in front of the string allows python to understand that the \ in the pattern string are not meant to be escape slashes.
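For instance, the phone-number pattern used later in this notebook is written as:
pattern = r'\d\d\d-\d\d\d-\d\d\d\d'   # the r prefix keeps the backslashes literal for the regex engine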
Below you can find a table of all the possible identifiers:
<table ><tr><th>Character</th><th>Description</th><th>Example Pattern Code</th><th >Example Match</th></tr>
<tr ><td><span >\d</span></td><td>A digit</td><td>file_\d\d</td><td>file_25</td></tr>
<tr ><td><span >\w</span></td><td>Alphanumeric</td><td>\w-\w\w\w</td><td>A-b_1</td></tr>
<tr ><td><span >\s</span></td><td>White space</td><td>a\sb\sc</td><td>a b c</td></tr>
<tr ><td><span >\D</span></td><td>A non digit</td><td>\D\D\D</td><td>ABC</td></tr>
<tr ><td><span >\W</span></td><td>Non-alphanumeric</td><td>\W\W\W\W\W</td><td>*-+=)</td></tr>
<tr ><td><span >\S</span></td><td>Non-whitespace</td><td>\S\S\S\S</td><td>Yoyo</td></tr></table>
For example:
End of explanation
re.search(r'\d{3}-\d{3}-\d{4}',text)
Explanation: Notice the repetition of \d. That is a bit of an annoyance, especially if we are looking for very long strings of numbers. Let's explore the possible quantifiers.
Quantifiers
Now that we know the special character designations, we can use them along with quantifiers to define how many we expect.
<table ><tr><th>Character</th><th>Description</th><th>Example Pattern Code</th><th >Example Match</th></tr>
<tr ><td><span >+</span></td><td>Occurs one or more times</td><td> Version \w-\w+</td><td>Version A-b1_1</td></tr>
<tr ><td><span >{3}</span></td><td>Occurs exactly 3 times</td><td>\D{3}</td><td>abc</td></tr>
<tr ><td><span >{2,4}</span></td><td>Occurs 2 to 4 times</td><td>\d{2,4}</td><td>123</td></tr>
<tr ><td><span >{3,}</span></td><td>Occurs 3 or more</td><td>\w{3,}</td><td>anycharacters</td></tr>
<tr ><td><span >\*</span></td><td>Occurs zero or more times</td><td>A\*B\*C*</td><td>AAACC</td></tr>
<tr ><td><span >?</span></td><td>Once or none</td><td>plurals?</td><td>plural</td></tr></table>
Let's rewrite our pattern using these quantifiers:
End of explanation
phone_pattern = re.compile(r'(\d{3})-(\d{3})-(\d{4})')
results = re.search(phone_pattern,text)
# The entire result
results.group()
# Can then also call by group position.
# remember groups were separated by parentheses ()
# Something to note is that group ordering starts at 1. Passing in 0 returns everything
results.group(1)
results.group(2)
results.group(3)
# We only had three groups of parentheses
results.group(4)
Explanation: Groups
What if we wanted to do two tasks, find phone numbers, but also be able to quickly extract their area code (the first three digits). We can use groups for any general task that involves grouping together regular expressions (so that we can later break them down).
Using the phone number example, we can separate groups of regular expressions using parentheses:
End of explanation
re.search(r"man|woman","This man was here.")
re.search(r"man|woman","This woman was here.")
Explanation: Additional Regex Syntax
Or operator |
Use the pipe operator to have an or statement. For example
End of explanation
re.findall(r".at","The cat in the hat sat here.")
re.findall(r".at","The bat went splat")
Explanation: The Wildcard Character
Use a "wildcard" as a placement that will match any character placed there. You can use a simple period . for this. For example:
End of explanation
re.findall(r"...at","The bat went splat")
Explanation: Notice how we only matched the first 3 letters, that is because we need a . for each wildcard letter. Or use the quantifiers described above to set its own rules.
End of explanation
# One or more non-whitespace that ends with 'at'
re.findall(r'\S+at',"The bat went splat")
Explanation: However this still leads the problem to grabbing more beforehand. Really we only want words that end with "at".
End of explanation
# Ends with a number
re.findall(r'\d$','This ends with a number 2')
# Starts with a number
re.findall(r'^\d','1 is the loneliest number.')
Explanation: Starts With and Ends With
We can use the ^ to signal starts with, and the $ to signal ends with:
End of explanation
phrase = "there are 3 numbers 34 inside 5 this sentence."
re.findall(r'[^\d]',phrase)
Explanation: Note that this is for the entire string, not individual words!
Exclusion
To exclude characters, we can use the ^ symbol in conjunction with a set of brackets []. Anything inside the brackets is excluded. For example:
End of explanation
re.findall(r'[^\d]+',phrase)
Explanation: To get the words back together, use a + sign
End of explanation
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
re.findall('[^!.? ]+',test_phrase)
clean = ' '.join(re.findall('[^!.? ]+',test_phrase))
clean
Explanation: We can use this to remove punctuation from a sentence.
End of explanation
text = 'Only find the hyphen-words in this sentence. But you do not know how long-ish they are'
re.findall(r'[\w]+-[\w]+',text)
Explanation: Brackets for Grouping
As we showed above we can use brackets to group together options, for example if we wanted to find hyphenated words:
End of explanation
# Find words that start with cat and end with one of these options: 'fish','nap', or 'claw'
text = 'Hello, would you like some catfish?'
texttwo = "Hello, would you like to take a catnap?"
textthree = "Hello, have you seen this caterpillar?"
re.search(r'cat(fish|nap|claw)',text)
re.search(r'cat(fish|nap|claw)',texttwo)
# None returned
re.search(r'cat(fish|nap|claw)',textthree)
Explanation: Parentheses for Multiple Options
If we have multiple options for matching, we can use parentheses to list out these options. For Example:
End of explanation |
6,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Link analysis
(Inspired by and borrowed heavily from
Step2: Co-authorship network
Summaries maps paper ids to paper summaries. Let us now create here mappings by different criteria.
We'll start by building a mapping from authors to the set of ids of papers they authored.
We'll be using Python's sets again for that purpose.
Step3: We now build a co-authorship network, a graph linking authors, to the set of co-authors they have published with.
Step4: Now we can have a look at some basic statistics about our graph
Step5: With this data in hand, we can plot the degree distribution by showing the number of collaborators a scientist has published with
Step6: Citations network
We'll start by expanding the Citations dataset into two mappings
Step7: Let us now look at an arbitrary paper, let's say PubMed ID 16820458 ("Changes in the spoilage-related microbiota of beef during refrigerated storage under different packaging conditions"). We can now use the cited_by mapping to retrieve what we know of its list of references.
As mentioned above, because the process generating the dataset asked for papers citing a given paper (and not papers a paper cites), the papers we get through cited_by are then necessarily all members of our datasets, and we can therefore find them in Summaries.
Step8: If we look up the same paper in papers_citing, we now see that some of the cited papers are themselves in our dataset, but others are not (denoted here by '??')
Step9: Paper 17696886, for example, is not in our dataset and we do not have any direct information about it, but its repeated occurrence in other papers' citation lists does allow us to reconstruct a good portion of its references. Below is the list of papers in our dataset cited by that paper
Step10: Now that we have a better understanding about the data we're dealing with, let us obtain again some basic statistics about our graph.
Step11: Most cited papers
Let us now find which 10 papers are the most cited in our dataset.
Step12: Link Analysis for Search Engines
In order to use the citation network, we need to be able to perform some complex graph algorithms on it. To make our lives easier, we will load the data into the python package NetworkX, a package for the creation, manipulation, and study of the structure, dynamics, and function of complex networks, which provides a number of these graph algorithms (such as HITS and PageRank) out of the box.
You probably have to install the NetworkX package first.
Step13: We now have a NetworkX Directed Graph stored in G, where a node represents a paper, and an edge represents a citation. This means we can now apply the algorithms and functions of NetworkX to our graph
Step14: As this graph was generated from citations only, we need to add all isolated nodes (nodes that are not cited and do not cite other papers) as well
Step15: Assignments
Your name
Step16: [Write your answer text here]
Using the Link Analysis algorithms provided by NetworkX, calculate the PageRank score for each node in the citation network, and store them in a variable. Print out the PageRank values for the two example papers given below.
Hint
Step17: Copy your search engine from mini-assignment 3, and create a version that incorporates a paper's PageRank score in it's final score, in addition to tf-idf. Show the result of an example query, and explain your decision on how to combine the two scores (PageRank and tf-idf). | Python Code:
import pickle, bz2
from collections import *
import numpy as np
import matplotlib.pyplot as plt
# show plots inline within the notebook
%matplotlib inline
# set plots' resolution
plt.rcParams['savefig.dpi'] = 100
from IPython.display import display, HTML
Ids_file = 'data/air__Ids.pkl.bz2'
Summaries_file = 'data/air__Summaries.pkl.bz2'
Citations_file = 'data/air__Citations.pkl.bz2'
Abstracts_file = 'data/air__Abstracts.pkl.bz2'
Ids = pickle.load( bz2.BZ2File( Ids_file, 'rb' ) )
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
paper = namedtuple( 'paper', ['title', 'authors', 'year', 'doi'] )
for (id, paper_info) in Summaries.items():
Summaries[id] = paper( *paper_info )
Citations = pickle.load( bz2.BZ2File( Citations_file, 'rb' ) )
def display_summary( id, extra_text='' ):
Function for printing a paper's summary through IPython's Rich Display System.
Trims long titles or author lists, and links to the paper's DOI (when available).
s = Summaries[ id ]
title = ( s.title if s.title[-1]!='.' else s.title[:-1] )
title = title[:150].rstrip() + ('' if len(title)<=150 else '...')
if s.doi!='':
title = '<a href=http://dx.doi.org/%s>%s</a>' % (s.doi, title)
authors = ', '.join( s.authors[:5] ) + ('' if len(s.authors)<=5 else ', ...')
lines = [
title,
authors,
str(s.year),
'<small>id: %d%s</small>' % (id, extra_text)
]
display( HTML( '<blockquote>%s</blockquote>' % '<br>'.join(lines) ) )
from math import log10
def tokenize(text):
return text.split(' ')
def preprocess(tokens):
result = []
for token in tokens:
result.append(token.lower())
return result
Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )
inverted_index = defaultdict(set)
for (id, abstract) in Abstracts.items():
for term in preprocess(tokenize(abstract)):
inverted_index[term].add(id)
tf_matrix = defaultdict(Counter)
for (id, abstract) in Abstracts.items():
tf_matrix[id] = Counter(preprocess(tokenize(abstract)))
def tf(t,d):
return float(tf_matrix[d][t])
def df(t):
return float(len(inverted_index[t]))
numdocs = float(len(Abstracts))
def num_documents():
return numdocs
# We don't need to keep this object in memory any longer:
Abstracts = {}
Explanation: Link analysis
(Inspired by and borrowed heavily from: Collective Intelligence - Luís F. Simões. IR version and assignments by J.E. Hoeksema, 2014-11-12. Converted to Python 3 and minor changes by Tobias Kuhn, 2015-11-17.)
This notebook's purpose is to give examples of how to use graph algorithms to improve a search engine. We look at two graphs in particular: the co-authorship network and the citation network.
The citation network is similar to the link network of the web: Citations are like web links pointing to other documents. We can therefore apply the same network-based ranking methods.
Code from previous exercises
End of explanation
papers_of_author = defaultdict(set)
for id,p in Summaries.items():
for a in p.authors:
papers_of_author[a].add( id )
papers_of_author['Vine AK']
for id in papers_of_author['Vine AK']:
display_summary(id)
Explanation: Co-authorship network
Summaries maps paper ids to paper summaries. Let us now create here mappings by different criteria.
We'll start by building a mapping from authors to the set of ids of papers they authored.
We'll be using Python's sets again for that purpose.
End of explanation
coauthors = defaultdict(set)
for p in Summaries.values():
for a in p.authors:
coauthors[a].update( p.authors )
# The code above results in each author being listed as having co-autored with himself/herself.
# We remove these self-references here:
for a,ca in coauthors.items():
ca.remove(a)
print(', '.join( coauthors['Vine AK'] ))
Explanation: We now build a co-authorship network, a graph linking authors, to the set of co-authors they have published with.
End of explanation
print('Number of nodes: %8d (node = author)' % len(coauthors))
print('Number of links: %8d (link = collaboration between the two linked authors on at least one paper)' \
% sum( len(cas) for cas in coauthors.values() ))
Explanation: Now we can have a look at some basic statistics about our graph:
End of explanation
plt.hist( x=[ len(ca) for ca in coauthors.values() ], bins=range(55), histtype='bar', align='left', normed=True )
plt.xlabel('number of collaborators')
plt.ylabel('fraction of scientists')
plt.xlim(0,50);
Explanation: With this data in hand, we can plot the degree distribution by showing the number of collaborators a scientist has published with:
End of explanation
papers_citing = Citations # no changes needed, this is what we are storing already in the Citations dataset
cited_by = defaultdict(list)
for ref, papers_citing_ref in papers_citing.items():
for id in papers_citing_ref:
cited_by[ id ].append( ref )
Explanation: Citations network
We'll start by expanding the Citations dataset into two mappings:
papers_citing[id]: papers citing a given paper;
cited_by[id]: papers cited by a given paper (in other words, its list of references).
If we see the Citations dataset as a directed graph where papers are nodes, and citations the links between them, then papers_citing gives you the list of a node's incoming links, whereas cited_by gives you the list of its outgoing links.
The dataset was assembled by querying for papers citing a given paper. As a result, the data mapped to in cited_by (its values) is necessarily limited to ids of papers that are part of the dataset.
End of explanation
paper_id = 16820458
refs = { id : Summaries[id].title for id in cited_by[paper_id] }
print(len(refs), 'references identified for the paper with id', paper_id)
refs
Explanation: Let us now look at an arbitrary paper, let's say PubMed ID 16820458 ("Changes in the spoilage-related microbiota of beef during refrigerated storage under different packaging conditions"). We can now use the cited_by mapping to retrieve what we know of its list of references.
As mentioned above, because the process generating the dataset asked for papers citing a given paper (and not papers a paper cites), the papers we get through cited_by are then necessarily all members of our datasets, and we can therefore find them in Summaries.
End of explanation
{ id : Summaries.get(id,['??'])[0] for id in papers_citing[paper_id] }
Explanation: If we look up the same paper in papers_citing, we now see that some of the cited papers are themselves in our dataset, but others are not (denoted here by '??'):
End of explanation
paper_id2 = 17696886
refs2 = { id : Summaries[id].title for id in cited_by[paper_id2] }
print(len(refs2), 'references identified for the paper with id', paper_id2)
refs2
Explanation: Paper 17696886, for example, is not in our dataset and we do not have any direct information about it, but its repeated occurrence in other papers' citation lists does allow us to reconstruct a good portion of its references. Below is the list of papers in our dataset cited by that paper:
End of explanation
print('Number of core ids %d (100.00 %%)' % len(Ids))
with_cit = [ id for id in Ids if papers_citing[id]!=[] ]
print('Number of papers cited at least once: %d (%.2f %%)' % (len(with_cit), 100.*len(with_cit)/len(Ids)))
isolated = set( id for id in Ids if papers_citing[id]==[] and id not in cited_by )
print('Number of isolated nodes: %d (%.2f %%)\n\t' \
'(papers that are not cited by any others, nor do themselves cite any in the dataset)'% (
len(isolated), 100.*len(isolated)/len(Ids) ))
noCit_withRefs = [ id for id in Ids if papers_citing[id]==[] and id in cited_by ]
print('Number of dataset ids with no citations, but known references: %d (%.2f %%)' % (
len(noCit_withRefs), 100.*len(noCit_withRefs)/len(Ids)))
print('(percentages calculated with respect to just the core ids (members of `Ids`) -- exclude outsider ids)\n')
Ids_set = set( Ids )
citing_Ids = set( cited_by.keys() ) # == set( c for citing in papers_citing.itervalues() for c in citing )
outsiders = citing_Ids - Ids_set # set difference: removes from `citing_Ids` all the ids that occur in `Ids_set`
nodes = citing_Ids | Ids_set - isolated # set union, followed by set difference
print('Number of (non-isolated) nodes in the graph: %d\n\t(papers with at least 1 known citation, or 1 known reference)' % len(nodes))
print(len( citing_Ids ), 'distinct ids are citing papers in our dataset.')
print('Of those, %d (%.2f %%) are ids from outside the dataset.\n' % ( len(outsiders), 100.*len(outsiders)/len(citing_Ids) ))
all_cits = [ c for citing in papers_citing.values() for c in citing ]
outsider_cits = [ c for citing in papers_citing.values() for c in citing if c in outsiders ]
print('Number of links (citations) in the graph:', len(all_cits))
print('A total of %d citations are logged in the dataset.' % len(all_cits))
print('Citations by ids from outside the dataset comprise %d (%.2f %%) of that total.\n' % (
len(outsider_cits),
100.*len(outsider_cits)/len(all_cits) ))
Explanation: Now that we have a better understanding about the data we're dealing with, let us obtain again some basic statistics about our graph.
End of explanation
nr_cits_per_paper = [ (id, len(cits)) for (id,cits) in papers_citing.items() ]
for (id, cits) in sorted( nr_cits_per_paper, key=lambda i:i[1], reverse=True )[:10]:
display_summary( id, ', nr. citations: %d' % cits )
Explanation: Most cited papers
Let us now find which 10 papers are the most cited in our dataset.
End of explanation
import networkx as nx
G = nx.DiGraph(cited_by)
Explanation: Link Analysis for Search Engines
In order to use the citation network, we need to be able to perform some complex graph algorithms on it. To make our lives easier, we will load the data into the python package NetworkX, a package for the creation, manipulation, and study of the structure, dynamics, and function of complex networks, which provides a number of these graph algorithms (such as HITS and PageRank) out of the box.
You probably have to install the NetworkX package first.
End of explanation
print(nx.info(G))
print(nx.is_directed(G))
print(nx.density(G))
Explanation: We now have a NetworkX Directed Graph stored in G, where a node represents a paper, and an edge represents a citation. This means we can now apply the algorithms and functions of NetworkX to our graph:
End of explanation
G.add_nodes_from(isolated)
print(nx.info(G))
print(nx.is_directed(G))
print(nx.density(G))
Explanation: As this graph was generated from citations only, we need to add all isolated nodes (nodes that are not cited and do not cite other papers) as well:
End of explanation
# Add your code here
Explanation: Assignments
Your name: ...
Plot the in-degree distribution (the distribution of the number of incoming links) for the citation network. What can you tell about the shape of this distribution, and what does this tell us about the network?
End of explanation
# Add your code here
# print PageRank for paper 7168798
# print PageRank for paper 21056779
Explanation: [Write your answer text here]
Using the Link Analysis algorithms provided by NetworkX, calculate the PageRank score for each node in the citation network, and store them in a variable. Print out the PageRank values for the two example papers given below.
Hint: the pagerank_scipy implementation tends to be considerably faster than its regular pagerank counterpart (but you have to install the SciPy package for that).
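One possible shape of the call, shown only as a hedged sketch of the NetworkX API rather than the intended solution, and assuming both example ids are nodes in G:
pagerank_scores = nx.pagerank_scipy(G)     # dict mapping each node (paper id) to its PageRank score
print(pagerank_scores[7168798])
print(pagerank_scores[21056779])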
End of explanation
# Add your code here
Explanation: Copy your search engine from mini-assignment 3, and create a version that incorporates a paper's PageRank score in its final score, in addition to tf-idf. Show the result of an example query, and explain your decision on how to combine the two scores (PageRank and tf-idf).
End of explanation |
6,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Debug" data-toc-modified-id="Setup---Debug-2.1"><span class="toc-item-num">2.1 </span>Setup - Debug</a></span></li><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.2"><span class="toc-item-num">2.2 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---working-folder-paths" data-toc-modified-id="Setup---working-folder-paths-2.3"><span class="toc-item-num">2.3 </span>Setup - working folder paths</a></span></li><li><span><a href="#Setup---logging" data-toc-modified-id="Setup---logging-2.4"><span class="toc-item-num">2.4 </span>Setup - logging</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-2.5"><span class="toc-item-num">2.5 </span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.6"><span class="toc-item-num">2.6 </span>Setup - Initialize Django</a></span></li><li><span><a href="#Setup---Initialize-LoggingHelper" data-toc-modified-id="Setup---Initialize-LoggingHelper-2.7"><span class="toc-item-num">2.7 </span>Setup - Initialize LoggingHelper</a></span></li><li><span><a href="#Setup---initialize-ProquestHNPNewspaper" data-toc-modified-id="Setup---initialize-ProquestHNPNewspaper-2.8"><span class="toc-item-num">2.8 </span>Setup - initialize ProquestHNPNewspaper</a></span><ul class="toc-item"><li><span><a href="#load-from-database" data-toc-modified-id="load-from-database-2.8.1"><span class="toc-item-num">2.8.1 </span>load from database</a></span></li><li><span><a href="#set-up-manually" data-toc-modified-id="set-up-manually-2.8.2"><span class="toc-item-num">2.8.2 </span>set up manually</a></span></li></ul></li></ul></li><li><span><a href="#Find-articles-to-be-loaded" data-toc-modified-id="Find-articles-to-be-loaded-3"><span class="toc-item-num">3 </span>Find articles to be loaded</a></span><ul class="toc-item"><li><span><a href="#Uncompress-files" data-toc-modified-id="Uncompress-files-3.1"><span class="toc-item-num">3.1 </span>Uncompress files</a></span></li><li><span><a href="#Work-with-uncompressed-files" data-toc-modified-id="Work-with-uncompressed-files-3.2"><span class="toc-item-num">3.2 </span>Work with uncompressed files</a></span></li><li><span><a href="#parse-and-load-XML-files" data-toc-modified-id="parse-and-load-XML-files-3.3"><span class="toc-item-num">3.3 </span>parse and load XML files</a></span></li><li><span><a href="#build-list-of-all-ObjectTypes" data-toc-modified-id="build-list-of-all-ObjectTypes-3.4"><span class="toc-item-num">3.4 </span>build list of all ObjectTypes</a></span></li><li><span><a href="#map-files-to-types" data-toc-modified-id="map-files-to-types-3.5"><span class="toc-item-num">3.5 </span>map files to types</a></span></li></ul></li><li><span><a href="#XML-analysis" data-toc-modified-id="XML-analysis-4"><span class="toc-item-num">4 </span>XML analysis</a></span></li><li><span><a href="#TODO" data-toc-modified-id="TODO-5"><span class="toc-item-num">5 </span>TODO</a></span></li></ul></div>
Introduction
Back to Table of Contents
This is a notebook that expands on the OpenCalais code in the file article_coding.py, also in this folder. It includes more sections on selecting publications you want to submit to OpenCalais as an example. It is intended to be copied and re-used.
Setup
Back to Table of Contents
Setup - Debug
Back to Table of Contents
Step1: Setup - Imports
Back to Table of Contents
Step2: Setup - working folder paths
Back to Table of Contents
What data are we looking at?
Step3: Setup - logging
Back to Table of Contents
configure logging for this notebook's kernel (If you do not run this cell, you'll get the django application's logging configuration.)
Step4: Setup - virtualenv jupyter kernel
Back to Table of Contents
If you are using a virtualenv, make sure that you
Step5: Setup - Initialize LoggingHelper
Back to Table of Contents
Create a LoggingHelper instance to use to log debug and also print at the same time.
Preconditions
Step6: Setup - initialize ProquestHNPNewspaper
Back to Table of Contents
Create and initialize an instance of ProquestHNPNewspaper for this paper.
load from database
Back to Table of Contents
Step7: set up manually
Back to Table of Contents
Step8: If desired, add to database.
Step9: Find articles to be loaded
Back to Table of Contents
Specify which folder of XML files should be loaded into the system, then process all files within the folder.
The compressed archives from proquest_hnp just contain publication XML files, no containing folder.
To process
Step10: For each *.zip file in the paper's source folder
Step11: Work with uncompressed files
Back to Table of Contents
Change working directories to the uncompressed paper path.
Step12: parse and load XML files
Back to Table of Contents
Load one of the files into memory and see what we can do with it. Beautiful Soup?
Looks like the root element is "Record", then the high-level type of the article is "ObjectType".
ObjectType values
Step13: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001
----> XML file count
Step14: Example output | Python Code:
debug_flag = False
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Debug" data-toc-modified-id="Setup---Debug-2.1"><span class="toc-item-num">2.1 </span>Setup - Debug</a></span></li><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.2"><span class="toc-item-num">2.2 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---working-folder-paths" data-toc-modified-id="Setup---working-folder-paths-2.3"><span class="toc-item-num">2.3 </span>Setup - working folder paths</a></span></li><li><span><a href="#Setup---logging" data-toc-modified-id="Setup---logging-2.4"><span class="toc-item-num">2.4 </span>Setup - logging</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-2.5"><span class="toc-item-num">2.5 </span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.6"><span class="toc-item-num">2.6 </span>Setup - Initialize Django</a></span></li><li><span><a href="#Setup---Initialize-LoggingHelper" data-toc-modified-id="Setup---Initialize-LoggingHelper-2.7"><span class="toc-item-num">2.7 </span>Setup - Initialize LoggingHelper</a></span></li><li><span><a href="#Setup---initialize-ProquestHNPNewspaper" data-toc-modified-id="Setup---initialize-ProquestHNPNewspaper-2.8"><span class="toc-item-num">2.8 </span>Setup - initialize ProquestHNPNewspaper</a></span><ul class="toc-item"><li><span><a href="#load-from-database" data-toc-modified-id="load-from-database-2.8.1"><span class="toc-item-num">2.8.1 </span>load from database</a></span></li><li><span><a href="#set-up-manually" data-toc-modified-id="set-up-manually-2.8.2"><span class="toc-item-num">2.8.2 </span>set up manually</a></span></li></ul></li></ul></li><li><span><a href="#Find-articles-to-be-loaded" data-toc-modified-id="Find-articles-to-be-loaded-3"><span class="toc-item-num">3 </span>Find articles to be loaded</a></span><ul class="toc-item"><li><span><a href="#Uncompress-files" data-toc-modified-id="Uncompress-files-3.1"><span class="toc-item-num">3.1 </span>Uncompress files</a></span></li><li><span><a href="#Work-with-uncompressed-files" data-toc-modified-id="Work-with-uncompressed-files-3.2"><span class="toc-item-num">3.2 </span>Work with uncompressed files</a></span></li><li><span><a href="#parse-and-load-XML-files" data-toc-modified-id="parse-and-load-XML-files-3.3"><span class="toc-item-num">3.3 </span>parse and load XML files</a></span></li><li><span><a href="#build-list-of-all-ObjectTypes" data-toc-modified-id="build-list-of-all-ObjectTypes-3.4"><span class="toc-item-num">3.4 </span>build list of all ObjectTypes</a></span></li><li><span><a href="#map-files-to-types" data-toc-modified-id="map-files-to-types-3.5"><span class="toc-item-num">3.5 </span>map files to types</a></span></li></ul></li><li><span><a href="#XML-analysis" data-toc-modified-id="XML-analysis-4"><span class="toc-item-num">4 </span>XML analysis</a></span></li><li><span><a href="#TODO" data-toc-modified-id="TODO-5"><span class="toc-item-num">5 </span>TODO</a></span></li></ul></div>
Introduction
Back to Table of Contents
This is a notebook that expands on the OpenCalais code in the file article_coding.py, also in this folder. It includes more sections on selecting publications you want to submit to OpenCalais as an example. It is intended to be copied and re-used.
Setup
Back to Table of Contents
Setup - Debug
Back to Table of Contents
End of explanation
import datetime
import glob
import logging
import lxml
import os
import six
import xml
import xmltodict
import zipfile
Explanation: Setup - Imports
Back to Table of Contents
End of explanation
# paper identifier
paper_identifier = "BostonGlobe"
archive_identifier = "BG_20171002210239_00001"
# source
source_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data"
source_paper_path = "{}/{}".format( source_paper_folder, paper_identifier )
# uncompressed
uncompressed_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/uncompressed"
uncompressed_paper_path = "{}/{}".format( uncompressed_paper_folder, paper_identifier )
# make sure an identifier is set before you make a path here.
if ( ( archive_identifier is not None ) and ( archive_identifier != "" ) ):
# identifier is set.
source_archive_file = "{}.zip".format( archive_identifier )
source_archive_path = "{}/{}".format( source_paper_path, source_archive_file )
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, archive_identifier )
#-- END check to see if archive_identifier present. --#
%pwd
# current working folder
current_working_folder = "/home/jonathanmorgan/work/django/research/work/phd_work/data/article_loading/proquest_hnp/{}".format( paper_identifier )
current_datetime = datetime.datetime.now()
current_date_string = current_datetime.strftime( "%Y-%m-%d-%H-%M-%S" )
Explanation: Setup - working folder paths
Back to Table of Contents
What data are we looking at?
End of explanation
logging_file_name = "{}/research-data_load-{}-{}.log.txt".format( current_working_folder, paper_identifier, current_date_string )
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
filename = logging_file_name,
filemode = 'w' # set to 'a' if you want to append, rather than overwrite each time.
)
Explanation: Setup - logging
Back to Table of Contents
Configure logging for this notebook's kernel. (If you do not run this cell, you'll get the django application's logging configuration.)
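Optionally (not part of the original setup), log records can also be echoed into the notebook output by attaching a StreamHandler to the root logger after the cell above has run:
```python
# optional: also echo WARNING-and-above log records into the notebook output.
import logging
import sys

console_handler = logging.StreamHandler( sys.stdout )
console_handler.setLevel( logging.WARNING )
logging.getLogger().addHandler( console_handler )
```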
End of explanation
# init django
django_init_folder = "/home/jonathanmorgan/work/django/research/work/phd_work"
django_init_path = "django_init.py"
if( ( django_init_folder is not None ) and ( django_init_folder != "" ) ):
# add folder to front of path.
django_init_path = "{}/{}".format( django_init_folder, django_init_path )
#-- END check to see if django_init folder. --#
%run $django_init_path
# context_text imports
from context_text.article_coding.article_coding import ArticleCoder
from context_text.article_coding.article_coding import ArticleCoding
from context_text.article_coding.open_calais_v2.open_calais_v2_article_coder import OpenCalaisV2ArticleCoder
from context_text.collectors.newsbank.newspapers.GRPB import GRPB
from context_text.collectors.newsbank.newspapers.DTNB import DTNB
from context_text.models import Article
from context_text.models import Article_Subject
from context_text.models import Newspaper
from context_text.shared.context_text_base import ContextTextBase
# context_text_proquest_hnp
from context_text_proquest_hnp.proquest_hnp_newspaper_helper import ProquestHNPNewspaperHelper
Explanation: Setup - virtualenv jupyter kernel
Back to Table of Contents
If you are using a virtualenv, make sure that you:
have installed your virtualenv as a kernel.
choose the kernel for your virtualenv as the kernel for your notebook (Kernel --> Change kernel).
Since I use a virtualenv, I need to get it activated somehow inside this notebook. One option is to run ../dev/wsgi.py in this notebook, to configure the python environment manually as if you had activated the sourcenet virtualenv. To do this, you'd make a code cell that contains:
%run ../dev/wsgi.py
This is sketchy, however, because of the changes it makes to your Python environment within the context of whatever your current kernel is. I'd worry about collisions with the actual Python 3 kernel. Better, one can install their virtualenv as a separate kernel. Steps:
activate your virtualenv:
workon research
in your virtualenv, install the package ipykernel.
pip install ipykernel
use the ipykernel python program to install the current environment as a kernel:
python -m ipykernel install --user --name <env_name> --display-name "<display_name>"
sourcenet example:
python -m ipykernel install --user --name sourcenet --display-name "research (Python 3)"
More details: http://ipython.readthedocs.io/en/stable/install/kernel_install.html
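To confirm the kernel was registered afterwards (assuming a standard Jupyter install), list the installed kernel specs:
jupyter kernelspec list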
Setup - Initialize Django
Back to Table of Contents
First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings.
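django_init.py itself is not shown in this notebook; a standalone initialization script of this kind typically looks something like the following sketch (the settings module name here is an assumption for illustration, not the project's actual value):
```python
# django_init.py - minimal sketch of standalone Django initialization (assumed, not the actual file)
import os
import django

# the settings module name below is an assumption for illustration only
os.environ.setdefault( "DJANGO_SETTINGS_MODULE", "research.settings" )
django.setup()
```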
End of explanation
# python_utilities
from python_utilities.logging.logging_helper import LoggingHelper
# init
my_logging_helper = LoggingHelper()
my_logging_helper.set_logger_name( "proquest_hnp-article-loading-{}".format( paper_identifier ) )
log_message = None
Explanation: Setup - Initialize LoggingHelper
Back to Table of Contents
Create a LoggingHelper instance to use to log debug and also print at the same time.
Preconditions: Must be run after Django is initialized, since python_utilities is in the django path.
End of explanation
my_paper = ProquestHNPNewspaperHelper()
paper_instance = my_paper.initialize_from_database( paper_identifier )
my_paper.source_all_papers_folder = source_paper_folder
my_paper.destination_all_papers_folder = uncompressed_paper_folder
print( my_paper )
print( paper_instance )
Explanation: Setup - initialize ProquestHNPNewspaper
Back to Table of Contents
Create and initialize an instance of ProquestHNPNewspaper for this paper.
load from database
Back to Table of Contents
End of explanation
my_paper = ProquestHNPNewspaperHelper()
my_paper.paper_identifier = paper_identifier
my_paper.source_all_papers_folder = source_paper_folder
my_paper.source_paper_path = source_paper_path
my_paper.destination_all_papers_folder = uncompressed_paper_folder
my_paper.destination_paper_path = uncompressed_paper_path
my_paper.paper_start_year = 1872
my_paper.paper_end_year = 1985
my_newspaper = Newspaper.objects.get( id = 6 )
my_paper.newspaper = my_newspaper
Explanation: set up manually
Back to Table of Contents
End of explanation
phnp_newspaper_instance = my_paper.create_PHNP_newspaper()
print( phnp_newspaper_instance )
Explanation: If desired, add to database.
End of explanation
# create folder to hold the results of decompressing paper's zip files.
did_uncomp_paper_folder_exist = my_paper.make_dest_paper_folder()
Explanation: Find articles to be loaded
Back to Table of Contents
Specify which folder of XML files should be loaded into the system, then process all files within the folder.
The compressed archives from proquest_hnp just contain publication XML files, no containing folder.
To process:
uncompressed paper folder ( <paper_folder> ) - make a folder in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed for the paper whose data you are working with, named the same as the paper's folder in /mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data.
for example, for the Boston Globe, name it "BostonGlobe".
uncompressed archive folder ( <archive_folder> ) - inside a given paper's folder in uncompressed, for each archive file, create a folder named the same as the archive file, but with no ".zip" at the end.
For example, for the file "BG_20171002210239_00001.zip", make a folder named "BG_20171002210239_00001".
path should be "<paper_folder>/<archive_name_no_zip>".
unzip the archive into this folder:
unzip <path_to_zip> -d <archive_folder>
Uncompress files
Back to Table of Contents
See if the uncompressed paper folder exists. If not, set flag and create it.
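make_dest_paper_folder() wraps this check; the core of it amounts to something like this standard-library sketch (not the helper's actual code):
```python
import os

# create the paper's uncompressed folder only if it does not already exist, and remember which case it was.
if not os.path.exists( uncompressed_paper_path ):
    os.makedirs( uncompressed_paper_path )
    did_uncomp_paper_folder_exist = False
else:
    did_uncomp_paper_folder_exist = True
```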
End of explanation
# decompress the files
my_paper.uncompress_paper_zip_files()
Explanation: For each *.zip file in the paper's source folder:
parse file name from path returned by glob.
parse the part before ".zip" from the file name. This is referred to subsequently as the "archive identifier".
check if folder named the same as the "archive identifier" is present.
If no:
create it.
then, uncompress the archive into it.
If yes:
output a message. Don't want to uncompress if it was already uncompressed once.
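uncompress_paper_zip_files() encapsulates that logic; a rough standard-library sketch of the same per-archive steps (not the helper's actual implementation) would be:
```python
import glob
import os
import zipfile

for zip_path in glob.glob( "{}/*.zip".format( source_paper_path ) ):
    # archive identifier = file name with the trailing ".zip" removed.
    archive_identifier = os.path.basename( zip_path )[ : -len( ".zip" ) ]
    archive_folder = "{}/{}".format( uncompressed_paper_path, archive_identifier )
    if not os.path.exists( archive_folder ):
        # create the folder, then unzip the archive into it.
        os.makedirs( archive_folder )
        with zipfile.ZipFile( zip_path ) as zip_file:
            zip_file.extractall( archive_folder )
    else:
        # already uncompressed once - skip it.
        print( "Skipping {}, {} already exists.".format( zip_path, archive_folder ) )
```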
End of explanation
%cd $uncompressed_paper_path
%ls
Explanation: Work with uncompressed files
Back to Table of Contents
Change working directories to the uncompressed paper path.
End of explanation
# loop over files in the current archive folder path.
object_type_to_count_map = my_paper.process_archive_object_types( uncompressed_archive_path )
Explanation: parse and load XML files
Back to Table of Contents
Load one of the files into memory and see what we can do with it. Beautiful Soup?
Looks like the root element is "Record", then the high-level type of the article is "ObjectType".
ObjectType values:
Advertisement
...
Good options for XML parser:
lxml.etree - https://stackoverflow.com/questions/12290091/reading-xml-file-and-fetching-its-attributes-value-in-python
xmltodict - https://docs.python-guide.org/scenarios/xml/
beautifulsoup using lxml
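As a quick structural check, a single file can be parsed with xmltodict and the ObjectType pulled out along these lines (the exact nesting is an assumption based on the notes above):
```python
import xmltodict

# for a given xml_file_path, pull the ObjectType out of the Record element.
with open( xml_file_path ) as xml_file:
    record_dict = xmltodict.parse( xml_file.read() )
object_type_value = record_dict.get( "Record", {} ).get( "ObjectType" )
print( object_type_value )
```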
End of explanation
xml_folder_list = glob.glob( "{}/*".format( uncompressed_paper_path ) )
print( "folder_list: {}".format( xml_folder_list ) )
# build map of all object types for a paper to the overall counts of each
paper_object_type_to_count_map = my_paper.process_paper_object_types()
Explanation: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001
----> XML file count: 5752
Counters:
- Processed 5752 files
- No Record: 0
- No ObjectType: 0
- No ObjectType value: 0
ObjectType values and occurrence counts:
- A|d|v|e|r|t|i|s|e|m|e|n|t: 1902
- Article|Feature: 1792
- N|e|w|s: 53
- Commentary|Editorial: 36
- G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n: 488
- S|t|o|c|k| |Q|u|o|t|e: 185
- Advertisement|Classified Advertisement: 413
- E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c: 31
- Correspondence|Letter to the Editor: 119
- Front Matter|Table of Contents: 193
- O|b|i|t|u|a|r|y: 72
- F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y: 107
- I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h: 84
- Marriage Announcement|News: 6
- I|l|l|u|s|t|r|a|t|i|o|n: 91
- R|e|v|i|e|w: 133
- C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t: 30
- News|Legal Notice: 17
build list of all ObjectTypes
Back to Table of Contents
Loop over all folders in the paper path. For each folder, grab all files in the folder. For each file, parse the XML, then get the ObjectType value and, if it isn't already in the map of object types to counts, add it. Increment the count.
From command line, in the uncompressed BostonGlobe folder:
find . -type f -iname "*.xml" | wc -l
resulted in 11,374,500 articles. That is quite a few.
End of explanation
# directory to work in.
uncompressed_archive_folder = "BG_20151211054235_00003"
uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder )
# build map of file types to lists of files of that type in specified folder.
object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path )
# which types do we want to preview?
types_to_output = master_object_type_list
#types_to_output = [ 'Advertisement|Classified Advertisement' ]
# declare variables
xml_file_path_list = None
xml_file_path_example_list = None
xml_file_path = None
xml_file = None
xml_dict = None
xml_string = None
# loop over types
for object_type in types_to_output:
# print type and count
xml_file_path_list = object_type_to_file_path_map.get( object_type, [] )
xml_file_path_example_list = xml_file_path_list[ : 10 ]
print( "\n- {}:".format( object_type ) )
for xml_file_path in xml_file_path_example_list:
print( "----> {}".format( xml_file_path ) )
# try to parse the file
with open( xml_file_path ) as xml_file:
# parse XML
xml_dict = xmltodict.parse( xml_file.read() )
#-- END with open( xml_file_path ) as xml_file: --#
# pretty-print
xml_string = xmltodict.unparse( xml_dict, pretty = True )
# output
print( xml_string )
#-- END loop over example file paths. --#
#-- END loop over object types. --#
Explanation: Example output:
XML file count: 5752
Counters:
- Processed 5752 files
- No Record: 0
- No ObjectType: 0
- No ObjectType value: 0
ObjectType values and occurrence counts:
- A|d|v|e|r|t|i|s|e|m|e|n|t: 2114224
- Feature|Article: 5271887
- I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h: 249942
- O|b|i|t|u|a|r|y: 625143
- G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n: 1083164
- S|t|o|c|k| |Q|u|o|t|e: 202776
- N|e|w|s: 140274
- I|l|l|u|s|t|r|a|t|i|o|n: 106925
- F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y: 386421
- E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c: 78993
- Editorial|Commentary: 156342
- C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t: 68356
- Classified Advertisement|Advertisement: 291533
- R|e|v|i|e|w: 86889
- Table of Contents|Front Matter: 69798
- Letter to the Editor|Correspondence: 202071
- News|Legal Notice: 24053
- News|Marriage Announcement: 41314
- B|i|r|t|h| |N|o|t|i|c|e: 926
- News|Military/War News: 3
- U|n|d|e|f|i|n|e|d: 5
- Article|Feature: 137526
- Front Matter|Table of Contents: 11195
- Commentary|Editorial: 3386
- Marriage Announcement|News: 683
- Correspondence|Letter to the Editor: 7479
- Legal Notice|News: 1029
- Advertisement|Classified Advertisement: 12163
map files to types
Back to Table of Contents
Choose a directory, then loop over the files in the directory to build a map of types to lists of file names.
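map_archive_folder_files_to_types() presumably builds that dictionary by parsing each file and appending its path under its ObjectType; a minimal sketch of the idea (not the helper's actual code):
```python
import glob
import xmltodict
from collections import defaultdict

# map each ObjectType value to the list of file paths that carry it (sketch only).
object_type_to_files = defaultdict( list )
for xml_path in glob.glob( "{}/*.xml".format( uncompressed_archive_path ) ):
    with open( xml_path ) as xml_file:
        record_dict = xmltodict.parse( xml_file.read() )
    object_type_to_files[ record_dict.get( "Record", {} ).get( "ObjectType" ) ].append( xml_path )
```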
End of explanation |
6,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
What are the underlying biophysics that govern astrocyte behavior? To explore this question we have at our disposal a large dataset of observations. A key concept we must face in the analysis of this data is that of uncertainty. Sources of uncertainty will be noise in our measurements due to the recording apparatus, the finite precision of our measurements, as well as the intrinsic stochasticity of the process being measured. Perhaps the most important source of uncertainty we will consider is due to there being sources of variability that are themselves unobserved. Probability theory provides us with a framework to reason in the presence of uncertainty and information theory allows us to quantify uncertainty. We will make use of both in our exploration of the data. A precise definition of the data can be found in appendix B.
We begin by considering the data and the processes that give rise to it. The dataset is composed of microscopy recordings of astrocytes in the visual cortex of ferrets. At the highest level, we have noise being added to the data by the recording apparatus. A model for the noise has been developed and can be found in Apprendix A. The next level of uncertainty comes as a consquence of discretizing the continuous process. Consider for example a military satellite tracking a vehicle. If one wishes to predict the future location of the van, the prediction is limited to be within one of the discrete cells that make up its measurements. However, the true location of the van could be anywhere within that grid cell. Lastly, there is intrinsic stochasticity at the molecular level that we ignore for now. We consider the fluctuations taking place at that scale to be averaged out in our observations.
The unobserved sources of variability will be our primary focus. Before we address that, let us lay down some preliminary concepts. We are going to assume that there exists some true unknown process governing the activity of an astrocyte. Our measurements can then be considered snapshots of this process at various points throughout its life. This suggests that these snapshots (our observations) are a function of the underlying data generating process. Considering the many sources of uncertainty outlined above, we will describe this process as a probability distribution. There will be many ways to interpret the data as a probability, but we will begin by considering any one frame of a video to be the result of a data generating distribution, $P_{data}(x)$. Here $x$ is considered to be an image with $n$ pixels. So $P_{data}$ is a joint distribution over each pixel of the frame with a probability density function (pdf), $p_{data}(x_1,x_2,\dots,x_n)$.
To build intuition about what $p_{data}(x)$ is and how it relates to the assumed data generating process, we will explore a simple example. Take an image with only 2 pixels... [$x_1$,$x_2$] where both $x_1$ and $x_2$ are in [0,1]. Each image can be considered a two dimensional point in $\mathbb{R}^2$. All possible images would occupy a square in the 2 dimensional plane as follows
Step1: This data generating distribution assumes both pixels are uniformly distributed and independent (no correlation). The 2 pixel images we see from this distribution will have no implicit structure and appear as random noise.
Step2: Now consider the case where there is some process correlating the two variables. This would be similar to the underlying biophysics governing the activity of an astrocyte. In that case, the pixels would be correlated in some manner due to the mechanism driving the cell and we would see structure in the microscopy recordings. In this simple case, let's consider a direct correlation of the form $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}+\epsilon$ where $\epsilon$ is a noise term coming from a low variability normal distribution $\epsilon \sim N(0,\frac{1}{10})$. We see below that in this case, the images plotted in two dimensions resulting from this distribution form a distinct pattern. In addition if we look at the images themselves one may be able to see a pattern...
Step3: We will refer to the structure suggested by the two dimensional points as the 'manifold'. This is a common practice when analyzing images. A 27 by 27 dimensional image will be a point in 784 dimensional space. If we are examining images with structure, various images of the number 2 for example, then it turns out that these images will form a manifold in 784 dimensional space. In most cases, as is the case in our contrived example, this manifold exists in a lower dimensional space than that of the images themselves. The goal is to 'learn' this manifold. In our simple case we can describe the manifold as a function of only 1 variable $$f(t) = <t,\frac{1}{2} \cos(2\pi t)+\frac{1}{2}>$$ This is what we would call the underlying data generating process. In practice we usually describe the manifold in terms of a probability distribution. We will refer to the data generating distribution in our example as $p_{test}(x_1, x_2)$. Why did we choose a probability to describe the manifold created by the data generating process? How might this probability be interpreted?
Learning the actual distribution turns out to be a rather difficult task. Here we will use a common non parametric technique for describing distributions, the histrogram. Looking at a histogram of the images, or two dimensional points, will give us insight into the structure of the distribution from which they came.
Step4: As our intuition might have suggested, the data generating distribution looks very similar to the structure suggested by the two dimensional images plotted above. There is high probability very near the actual curve $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}$ and low probability as we move away. We imposed the uncertainty via the Gaussian noise term $\epsilon$. However, in real data the uncertainty can be due to the myriad of sources outlined above. In these cases a complex probability distribution isn't an arbitrary choice for representing the data, it becomes necessary [cite Cristopher Bishop 2006]. | Python Code:
x1 = np.random.uniform(size=500)
x2 = np.random.uniform(size=500)
plt.scatter(x1,x2); plt.xlim(-0.25,1.25); plt.ylim(-0.25,1.25)
plt.grid(); plt.show()
Explanation: Introduction
What are the underlying biophysics that govern astrocyte behavior? To explore this question we have at our disposal a large dataset of observations. A key concept we must face in the analysis of this data is that of uncertainty. Sources of uncertainty will be noise in our measurements due to the recording apparatus, the finite precision of our measurements, as well as the intrinsic stochasticity of the process being measured. Perhaps the most important source of uncertainty we will consider is due to there being sources of variability that are themselves unobserved. Probability theory provides us with a framework to reason in the presence of uncertainty and information theory allows us to quantify uncertainty. We will make use of both in our exploration of the data. A precise definition of the data can be found in appendix B.
We begin by considering the data and the processes that give rise to it. The dataset is composed of microscopy recordings of astrocytes in the visual cortex of ferrets. At the highest level, we have noise being added to the data by the recording apparatus. A model for the noise has been developed and can be found in Appendix A. The next level of uncertainty comes as a consequence of discretizing the continuous process. Consider for example a military satellite tracking a vehicle. If one wishes to predict the future location of the vehicle, the prediction is limited to be within one of the discrete cells that make up its measurements. However, the true location of the vehicle could be anywhere within that grid cell. Lastly, there is intrinsic stochasticity at the molecular level that we ignore for now. We consider the fluctuations taking place at that scale to be averaged out in our observations.
The unobserved sources of variability will be our primary focus. Before we address that, let us lay down some preliminary concepts. We are going to assume that there exists some true unknown process governing the activity of an astrocyte. Our measurements can then be considered snapshots of this process at various points throughout its life. This suggests that these snapshots (our observations) are a function of the underlying data generating process. Considering the many sources of uncertainty outlined above, we will describe this process as a probability distribution. There will be many ways to interpret the data as a probability, but we will begin by considering any one frame of a video to be the result of a data generating distribution, $P_{data}(x)$. Here $x$ is considered to be an image with $n$ pixels. So $P_{data}$ is a joint distribution over each pixel of the frame with a probability density function (pdf), $p_{data}(x_1,x_2,\dots,x_n)$.
To build intuition about what $p_{data}(x)$ is and how it relates to the assumed data generating process, we will explore a simple example. Take an image with only 2 pixels... [$x_1$,$x_2$] where both $x_1$ and $x_2$ are in [0,1]. Each image can be considered a two dimensional point in $\mathbb{R}^2$. All possible images would occupy a square in the 2 dimensional plane as follows
End of explanation
random_images = [(x1[i],x2[i]) for i in np.random.randint(500,size=10)]
for im in random_images:
plt.figure(); plt.imshow([im], cmap='gray', vmin=0, vmax=1)
Explanation: This data generating distribution assumes both pixels are uniformly distributed and independent (no correlation). The 2 pixel images we see from this distribution will have no implicit structure and appear as random noise.
End of explanation
x1 = lambda x2: 0.5*np.cos(2*np.pi*x2)+0.5
x2 = np.linspace(0,1,100)
eps = np.random.normal(scale=0.1, size=100)
plt.scatter(x2, x1(x2)+eps); plt.xlim(-0.25,1.25); plt.ylim(-0.25,1.25); plt.axes().set_aspect('equal')
structured_images = zip(x1(np.linspace(0,1,10)), np.linspace(0,1,10))
for im in structured_images:
plt.figure(); plt.imshow([im], cmap='gray', vmin=0, vmax=1)
Explanation: Now consider the case where there is some process correlating the two variables. This would be similar to the underlying biophysics governing the activity of an astrocyte. In that case, the pixels would be correlated in some manner due to the mechanism driving the cell and we would see structure in the microscopy recordings. In this simple case, let's consider a direct correlation of the form $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}+\epsilon$ where $\epsilon$ is a noise term coming from a low variability normal distribution $\epsilon \sim N(0,\frac{1}{10})$. We see below that in this case, the images plotted in two dimensions resulting from this distribution form a distinct pattern. In addition if we look at the images themselves one may be able to see a pattern...
End of explanation
from matplotlib.colors import LogNorm
x2 = np.random.uniform(size=10000)
eps = np.random.normal(scale=0.1, size=10000)
hist2d = plt.hist2d(x2,x1(x2)+eps, bins=50, norm=LogNorm())
plt.colorbar(); plt.xlim(-0.25,1.25); plt.ylim(-0.5,1.5)
plt.show()
Explanation: We will refer to the structure suggested by the two dimensional points as the 'manifold'. This is a common practice when analyzing images. A 27 by 27 dimensional image will be a point in 784 dimensional space. If we are examining images with structure, various images of the number 2 for example, then it turns out that these images will form a manifold in 784 dimensional space. In most cases, as is the case in our contrived example, this manifold exists in a lower dimensional space than that of the images themselves. The goal is to 'learn' this manifold. In our simple case we can describe the manifold as a function of only 1 variable $$f(t) = <t,\frac{1}{2} \cos(2\pi t)+\frac{1}{2}>$$ This is what we would call the underlying data generating process. In practice we usually describe the manifold in terms of a probability distribution. We will refer to the data generating distribution in our example as $p_{test}(x_1, x_2)$. Why did we choose a probability to describe the manifold created by the data generating process? How might this probability be interpreted?
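To make the link between the histogram and a density estimate explicit, the raw counts can be normalized into a rough empirical pdf (a quick sketch using the hist2d result above):
```python
# normalize the 2D histogram counts into a rough empirical pdf (counts / (total * bin area))
counts, xedges, yedges = hist2d[0], hist2d[1], hist2d[2]
bin_area = (xedges[1] - xedges[0]) * (yedges[1] - yedges[0])
p_empirical = counts / (counts.sum() * bin_area)
print(p_empirical.sum() * bin_area)  # integrates to ~1.0
```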
Learning the actual distribution turns out to be a rather difficult task. Here we will use a common non parametric technique for describing distributions, the histrogram. Looking at a histogram of the images, or two dimensional points, will give us insight into the structure of the distribution from which they came.
End of explanation
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
X,Y = np.mgrid[0:50,0:50]
ax.plot_surface(X, Y, hist2d[0])#, linewidth=0, antialiased=False)
Explanation: As our intuition might have suggested, the data generating distribution looks very similar to the structure suggested by the two dimensional images plotted above. There is high probability very near the actual curve $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}$ and low probability as we move away. We imposed the uncertainty via the Gaussian noise term $\epsilon$. However, in real data the uncertainty can be due to the myriad of sources outlined above. In these cases a complex probability distribution isn't an arbitrary choice for representing the data, it becomes necessary [cite Cristopher Bishop 2006].
End of explanation |
6,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSX91
Step1: Q. What happens if there is no return?
2. Scope
In python functions have their own scope (namespace).
Python first looks at the function's namespace first before looking at the global namespace.
Let's use locals() and globals() to see what happens
Step2: 2.1 Variable lifetime
Variables within functions exist only withing their namespaces. Once the function stops, all the variables inside it gets destroyed. For instance, the following won't work.
Step3: 3. Variable Resolution
Python first looks at the function's namespace first before looking at the global namespace.
Step4: If you try and reassign a global variable inside a function, like so
Step5: Q. What would be the value of aString now? For instance, if I did this
Step6: 4. Function Arguments
Step7: Arguments in functions can be classified as
Step8: Other ways of calling
Step9: Q. will these two work?
Step10: Never call args after kwargs
5. Nesting functions
You can nest functions.
Class nesting is somewhat uncommon, but can be done.
Step11: All the namespace conventions apply here.
What would happen if I changed x inside inner()?
Step12: What about global variables?
Step13: Declare global every time the global x needs changing
6. Classes
Define classes with the class keyword
Here's a simple class
Step14: All arg and kwarg conventions apply here
6.1 Overriding class methods
Lets try
Step15: We know the __call__ raises an exception. Python lets you redefine it
Step16: There are many such redefinitions permitted by python. See Python Docs
6.2 Emulating numeric types
A very useful feature in python is the ability to emulate numeric types.
Would this work?
Step17: Let's rewrite this
Step18: Aside
Step19: 7. Functions and Class are Objects
Functions and objects are like anything else in python.
All objects inherit from a base class in python.
For instance,
Step20: It follows that the variable a here is a class.
Step21: This means
Step22: 8. Closures
Remember this example?
Step23: Obviously, this fails. Why? As per variable lifetime rules (see 2.1), foo() has ceased execution, x is destroyed.
So how about this?
Step24: This works. But it shouldn't, because x is local to foo(), when foo() has ceased execution, x must be destroyed. Right?
Turns out, Python supports a feature called function closure. This enables nested inner functions to keep track of their namespaces.
8.1 Aside
Step25: Nested lambda is permitted (idk why you'd use them, still, worth a mention)
Step26: 8.1.1 Sorted
Python's sorted function can sort based on a key argument, key is a lambda function that deterimes how the data is sorted.
Step27: 9. Decorators!
Decorators are callables that take a function as argument, and return a replacement function (with additional functionalities)
Step28: Lets look at memory locations of the functions.
Step29: A common practice is to replace the original function with the decorated function
Step30: Python uses @ to represent foo = outer(foo). The above code can be retwritten as follows
Step31: 9.1 Logging and timing a function
Decorators can be classes, they can take input arguments/keyword args.
Lets build a decorator that logs and times another function | Python Code:
def foo():
return 1
foo()
Explanation: CSX91: Python Tutorial
1. Functions
Fucntions in Python are created using the keyword def
It can return values with return
Let's create a simple function:
End of explanation
aString = 'Global var'
def foo():
a = 'Local var'
print locals()
foo()
print globals()
Explanation: Q. What happens if there is no return?
2. Scope
In python functions have their own scope (namespace).
Python first looks at the function's namespace first before looking at the global namespace.
Let's use locals() and globals() to see what happens:
End of explanation
def foo():
x = 10
foo()
print x
Explanation: 2.1 Variable lifetime
Variables within functions exist only withing their namespaces. Once the function stops, all the variables inside it gets destroyed. For instance, the following won't work.
End of explanation
aString = 'Global var'
def foo():
print aString
foo()
Explanation: 3. Variable Resolution
Python first looks at the function's namespace first before looking at the global namespace.
End of explanation
aString = 'Global var'
def foo():
aString = 'Local var'
print aString
foo()
Explanation: If you try and reassign a global variable inside a function, like so:
End of explanation
aString = 'Global var'
def foo():
global aString # <------ Declared here
aString = 'Local var'
print aString
def bar():
print aString
foo()
bar()
Explanation: Q. What would be the value of aString now? For instance, if I did this:
As we can see, global variables can be accessed (even changed if they are mutable data types) but not (by default) assigned to.
Global variables are very dangerous. So, python wants you to be sure of what you're doing.
If you MUST reassign it. Declare it as global. Like so:
End of explanation
def foo(x):
print locals()
foo(1)
Explanation: 4. Function Arguments: args and kwargs
Python allows us to pass function arguments (duh..)
There arguments are local to the function. For instance:
End of explanation
"Args"
def foo(x,y):
print x+y
"kwargs"
def bar(x=5, y=8):
print x-y
"Both"
def foobar(x,y=100):
print x*y
"Calling with args"
foo(5,12)
"Calling with kwargs"
bar()
"Calling both"
foobar(10)
Explanation: Arguments in functions can be classified as:
Args
kwargs (keyword args)
When calling a function, args are mandatory. kwargs are optional.
End of explanation
"Args"
def foo(x,y):
print x+y
"kwargs"
def bar(x=5, y=8):
print x-y
"Both"
def foobar(x,y=100):
print x*y
"kwargs"
bar(5,8) # kwargs as args (default: x=5, y=8)
bar(5,y=8) # x=5, y=8
"Change the order of kwargs if you want"
bar(y=8, x=5)
"args as kwargs will also work"
foo(x=5, y=12)
Explanation: Other ways of calling:
All the following are legit:
End of explanation
"Args"
def foo(x,y):
print x+y
"kwargs"
def bar(x=5, y=8):
print x-y
"Both"
def foobar(x,y=100):
print x*y
bar(x=9, 7) #1
foo(x=5, 6) #2
Explanation: Q. will these two work?
End of explanation
def outer():
x=1
def inner():
print x
inner()
outer()
Explanation: Never call args after kwargs
5. Nesting functions
You can nest functions.
Class nesting is somewhat uncommon, but can be done.
End of explanation
def outer():
x = 1
def inner():
x = 2
print 'Inner x=%d'%(x)
inner()
return x
print 'Outer x=%d'%outer()
Explanation: All the namespace conventions apply here.
What would happen if I changed x inside inner()?
End of explanation
x = 4
def outer():
global x
x = 1
def inner():
global x
x = 2
print 'Inner x=%d'%(x)
inner()
return x
print 'Outer x=%d'%outer()
print 'Global x=%d'%x
Explanation: What about global variables?
End of explanation
class foo():
def __init__(i, arg1): # self can br replaced by anything.
i.arg1 = arg1
def bar(i, arg2): # Always use self as the first argument
print i.arg1, arg2
FOO = foo(7)
FOO.bar(5)
print FOO.arg1
Explanation: Declare global every time the global x needs changing
6. Classes
Define classes with the class keyword
Here's a simple class
End of explanation
class foo():
def __init__(i, num):
i.num = num
d = foo(2)
d()
Explanation: All arg and kwarg conventions apply here
6.1 Overriding class methods
Lets try:
End of explanation
class foo():
def __init__(i, num):
i.num = num
def __call__(i):
return i.num
d = foo(2)
d()
Explanation: We know the __call__ raises an exception. Python lets you redefine it:
End of explanation
class foo():
def __init__(i, num):
i.num = num
FOO = foo(5)
FOO += 1
Explanation: There are many such redefinitions permitted by python. See Python Docs
6.2 Emulating numeric types
A very useful feature in python is the ability to emulate numeric types.
Would this work?
End of explanation
class foo():
def __init__(i, num):
i.num = num
def __add__(i, new):
i.num += new
return i
def __sub__(i, new):
i.num -= new
return i
FOO = foo(5)
FOO += 1
print FOO.num
FOO -= 4
print FOO.num
Explanation: Let's rewrite this:
End of explanation
class foo():
"Me is foo"
def __init__(i, num):
i.num = num
def __add__(i, new):
i.num += new
return i
def __sub__(i, new):
i.num -= new
return i
def __repr__(i):
return i.__doc__
def __getitem__(i, num):
print "Nothing @ %d"%(num)
FOO = foo(4)
FOO[2]
Explanation: Aside: __repr__, __call__,__getitem__,... are all awesome.
End of explanation
issubclass(int, object)
Explanation: 7. Functions and Class are Objects
Functions and objects are like anything else in python.
All objects inherit from a base class in python.
For instance,
End of explanation
a = 9
dir(a)
Explanation: It follows that the variable a here is a class.
End of explanation
from pdb import set_trace # pdb is quite useful
def add(x,y): return x+y
def sub(x,y): return x-y
def foo(x,y,func=add):
set_trace()
return func(x,y)
foo(7,4,sub)
Explanation: This means:
Functions and Classes can be passed as arguments.
Functions can return other functions/classes.
End of explanation
def foo():
x=1
foo()
print x
Explanation: 8. Closures
Remember this example?
End of explanation
def foo():
x='Outer String'
def bar():
print x
return bar
test = foo()
test()
Explanation: Obviously, this fails. Why? As per variable lifetime rules (see 2.1), foo() has ceased execution, x is destroyed.
So how about this?
End of explanation
def foo(x,y): return x**y
bar = lambda x,y: x**y # <--- Notice no return statements
print foo(4,2)
print bar(4,2)
Explanation: This works. But it shouldn't, because x is local to foo(), when foo() has ceased execution, x must be destroyed. Right?
Turns out, Python supports a feature called function closure. This enables nested inner functions to keep track of their namespaces.
8.1 Aside: lambda functions and sorted
Anonymous functions in python can be defined using the lambda keyword.
The following two are the same:
End of explanation
foo = lambda x: lambda y: x+y
print foo(3)(5)
Explanation: Nested lambda is permitted (idk why you'd use them, still, worth a mention)
End of explanation
student_tuples = [ #(Name, height(cms), weight(kg))
('john', 180, 85),
('doe', 177, 99),
('jane', 169, 69),
]
# Sort based on height
print 'Weight: ', sorted(student_tuples, key=lambda stud: stud[1])
# Sort based on Name
print 'Name: ', sorted(student_tuples, key=lambda stud: stud[0])
# Sort based on BMI
print 'BMI: ', sorted(student_tuples, key=lambda stud: stud[2]*100/stud[1])
Explanation: 8.1.1 Sorted
Python's sorted function can sort based on a key argument, key is a lambda function that deterimes how the data is sorted.
End of explanation
def outer(func):
def inner(*args):
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
return inner
def foo():
"I'm foo"
return 1
print foo()
decorated_foo = outer(foo)
print decorated_foo()
Explanation: 9. Decorators!
Decorators are callables that take a function as argument, and return a replacement function (with additional functionalities)
End of explanation
def outer(func):
def inner(*args):
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
print inner.__doc__, inner
return inner
def foo():
"I'm foo"
return 1
print foo.__name__, foo
decorated_foo = outer(foo)
print decorated_foo.__name__, decorated_foo
Explanation: Lets look at memory locations of the functions.
End of explanation
def outer(func):
def inner():
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
return inner
def foo():
"I'm foo"
return 1
print foo()
foo = outer(foo)
print foo()
Explanation: A common practice is to replace the original function with the decorated function
End of explanation
def outer(func):
def inner():
"Inner"
print 'Decorating...'
ret = func()
ret += 1
return ret
return inner
@outer
def foo():
"I'm foo"
return 1
print foo()
Explanation: Python uses @ to represent foo = outer(foo). The above code can be retwritten as follows:
End of explanation
import time
from pdb import set_trace
def logger(func):
def inner(*args, **kwargs):
print "Arguments were: %s, %s"%(args, kwargs)
return func(*args, **kwargs)
return inner
def timer(func):
def inner(*args, **kwargs):
tb=time.time()
result = func(*args, **kwargs)
ta=time.time()
print "Time taken: %f sec"%(ta-tb)
return result
return inner
@logger
@timer
def foo(a=5, b=2):
return a+b
@logger
@timer
def bar(a=10, b=1):
time.sleep(0.1)
return a-b
if __name__=='__main__': ## <----- Note
foo(2,3)
bar(5,7)
Explanation: 9.1 Logging and timing a function
Decorators can be classes, they can take input arguments/keyword args.
Lets build a decorator that logs and times another function
End of explanation |
6,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AnnVariables
This script runs repeated cross-validation as a search for suitable parameter values for
the ANN and the genetic algorithm.
It has been re-run for all data-sets. The output of some are saved as text lower in the notebook.
Step1: Load data
To load a different data, use the relevant get-method as shown by the commented line.
Step2: Configure some helper methods and starting parameters
Step3: Compare stuff
Here some default parameters are set, together with the range of possible values.
Note that this cell can be re-executed without overwriting the current results
(useful to load a different data-set or changing the range of possible values).
Step4: Define what the best is
The best parameter is the one that results in a longer median survival time (networks are configured to find low-risk groups).
Step5: Check all combinations of values
Not really all combinations. But allow one variable to be changed at a time. Then re-run to see if this affects what the best value is for the rest. Stop at some maximum repcount or in case no variable change was better than what has been found.
Step6: Results from above on multiple data sets
As can be seen above, some values tends to fluctuate constantly (example mutstd). This is interpreted
as no one value having a clear advantage over another. This is taken into account with the values seen below
to determine the final parameters, which are set in helpers.py (get_net method)
Step7: Plot group
Lower is better. | Python Code:
# import stuffs
%matplotlib inline
import numpy as np
import pandas as pd
from pyplotthemes import get_savefig, classictheme as plt
plt.latex = True
Explanation: AnnVariables
This script runs repeated cross-validation as a search for suitable parameter values for
the ANN and the genetic algorithm.
It has been re-run for all data-sets. The output of some are saved as text lower in the notebook.
End of explanation
from datasets import get_nwtco, get_colon, get_lung, get_pbc, get_flchain
#d = get_colon(prints=True, norm_in=True, norm_out=False, training=True)
d = get_nwtco(prints=True, norm_in=True, norm_out=False, training=True)
d = d.astype(float)
durcol = d.columns[0]
eventcol = d.columns[1]
if np.any(d[durcol] < 0):
raise ValueError("Negative times encountered")
print("End time:", d.iloc[:, 0].max())
#d
Explanation: Load data
To load a different data, use the relevant get-method as shown by the commented line.
End of explanation
import ann
from classensemble import ClassEnsemble
def get_net(rows, incols, func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN, mingroup=None,
popsize=100, generations=200, mutchance=0.15, mutstd=1.0, muthalf=0, conchance=0,
crossover=ann.geneticnetwork.CROSSOVER_UNIFORM, crosschance=1.0,
selection=ann.geneticnetwork.SELECTION_TOURNAMENT,
architecture=None):
outcount = 2
if architecture is None:
architecture = [0]
hidden_count = np.sum(architecture)
l = incols + hidden_count + outcount + 1
net = ann.geneticnetwork(incols, hidden_count, outcount)
net.fitness_function = func
if mingroup is None:
mingroup = int(0.25 * rows)
# Be explicit here even though I changed the defaults
net.connection_mutation_chance = conchance
net.activation_mutation_chance = 0
# Some other values
net.crossover_method = crossover
net.crossover_chance = crosschance
net.selection_method = selection
net.population_size = popsize
net.generations = generations
net.weight_mutation_chance = mutchance
net.weight_mutation_factor = mutstd
net.weight_mutation_halfpoint = muthalf
ann.utils.connect_feedforward(net, architecture, hidden_act=net.TANH, out_act=net.SOFTMAX)
#c = net.connections.reshape((l, l))
#c[-outcount:, :(incols + hidden_count)] = 1
#net.connections = c.ravel()
return net
def _netgen(df, netcount, funcs=None, **kwargs):
# Expects (function, mingroup)
if funcs is None:
funcs = [ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN,
ann.geneticnetwork.FITNESS_SURV_KAPLAN_MAX]
rows = df.shape[0]
incols = df.shape[1] - 2
hnets = []
lnets = []
for i in range(netcount):
if i % 2:
n = get_net(rows, incols, funcs[0], **kwargs)
hnets.append(n)
else:
n = get_net(rows, incols, funcs[1], **kwargs)
lnets.append(n)
return hnets, lnets
def _kanngen(df, netcount, **kwargs):
return _netgen(df, netcount, **kwargs)
def _riskgen(df, netcount, **kwargs):
return _netgen(df, netcount,
[ann.geneticnetwork.FITNESS_SURV_RISKGROUP_HIGH,
ann.geneticnetwork.FITNESS_SURV_RISKGROUP_LOW],
**kwargs)
def get_kanngen(netcount, **kwargs):
return lambda df: _kanngen(df, netcount, **kwargs)
#e = ClassEnsemble(netgen=netgen)
#er = ClassEnsemble(netgen=riskgen)
class NetFitter(object):
def __init__(self, func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN, **kwargs):
self.kwargs = kwargs
self.func = func
def fit(self, df, duration_col, event_col):
'''
Same as learn, but instead conforms to the interface defined by
Lifelines and accepts a data frame as the data. Also generates
new networks using self.netgen is it was defined.
'''
#print("dataframe shit", df is None, df.shape)
dsafe = df.copy()
rows = df.shape[0]
incols = df.shape[1] - 2
#print("Getting net...")
self.net = get_net(rows, incols, self.func, **self.kwargs)
# Save columns for prediction later
self.x_cols = df.columns - [duration_col, event_col]
#print("Learning on:", df.shape)
#print("Conn chance:", self.net.connection_mutation_chance)
#print("Connsbefore", self.net.connections.reshape((15, 15)))
self.net.learn(np.array(df[self.x_cols]),
np.array(df[[duration_col, event_col]]))
#print("Conns after", self.net.connections)
#print("Weights after", self.net.weights)
#print("After learning:", df.shape)
def get_log(self, df):
'''
Returns a truncated training log
'''
return pd.Series(self.net.log[:, 0])
def predict_classes(self, df):
'''
Predict the classes of an entire DateFrame.
Returns a DataFrame.
'''
labels = []
for idx, tin in enumerate(df[self.x_cols].values):
res = self.net.predict_class(tin)
if res == 0 and self.func == ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN:
labels.append('high')
elif res == 0:
labels.append('low')
else:
labels.append('mid')
retval = pd.DataFrame(index=df.index, columns=['group'])
retval.iloc[:, 0] = labels
return retval
net = get_net(2, 2)
from stats import k_fold_cross_validation
from lifelines.estimation import KaplanMeierFitter, median_survival_times
def score(T_actual, labels, E_actual):
'''
Return a score based on grouping
'''
scores = []
labels = labels.ravel()
for g in ['high', 'mid', 'low']:
members = labels == g
if np.sum(members) > 0:
kmf = KaplanMeierFitter()
kmf.fit(T_actual[members],
E_actual[members],
label='{}'.format(g))
# Last survival time
if np.sum(E_actual[members]) > 0:
lasttime = np.max(T_actual[members][E_actual[members] == 1])
else:
lasttime = np.nan
# End survival rate, median survival time, member count, last event
subscore = (kmf.survival_function_.iloc[-1, 0],
median_survival_times(kmf.survival_function_),
np.sum(members),
lasttime)
else:
# Rpart might fail in this respect
subscore = (np.nan, np.nan, np.sum(members), np.nan)
scores.append(subscore)
return scores
# Use to get training data score
def logscore(T_actual, log, E_actual):
# Return last value in the log
return log[-1]
# Use for validation
def high_median_time(T_actual, labels, E_actual):
members = (labels == 'high').ravel()
if np.sum(members) > 0:
kmf = KaplanMeierFitter()
kmf.fit(T_actual[members],
E_actual[members])
return median_survival_times(kmf.survival_function_)
else:
return np.nan
Explanation: Configure some helper methods and starting parameters
End of explanation
default_values = dict(crossover = 2, # (twopoint)
mutchance = 0.3,
generations = 300,
crosschance = 0.75,
selection = 0, # (geometric)
muthalf = 0,
conchance = 0.0,
popsize = 100,
architecture = [4],
mutstd = 1.0)
possible_values = dict(
popsize=[25, 50, 100, 200, 300],
#generations=[1, 10, 50, 100, 500],
#conchance=[0, 0.01, 0.05, 0.1, 0.25, 0.5, 1.0],
mutchance=[0.15, 0.30, 0.6, 0.9],
mutstd=[0.5, 1.0, 2.0, 3.0],
crosschance=[0.25, 0.5, 0.75, 1.0],
crossover=[ann.geneticnetwork.CROSSOVER_UNIFORM,
ann.geneticnetwork.CROSSOVER_ONEPOINT,
ann.geneticnetwork.CROSSOVER_TWOPOINT],
selection=[ann.geneticnetwork.SELECTION_TOURNAMENT,
ann.geneticnetwork.SELECTION_GEOMETRIC,
ann.geneticnetwork.SELECTION_ROULETTE],
architecture=[[0],
[4],
[4, 4]])
# Update values as we go along
try:
current_values
except NameError:
current_values = default_values.copy()
print(current_values)
Explanation: Compare stuff
Here some default parameters are set, together with the range of possible values.
Note that this cell can be re-executed without overwriting the current results
(useful to load a different data-set or changing the range of possible values).
End of explanation
def get_winning_value(values, repeat_results, current_val):
winner = 0, 0
for i, x in enumerate(values):
mres = np.median(np.array(repeat_results)[:, i, :])
# For stability
if mres > winner[0] or (x == current_val and mres >= winner[0]):
winner = mres, x
return winner[1]
Explanation: Define what the best is
The best parameter is the one that results in a longer median survival time (networks are configured to find low-risk groups).
End of explanation
print("Starting values")
for k, v in current_values.items():
print(" ", k, "=", v)
# Repeat all variables
n = 10
k = 2
repcount = 0
stable = False
while repcount < 4 and not stable:
repcount += 1
print(repcount)
stable = True
for key, values in sorted(possible_values.items()):
print(key)
models = []
for x in values:
kwargs = current_values.copy()
kwargs[key] = x
model = NetFitter(func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN,
**kwargs)
model.var_label = key
model.var_value = x
models.append(model)
# Train and test
repeat_results = []
for rep in range(n):
result = k_fold_cross_validation(models, d, durcol, eventcol,
k=k,
evaluation_measure=logscore,
predictor='get_log')
repeat_results.append(result)
# See who won
winval = get_winning_value(values, repeat_results, current_values[key])
if winval != current_values[key]:
stable = False
print(key, current_values[key], "->", winval)
current_values[key] = winval
print("\nValues optimized after", repcount, "iterations")
for k, v in current_values.items():
print(" ", k, "=", v)
# Just print results from above
print("\nValues optimized after", repcount, "iterations")
for k, v in current_values.items():
if k == 'selection':
if v == ann.geneticnetwork.SELECTION_GEOMETRIC:
name = 'geometric'
elif v == ann.geneticnetwork.SELECTION_ROULETTE:
name = 'roulette'
else:
name = 'tournament'
elif k == 'crossover':
if v == ann.geneticnetwork.CROSSOVER_ONEPOINT:
name = 'onepoint'
elif v == ann.geneticnetwork.CROSSOVER_TWOPOINT:
name = 'twopoint'
else:
name = 'uniform'
else:
name = None
if name is None:
print(" ", k, "=", v)
else:
print(" ", k, "=", v, "({})".format(name))
Explanation: Check all combinations of values
Not really all combinations. But allow one variable to be changed at a time. Then re-run to see if this affects what the best value is for the rest. Stop at some maximum repcount or in case no variable change was better than what has been found.
End of explanation
#netcount = 6
models = []
# Try different epoch counts
for x in range(1):
#e = ClassEnsemble(netgen=get_kanngen(netcount, generations=x))
e = NetFitter(func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN,
**current_values)
#, mingroup=int(0.25*d.shape[0]))
e.var_label = 'Current values'
e.var_value = '1 net'
models.append(e)
n = 10
k = 3
# Repeated cross-validation
repeat_results = []
for rep in range(n):
print("n =", rep)
# Training
#result = k_fold_cross_validation(models, d, durcol, eventcol, k=k, evaluation_measure=logscore, predictor='get_log')
# Validation
result = k_fold_cross_validation(models, d, durcol, eventcol, k=k,
evaluation_measure=high_median_time,
predictor='predict_classes')
repeat_results.append(result)
#repeat_results
Explanation: Results from above on multiple data sets
As can be seen above, some values tends to fluctuate constantly (example mutstd). This is interpreted
as no one value having a clear advantage over another. This is taken into account with the values seen below
to determine the final parameters, which are set in helpers.py (get_net method):
# Number of neurons in hidden layer, and output layer
hidden_count = 4
outcount = 2
# Can only mutate weights, not connections or activation functions
net.connection_mutation_chance = 0.0
net.activation_mutation_chance = 0
# Training parameters used for all experiments
net.crossover_method = net.CROSSOVER_TWOPOINT
net.selection_method = net.SELECTION_TOURNAMENT
net.population_size = 200
net.generations = 1000
net.weight_mutation_chance = 0.5
net.weight_mutation_factor = 1.5
net.crossover_chance = 0.75
Generations is simply set to be well beyond the convergence point. Other values are set by
majority vote (crossover method, selection method), or by rough averaging (population size, mutation factors, crossover chance).
lung (training)
Values optimized after 10 iterations
- crossover = 2 (twopoint)
- mutchance = 0.3
- generations = 300
- crosschance = 0.75
- selection = 0 (geometric)
- muthalf = 0
- conchance = 0.0
- popsize = 100
- architecture = [4]
- mutstd = 1.0
colon (training)
Values optimized after 4 iterations
- crossover = 2 (twopoint)
- mutchance = 0.15
- generations = 300
- crosschance = 0.75
- selection = 2 (tournament)
- muthalf = 0
- conchance = 0.0
- popsize = 200
- architecture = [4]
- mutstd = 2.0
nwtco (training)
Values optimized after 4 iterations
- crossover = 1 (onepoint)
- mutchance = 0.9
- generations = 300
- crosschance = 0.5
- selection = 2 (tournament)
- muthalf = 0
- conchance = 0.0
- popsize = 300
- architecture = [4, 4]
- mutstd = 2.0
Train a network with the parameters on high-risk group
This is just to check what the result is. It has no bearing on the choice of parameters above
End of explanation
def plot_score(repeat_results, models):
boxes = []
labels = []
var_label = None
# Makes no sense for low here for many datasets...
for i, m in enumerate(models):
labels.append(str(m.var_value))
var_label = m.var_label
vals = []
for result in repeat_results:
vals.extend(result[i])
boxes.append(vals)
plt.figure()
plt.boxplot(boxes, labels=labels, vert=False, colors=plt.colors[:len(models)])
plt.ylabel(var_label)
plt.title("Cross-validation: n={} k={}".format(n, k))
plt.xlabel("Median survival time (max={:.0f})".format(d[durcol].max()))
#plt.gca().set_xscale('log')
plot_score(repeat_results, models)
Explanation: Plot group
Lower is better.
End of explanation |
6,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flowers retraining example
이미 학습된 잘 알려진 모델을 이용하여 꽃의 종류를 예측하는 예제입니다.
기존의 Minst 예제와는 거의 차이점이 없습니다. 단지 2가지만 다를 뿐입니다.
숫자이미지 대신에 꽃이미지이름으로 분류되어 있는 folder를 dataset으로 이용한다.
이미 잘 짜여진 Neural model과 사전학습된(pretrained) parameter를 사용한다.
classificaion 숫자를 조정한 새로운 Network로 재구성한다.
pytorch의 imageFolder라는 dataset클래스를 사용하였습니다.
python
traindir = './flower_photos'
batch_size = 8
train_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True)
Microsoft에서 발표한 resnet152를 사용합니다.
python
model = torchvision.models.resnet152(pretrained=True)
새로운 Network로 재구성합니다.
```python
don't update model parameters
for param in model.parameters()
Step1: 1. 입력DataLoader 설정
train 데이터로 loader를 지정 (dataset은 imagefolder, batch 사이즈 32, shuffle를 실행)
test 데이터로 loader를 지정 (dataset은 imagefoder, batch 사이즈 32,shuffle를 실행)
Step2: 2. 사전 설정
model
여기서 약간 특별한 처리를 해주어야 합니다.
1. resnet152는 pretrained된 것은 분류개수가 1000개이라서, flower폴더에 있는 5개로 분류개수를 가지도록 재구성합니다.
2. 마지막 parameter만를 update할 수 있도록 나머지 layer는 requires_grad를 False로 합니다.
loss
opimizer
그리고 최적화에는 재구성한 마지막 layer만을 update하도록 설정합니다.
Step3: 3. Trainning loop
* (입력 생성)
* model 생성
* loss 생성
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
```
Step4: 4. Predict & Evaluate | Python Code:
!if [ ! -d "/tmp/flower_photos" ]; then curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C /tmp ;rm /tmp/flower_photos/LICENSE.txt; fi
%matplotlib inline
Explanation: Flowers retraining example
이미 학습된 잘 알려진 모델을 이용하여 꽃의 종류를 예측하는 예제입니다.
기존의 Minst 예제와는 거의 차이점이 없습니다. 단지 2가지만 다를 뿐입니다.
숫자이미지 대신에 꽃이미지이름으로 분류되어 있는 folder를 dataset으로 이용한다.
이미 잘 짜여진 Neural model과 사전학습된(pretrained) parameter를 사용한다.
classificaion 숫자를 조정한 새로운 Network로 재구성한다.
pytorch의 imageFolder라는 dataset클래스를 사용하였습니다.
python
traindir = './flower_photos'
batch_size = 8
train_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True)
We use resnet152, published by Microsoft.
python
model = torchvision.models.resnet152(pretrained=True)
Rebuild it as a new network.
```python
# don't update model parameters
for param in model.parameters():
    param.requires_grad = False
# modify the last fully connected layer
model.fc = nn.Linear(model.fc.in_features, cls_num)
```
First, run the command to download and unpack the flower image archive (it is extracted under /tmp).
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
is_cuda = torch.cuda.is_available() # True if CUDA is available
traindir = '/tmp/flower_photos'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
batch_size = 256
train_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True,
num_workers=4)
cls_num = len(datasets.folder.find_classes(traindir)[0])
test_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True,
num_workers=1)
Explanation: 1. Input DataLoader setup
Configure a loader for the training data (ImageFolder dataset, batch size 32, shuffling enabled)
Configure a loader for the test data (ImageFolder dataset, batch size 32, shuffling enabled)
End of explanation
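As a quick sanity check (a small sketch; it assumes the archive was unpacked to traindir as above), ImageFolder derives the class labels directly from the sub-folder names:

```python
from torchvision import datasets

# ImageFolder assigns one integer label per sub-folder of traindir
check_ds = datasets.ImageFolder(traindir)
print(check_ds.classes)       # e.g. ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
print(check_ds.class_to_idx)  # folder name -> label index used by the loaders above
```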
model = torchvision.models.resnet152(pretrained = True)
### don't update model parameters
for param in model.parameters() :
param.requires_grad = False
#modify last fully connected layter
model.fc = nn.Linear(model.fc.in_features, cls_num)
fc_parameters = [
{'params': model.fc.parameters()},
]
optimizer = torch.optim.Adam(fc_parameters, lr=1e-4, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()
if is_cuda : model.cuda(), loss_fn.cuda()
Explanation: 2. Preliminary setup
model
Some special handling is needed here.
1. The pretrained resnet152 has 1000 output classes, so it is rebuilt to output the 5 classes found in the flower folder.
2. So that only the last layer's parameters are updated, requires_grad is set to False for all the other layers.
loss
optimizer
The optimizer is set up to update only the reconstructed last layer.
End of explanation
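To confirm that the freeze worked (a small sketch reusing the model built above), count how many parameters still require gradients; only the new fc layer should remain trainable:

```python
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print('trainable parameters: {} / {}'.format(trainable, total))  # only model.fc weights and bias
```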
# training
model.train()
train_loss = []
train_accu = []
i = 0
for epoch in range(35):
for image, target in train_loader:
image, target = Variable(image.float()), Variable(target) # set up the input image and target
if is_cuda : image, target = image.cuda(), target.cuda()
output = model(image) # compute the model output
loss = loss_fn(output, target) # compute the loss
optimizer.zero_grad() # zero_grad
loss.backward() # calc backward grad
optimizer.step() # update parameter
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
if i % 300 == 0:
print(i, loss.data[0])
i += 1
plt.plot(train_accu)
plt.plot(train_loss)
Explanation: 3. Training loop
* (generate input)
* compute the model output
* compute the loss
* zero_grad
* backpropagation
* optimizer step (update model parameters)
```
End of explanation
model.eval()
correct = 0
for image, target in test_loader:
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
Explanation: 4. Predict & Evaluate
End of explanation |
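For a single image, inference with the fine-tuned model can look like the sketch below (the file path is only a placeholder, and the preprocessing simply reuses the test-loader transforms):

```python
from PIL import Image

tf = transforms.Compose([
    transforms.RandomSizedCrop(224),
    transforms.ToTensor(),
    normalize])
img = tf(Image.open('/tmp/flower_photos/roses/example.jpg'))  # placeholder path
model.eval()
batch = Variable(img.unsqueeze(0), volatile=True)
if is_cuda:
    batch = batch.cuda()
print(model(batch).data.max(1)[1])  # predicted class index; map back with class_to_idx
```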
6,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
07 - Model Deployment
by Alejandro Correa Bahnsen & Iván Torroledo
version 1.2, Feb 2018
Part of the class Machine Learning for Risk Management
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Agenda
Step1: Creating features
Step2: Contain any of the following
Step3: Length of the url
Length of domain
is IP?
Number of .com
Step4: Create Model
Step5: Save model
Step6: Part 2
Step7: Part 3
Step8: Create api
Step9: Load model and create a function that predicts a URL
Step10: Run API | Python Code:
import pandas as pd
import zipfile
with zipfile.ZipFile('../datasets/model_deployment/phishing.csv.zip', 'r') as z:
f = z.open('phishing.csv')
data = pd.read_csv(f, index_col=False)
data.head()
data.tail()
data.phishing.value_counts()
Explanation: 07 - Model Deployment
by Alejandro Correa Bahnsen & Iván Torroledo
version 1.2, Feb 2018
Part of the class Machine Learning for Risk Management
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Agenda:
Creating and saving a model
Running the model in batch
Exposing the model as an API
Part 1: Phishing Detection
Phishing, by definition, is the act of defrauding an online user in order to obtain personal information by posing as a trustworthy institution or entity. Users usually have a hard time differentiating between legitimate and malicious sites because they are made to look exactly the same. Therefore, there is a need to create better tools to combat attackers.
End of explanation
data.url[data.phishing==1].sample(50, random_state=1).tolist()
Explanation: Creating features
End of explanation
keywords = ['https', 'login', '.php', '.html', '@', 'sign']
for keyword in keywords:
data['keyword_' + keyword] = data.url.str.contains(keyword).astype(int)
Explanation: Contain any of the following:
* https
* login
* .php
* .html
* @
* sign
* ?
End of explanation
data['lenght'] = data.url.str.len() - 2
domain = data.url.str.split('/', expand=True).iloc[:, 2]
data['lenght_domain'] = domain.str.len()
domain.head(12)
data['isIP'] = (domain.str.replace('.', '') * 1).str.isnumeric().astype(int)
data['count_com'] = data.url.str.count('com')
data.sample(15, random_state=4)
Explanation: Length of the url
Length of domain
is IP?
Number of .com
End of explanation
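To see what these transformations produce for a single string (a small illustrative sketch; the URL below is made up), the same operations can be applied to a one-row frame:

```python
example = pd.DataFrame({'url': ['"http://198.51.100.7/login.php"']})  # made-up URL
for keyword in ['https', 'login', '.php', '.html', '@', 'sign']:
    example['keyword_' + keyword] = example.url.str.contains(keyword).astype(int)
example['lenght'] = example.url.str.len() - 2              # minus the surrounding quotes
dom = example.url.str.split('/', expand=True).iloc[:, 2]   # host part of the URL
example['lenght_domain'] = dom.str.len()
example['isIP'] = (dom.str.replace('.', '') * 1).str.isnumeric().astype(int)
example['count_com'] = example.url.str.count('com')
print(example.T)
```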
X = data.drop(['url', 'phishing'], axis=1)
y = data.phishing
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
clf = RandomForestClassifier(n_jobs=-1, n_estimators=100)
cross_val_score(clf, X, y, cv=10)
clf.fit(X, y)
Explanation: Create Model
End of explanation
from sklearn.externals import joblib
joblib.dump(clf, '../datasets/model_deployment/07_phishing_clf.pkl', compress=3)
Explanation: Save model
End of explanation
from m07_model_deployment import predict_proba
predict_proba('http://www.vipturismolondres.com/com.br/?atendimento=Cliente&/LgSgkszm64/B8aNzHa8Aj.php')
Explanation: Part 2: Model in batch
See m07_model_deployment.py
End of explanation
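The module m07_model_deployment.py is not reproduced in this notebook; a plausible sketch of its predict_proba, mirroring the feature engineering above and the API code further down (treat it as an assumption rather than the actual file), is:

```python
import pandas as pd
from sklearn.externals import joblib

def predict_proba(url):
    clf = joblib.load('../datasets/model_deployment/07_phishing_clf.pkl')
    url_ = pd.DataFrame([url], columns=['url'])
    for keyword in ['https', 'login', '.php', '.html', '@', 'sign']:
        url_['keyword_' + keyword] = url_.url.str.contains(keyword).astype(int)
    url_['lenght'] = url_.url.str.len() - 2
    domain = url_.url.str.split('/', expand=True).iloc[:, 2]
    url_['lenght_domain'] = domain.str.len()
    url_['isIP'] = (domain.str.replace('.', '') * 1).str.isnumeric().astype(int)
    url_['count_com'] = url_.url.str.count('com')
    return clf.predict_proba(url_.drop('url', axis=1))[0, 1]  # probability of phishing
```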
from flask import Flask
from flask_restplus import Api, Resource, fields
from sklearn.externals import joblib
import pandas as pd
Explanation: Part 3: API
Flask is considered more Pythonic than Django because Flask web application code is in most cases more explicit. Flask is easy to get started with as a beginner because there is little boilerplate code for getting a simple app up and running.
First we need to install some libraries
pip install flask-restplus
Load Flask
End of explanation
app = Flask(__name__)
api = Api(
app,
version='1.0',
title='Phishing Prediction API',
description='Phishing Prediction API')
ns = api.namespace('predict',
description='Phishing Classifier')
parser = api.parser()
parser.add_argument(
'URL',
type=str,
required=True,
help='URL to be analyzed',
location='args')
resource_fields = api.model('Resource', {
'result': fields.String,
})
Explanation: Create api
End of explanation
clf = joblib.load('../datasets/model_deployment/07_phishing_clf.pkl')
@ns.route('/')
class PhishingApi(Resource):
@api.doc(parser=parser)
@api.marshal_with(resource_fields)
def get(self):
args = parser.parse_args()
result = self.predict_proba(args)
return result, 200
def predict_proba(self, args):
url = args['URL']
url_ = pd.DataFrame([url], columns=['url'])
# Create features
keywords = ['https', 'login', '.php', '.html', '@', 'sign']
for keyword in keywords:
url_['keyword_' + keyword] = url_.url.str.contains(keyword).astype(int)
url_['lenght'] = url_.url.str.len() - 2
domain = url_.url.str.split('/', expand=True).iloc[:, 2]
url_['lenght_domain'] = domain.str.len()
url_['isIP'] = (url_.url.str.replace('.', '') * 1).str.isnumeric().astype(int)
url_['count_com'] = url_.url.str.count('com')
# Make prediction
p1 = clf.predict_proba(url_.drop('url', axis=1))[0,1]
print('url=', url,'| p1=', p1)
return {
"result": p1
}
Explanation: Load model and create a function that predicts a URL
End of explanation
app.run(debug=True, use_reloader=False, host='0.0.0.0', port=5000)
Explanation: Run API
End of explanation |
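Once the server is up, the endpoint can be exercised from another process; a quick sketch (assuming the requests package is installed and the app is reachable on localhost:5000 as configured above):

```python
import requests

resp = requests.get('http://localhost:5000/predict/',
                    params={'URL': 'http://example.com/login.php'})  # illustrative URL
print(resp.json())  # e.g. {'result': '0.7'}, the phishing probability marshalled as a string
```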
6,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'></a>
Complex vibration modes
Complex vibration modes arise in experimental research and numerical simulations when non proportional damping is adopted. In such cases a state space formulation of the second order differential dynamic equilibrium equation is the preferred way to address the problem.
This notebook is inspired by one of Pete Avitabile's Modal Space articles, namely the one discussing the difference between complex modes and real normal modes. Additional information about state space formulation in structural dynamics can be found for example here.
Table of contents
Preamble
Dynamic equilibrium equation
State space formulation
Dynamic system setup
Undamped system
Proportionally damped system
Non proportionally damped system
Conclusions
Odds and ends
Preamble
We will start by setting up the computational environment for this notebook. Furthermore, we will need numpy and scipy for the numerical simulations and matplotlib for the plots
Step1: We will also need a couple of specific modules and a little "IPython magic" to show the plots
Step2: Back to top
Dynamic equilibrium equation
In structural dynamics the second order differential dynamic equilibrium equation can be written in terms of generalized coordinates (d[isplacement]) and their first (v[elocity]) and second (a[cceleration]) time derivatives
Step3: Let us perform the eigenanalysis of the (undamped) second order differential dynamic equilibrium equation for later comparison of results
Step4: The angular frequencies are computed as the square root of the eigenvalues
Step5: The modal vectors, the columns of the modal matrix, have unit norm
Step6: Back to top
Undamped system
In the undamped system, the damping matrix is all zeros
Step7: The system matrix is the following
Step8: Performing the eigenanalysis on this matrix yields the following complex valued results
Step9: As we can see, the eigenvalues come in complex conjugate pairs. Therefore we can take for instance only the ones in the upper half-plane
Step10: In this case, since damping is zero, the real part of the complex eigenvalues is also zero (apart from round-off errors) and the imaginary part is equal to the angular frequency computed previously for the dynamic system
Step11: The columns of the modal matrix, the modal vectors, also come in conjugate pairs. Each vector has unit norm as in the dynamic system
Step12: Moreover, we can check that the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices
Step13: To help visualize the complex valued modal vectors we will do a polar plot of the corresponding amplitudes and angles
Step14: Back to top
Proportionally damped system
In a proportionally damped system, the damping matrix is proportional to the mass and stiffness matrices
Step15: This damping matrix is orthogonal because the mass and stiffness matrices are also orthogonal
Step16: The system matrix is the following
Step17: The eigenanalysis yields the eigenvalues and eigenvectors
Step18: As we can see, the eigenvalues come in complex conjugate pairs. Let us take only the ones in the upper half-plane
Step19: These complex eigenvalues can be decomposed into angular frequency and damping coefficient
Step20: The columns of the modal matrix, the modal vectors, also come in conjugate pairs, each vector having unit norm
Step21: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices
Step22: We will visualize again the complex valued modal vectors with a polar plot of the corresponding amplitudes and angles
Step23: Back to top
Non proportionally damped system
In non proportionally damped systems the damping matrix is not proportional to either the mass matrix or the stiffness matrix. Let us consider the following damping matrix
Step24: Non proportional damping means that the damping matrix is no longer orthogonal
Step25: The system matrix is the following
Step26: The eigenanalysis yields the eigenvalues and eigenvectors
Step27: As we can see, the eigenvalues come in complex conjugate pairs. Again, let us take only the ones in the upper half-plane
Step28: These complex eigenvalues can be decomposed into angular frequency and damping coefficient much like in the proportional damping case
Step29: Again, the columns of the modal matrix, the modal vectors, come in conjugate pairs, and each vector has unit norm
Step30: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices
Step31: Once more we will visualize the complex valued modal vectors through a polar plot of the corresponding amplitudes and angles | Python Code:
import sys
import numpy as np
import scipy as sp
import matplotlib as mpl
print('System: {}'.format(sys.version))
print('numpy version: {}'.format(np.__version__))
print('scipy version: {}'.format(sp.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
Explanation: <a id='top'></a>
Complex vibration modes
Complex vibration modes arise in experimental research and numerical simulations when non proportional damping is adopted. In such cases a state space formulation of the second order differential dynamic equilibrium equation is the preferred way to address the problem.
This notebook is inspired by one of Pete Avitabile's Modal Space articles, namely the one discussing the difference between complex modes and real normal modes. Additional information about state space formulation in structural dynamics can be found for example here.
Table of contents
Preamble
Dynamic equilibrium equation
State space formulation
Dynamic system setup
Undamped system
Proportionally damped system
Non proportionally damped system
Conclusions
Odds and ends
Preamble
We will start by setting up the computational environment for this notebook. Furthermore, we will need numpy and scipy for the numerical simulations and matplotlib for the plots:
End of explanation
from numpy import linalg as LA
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: We will also need a couple of specific modules and a little "IPython magic" to show the plots:
End of explanation
MM = np.matrix(np.diag([2,3]))
print(MM)
KK = np.matrix([[2, -1],[-1, 1]])
print(KK)
Explanation: Back to top
Dynamic equilibrium equation
In structural dynamics the second order differential dynamic equilibrium equation can be written in terms of generalized coordinates (d[isplacement]) and their first (v[elocity]) and second (a[cceleration]) time derivatives:
\begin{equation}
\mathbf{M} \times \mathbf{a(t)} + \mathbf{C} \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)}
\end{equation}
where:
$\mathbf{M}$ is the mass matrix
$\mathbf{C}$ is the damping matrix
$\mathbf{K}$ is the stiffness matrix
$\mathbf{a(t)}$ is the acceleration vector
$\mathbf{v(t)}$ is the velocity vector
$\mathbf{d(t)}$ is the displacement vector
$\mathbf{F(t)}$ is the force input vector
All these matrices are of size $NDOF \times NDOF$, where $NDOF$ is the number of generalized degrees of freedom of the dynamic system.
Back to top
State space formulation
In a state space formulation the second order differential dynamic equilibrium equation is turned into a system of first order differential dynamic equilibrium equations:
\begin{equation}
\begin{matrix}
\mathbf{\dot{x}(t)} = \mathbf{A} \cdot \mathbf{x(t)} + \mathbf{B} \cdot \mathbf{u(t)} \\
\mathbf{y(t)} = \mathbf{C} \cdot \mathbf{x(t)} + \mathbf{D} \cdot \mathbf{u(t)}
\end{matrix}
\end{equation}
where
$\mathbf{A}$ is the system matrix
$\mathbf{B}$ is the input matrix
$\mathbf{C}$ is the output matrix
$\mathbf{D}$ is the feedthrough matrix
$\mathbf{x(t)}$ is the state vector
$\mathbf{y(t)}$ is the output vector
$\mathbf{u(t)}$ is the input vector
The state vector, of size $2 \times NDOF$ by $1$, has the following form:
\begin{equation}
\mathbf{x(t)} = \left[ \begin{matrix}
\mathbf{u(t)} \
\mathbf{\dot{u}(t)}
\end{matrix} \right]
\end{equation}
The system matrix, of size $2 \times NDOF$ by $2 \times NDOF$, is built using the M, C and K matrices:
\begin{equation}
\mathbf{A} = \left[ \begin{matrix}
\mathbf{0} & \mathbf{I} \\
-\mathbf{M}^{-1} \cdot \mathbf{K} & -\mathbf{M}^{-1} \cdot \mathbf{C}
\end{matrix} \right]
\end{equation}
The loading matrix, of size $2 \times NDOF$ by $1$, is composed of 0's and 1's.
Back to top
Dynamic system setup
In this example we will use the following mass and stiffness matrices:
End of explanation
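Since the same block assembly is repeated below for the undamped, proportionally damped and non proportionally damped cases, one could wrap it in a small helper (just a sketch; the notebook builds the blocks inline):

```python
def system_matrix(M, C, K):
    # A = [[0, I], [-M^-1 K, -M^-1 C]] as in the state space formulation above
    n = M.shape[0]
    return np.bmat([[np.zeros((n, n)), np.identity(n)],
                    [LA.solve(-M, K), LA.solve(-M, C)]])
```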
W2, F1 = LA.eig(LA.solve(MM,KK)) # eigenanalysis
ix = np.argsort(np.absolute(W2)) # sort eigenvalues in ascending order
W2 = W2[ix] # sorted eigenvalues
F1 = F1[:,ix] # sorted eigenvectors
print(np.round_(W2, 4))
print(np.round_(F1, 4))
Explanation: Let us perform the eigenanalysis of the (undamped) second order differential dynamic equilibrium equation for later comparison of results:
End of explanation
print(np.sqrt(W2))
Explanation: The angular frequencies are computed as the square root of the eigenvalues:
End of explanation
print(LA.norm(F1, axis=0))
Explanation: The modal vectors, the columns of the modal matrix, have unit norm:
End of explanation
C0 = np.matrix(np.zeros_like(MM))
print(C0)
Explanation: Back to top
Undamped system
In the undamped system, the damping matrix is all zeros:
End of explanation
A = np.bmat([[np.zeros_like(MM), np.identity(MM.shape[0])], [LA.solve(-MM,KK), LA.solve(-MM,C0)]])
print(A)
Explanation: The system matrix is the following:
End of explanation
w0, v0 = LA.eig(A)
ix = np.argsort(np.absolute(w0))
w0 = w0[ix]
v0 = v0[:,ix]
print(np.round_(w0, 4))
print(np.round_(v0, 4))
Explanation: Performing the eigenanalysis on this matrix yields the following complex valued results:
End of explanation
print(np.round_(w0[[0,2]], 4))
Explanation: As we can see, the eigenvalues come in complex conjugate pairs. Therefore we can take for instance only the ones in the upper half-plane:
End of explanation
print(w0[[0,2]].imag)
Explanation: In this case, since damping is zero, the real part of the complex eigenvalues is also zero (apart from round-off errors) and the imaginary part is equal to the angular frequency computed previously for the dynamic system:
End of explanation
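That equality can be verified numerically with the arrays already in memory (a one-line sketch):

```python
# imaginary parts of the state space eigenvalues vs. square roots of the second order eigenvalues
print(np.allclose(w0[[0, 2]].imag, np.sqrt(W2)))  # expected: True, up to round-off
```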
print(LA.norm(v0[:,[0,2]], axis=0))
Explanation: The columns of the modal matrix, the modal vectors, also come in conjugate pairs. Each vector has unit norm as in the dynamic system:
End of explanation
AA = v0[:2,[0,2]]
AB = AA.conjugate()
BA = np.multiply(AA,w0[[0,2]])
BB = BA.conjugate()
v0_new = np.bmat([[AA, AB], [BA, BB]])
print(np.round_(v0_new[:,[0,2,1,3]], 4))
Explanation: Moreover, we can check that the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices:
End of explanation
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(v0[dof,2*mode])])
t = np.array([0, np.angle(v0[dof,2*mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
Explanation: To help visualize the complex valued modal vectors we will do a polar plot of the corresponding amplitudes and angles:
End of explanation
C1 = 0.1*MM+0.1*KK
print(C1)
Explanation: Back to top
Proportionally damped system
In a proportionally damped system, the damping matrix is proportional to the mass and stiffness matrices:
\begin{equation}
\mathbf{C} = \alpha \times \mathbf{M} + \beta \times \mathbf{K}
\end{equation}
Let us assume $\alpha$ to be 0.1 and $\beta$ to be 0.1. This yields the following damping matrix:
End of explanation
print(np.round_(F1.T*C1*F1, 4))
Explanation: This damping matrix is orthogonal because the mass and stiffness matrices are also orthogonal:
End of explanation
A = np.bmat([[np.zeros_like(MM), np.identity(MM.shape[0])], [LA.solve(-MM,KK), LA.solve(-MM,C1)]])
print(A)
Explanation: The system matrix is the following:
End of explanation
w1, v1 = LA.eig(A)
ix = np.argsort(np.absolute(w1))
w1 = w1[ix]
v1 = v1[:,ix]
print(np.round_(w1, 4))
print(np.round_(v1, 4))
Explanation: The eigenanalysis yields the eigenvalues and eigenvectors:
End of explanation
print(np.round_(w1[[0,2]], 4))
Explanation: As we can see, the eigenvalues come in complex conjugate pairs. Let us take only the ones in the upper half-plane:
End of explanation
zw = -w1.real # damping coefficient times angular frequency
wD = w1.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
Explanation: These complex eigenvalues can be decomposed into angular frequency and damping coefficient:
End of explanation
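For reference, the ratio manipulation above is equivalent to the standard underdamped-mode relation (a short aside):
\begin{equation}
\lambda = -\zeta_n \omega_n + i \, \omega_n \sqrt{1 - \zeta_n^2}
\quad \Rightarrow \quad
\omega_n = \left| \lambda \right| , \qquad
\zeta_n = -\frac{\operatorname{Re}(\lambda)}{\left| \lambda \right|}
\end{equation}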
print(LA.norm(v1[:,[0,2]], axis=0))
Explanation: The columns of the modal matrix, the modal vectors, also come in conjugate pairs, each vector having unit norm:
End of explanation
AA = v1[:2,[0,2]]
AB = AA.conjugate()
BA = np.multiply(AA,w1[[0,2]])
BB = BA.conjugate()
v1_new = np.bmat([[AA, AB], [BA, BB]])
print(np.round_(v1_new[:,[0,2,1,3]], 4))
Explanation: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices:
End of explanation
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(v1[dof,2*mode])])
t = np.array([0, np.angle(v1[dof,2*mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
Explanation: We will visualize again the complex valued modal vectors with a polar plot of the corresponding amplitudes and angles:
End of explanation
C2 = np.matrix([[0.4, -0.1],[-0.1, 0.1]])
print(C2)
Explanation: Back to top
Non proportionally damped system
In non proportionally damped systems the damping matrix is not proportional to either the mass matrix or the stiffness matrix. Let us consider the following damping matrix:
End of explanation
print(np.round_(F1.T*C2*F1, 4))
Explanation: Non proportional damping means that the damping matrix is no longer orthogonal:
End of explanation
A = np.bmat([[np.zeros_like(MM), np.identity(MM.shape[0])], [LA.solve(-MM,KK), LA.solve(-MM,C2)]])
print(A)
Explanation: The system matrix is the following:
End of explanation
w2, v2 = LA.eig(A)
ix = np.argsort(np.absolute(w2))
w2 = w2[ix]
v2 = v2[:,ix]
print(np.round_(w2, 4))
print(np.round_(v2, 4))
Explanation: The eigenanalysis yields the eigenvalues and eigenvectors:
End of explanation
print(np.round_(w2[[0,2]], 4))
Explanation: As we can see, the eigenvalues come in complex conjugate pairs. Again, let us take only the ones in the upper half-plane:
End of explanation
zw = -w2.real # damping coefficient times angular frequency
wD = w2.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
Explanation: These complex eigenvalues can be decomposed into angular frequency and damping coefficient much like in the proportional damping case:
End of explanation
print(LA.norm(v2[:,[0,2]], axis=0))
Explanation: Again, the columns of the modal matrix, the modal vectors, come in conjugate pairs, and each vector has unit norm:
End of explanation
AA = v2[:2,[0,2]]
AB = AA.conjugate()
BA = np.multiply(AA,w2[[0,2]])
BB = BA.conjugate()
v2_new = np.bmat([[AA, AB], [BA, BB]])
print(np.round_(v2_new[:,[0,2,1,3]], 4))
Explanation: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices:
End of explanation
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(v2[dof,2*mode])])
t = np.array([0, np.angle(v2[dof,2*mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
Explanation: Once more we will visualize the complex valued modal vectors through a polar plot of the corresponding amplitudes and angles:
End of explanation |
6,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Domain 54 Transitions
Step1: Testing of the finalized methods in local_complexity.py
Rule 54 domain, should be no non-unifilar transitions
Step2: Rule 18 domain, should be non-unifilar transitions going to state 4
Step3: Spurious state test not working out as I had hoped
Step4: Rule 54 from random initial condition
Step5: rule 18 from random initial condition
Step6: same as above but now with alpha = 0 in state estimation algorithm
Step7: Possible fix to numpy deprecation warning for indexing arrays with non-integer values (strings in my case)
Step8: Look at spurious state tests on rule 18 domain with different lightcone depths | Python Code:
dom_test = ECA(54,domain_54(20*4, 'a'))
dom_test.evolve(20*4)
diagram(dom_test.get_spacetime())
np.random.seed(0)
domain_states = epsilon_field(dom_test.get_spacetime())
domain_states.estimate_states(3,3,1)
domain_states.filter_data()
a = domain_states.state_transition((10,10), 'forward')
print a
b = domain_states.state_transition((10,10), 'right')
print b
c = domain_states.state_transition((10,10), 'left')
print c
print a == b
print a == a
transitions = [a, b]
print a in transitions
print c in transitions
transitions.append(c)
print c in transitions
print transitions
print '1 1\n 1 0 \n 1 '
print len(domain_states.all_transitions())
domain_states.all_transitions()
print domain_states.all_transitions(zipped = False)[2]
print domain_states.all_transitions()[23]
state_overlay_diagram(dom_test.get_spacetime(), domain_states.get_causal_field(), t_max = 40, x_max = 40)
domain_states.all_transitions()
from_2 = []
for transition in domain_states.all_transitions():
if transition[0] == 2:
from_2.append(transition)
from_2
print domain_states.transitions_from_state(2)
print domain_states.transitions_from_state(2)
print domain_states.transitions_from_state(2, zipped = False)
to_2 = []
for transition in domain_states.all_transitions():
if transition[2] == 2:
to_2.append(transition)
to_2
print domain_states.transitions_to_state(2)
print domain_states.transitions_to_state(2)
print domain_states.transitions_to_state(2, zipped = False)
state_list = []
for state in domain_states.causal_states():
state_list.append(state.index())
print state_list
print np.unique(domain_states.get_causal_field())
to = []
sym = []
fro = []
for state in domain_states.all_transitions():
to.append(state[0])
sym.append(state[1])
fro.append(state[2])
to = np.array(to)
sym = np.array(sym)
fro = np.array(fro)
zip(to,sym,fro)
test = [to, sym, fro]
zip(*test)
domain_states.all_transitions()
np.random.seed(0)
dom_18 = ECA(18, domain_18(200))
dom_18.evolve(200)
np.random.seed(0)
dom_states = epsilon_field(dom_18.get_spacetime())
dom_states.estimate_states(3,3,1)
dom_states.filter_data()
print dom_states.number_of_states()
nonunifilar = []
for state in np.unique(dom_states.get_causal_field()):
for i in dom_states.transitions_from_state(state):
for j in dom_states.transitions_from_state(state):
if i == j:
pass
else:
if i[1] == j[1]:
nonunifilar.append(i)
nonunifilar.append(j)
print nonunifilar
nonunifilar = []
for state in np.unique(dom_states.get_causal_field()):
transition_symbols = []
print len(dom_states.all_transitions())
fro, symb, to = dom_states.transitions_from_state(2, zipped = False)
uniques, inds, inverse, counts = np.unique(symb, True,True,True)
print symb
print uniques[inverse]
print inverse
print uniques
print counts
print uniques[counts>1]
print zip(fro[inds[counts>1]], symb[inds[counts>1]], to[inds[counts>1]])
print dom_states.transitions_from_state(2)
print np.where(counts>1)[0]
print np.where(symb == uniques[counts>1][0])[0]
print symb[(symb == uniques[counts>1][0] or symb == uniques[counts>1])][1]
print symb[symb == np.any(uniques[counts>1])]
fro, symb, to = dom_states.transitions_from_state(2, zipped = False)
uniques, counts = np.unique(symb, False, False ,True)
for transition in dom_states.transitions_from_state(2):
if transition[1] in uniques[counts>1]:
print transition
fro, symb, to = dom_states.transitions_from_state(2, zipped = False)
uniques, counts = np.unique(symb, False, False ,True)
for transition in zip(fro, symb, to):
if transition[1] in uniques[counts>1]:
print transition
print np.unique(domain_states.all_transitions(zipped = False)[0])
print np.unique(dom_states.all_transitions(zipped = False)[0])
Explanation: Domain 54 Transitions
End of explanation
print domain_states.nonunifilar_transitions() == []
print len(domain_states.all_transitions())
print domain_states.spurious_states()
Explanation: Testing of the finalized methods in local_complexity.py
Rule 54 domain, should be no non-unifilar transitions
End of explanation
print dom_states.nonunifilar_transitions()
print np.all(dom_states.nonunifilar_transitions() == \
[(2, 'r:000', 4), (2, 'l:000', 1), (2, 'r:000', 1), (2, 'l:000', 4), (3, 'f:00000', 1), (3, 'f:00000', 4)])
print len(dom_states.all_transitions())
Explanation: Rule 18 domain, should be non-unifilar transitions going to state 4
End of explanation
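For reference, a transition set is non-unifilar when some state has two outgoing transitions with the same symbol but different successor states; a standalone sketch of that check over (state, symbol, next_state) tuples like the ones printed above:

```python
def find_nonunifilar(transitions):
    # group successor states by (origin state, symbol); more than one successor = non-unifilar
    successors = {}
    for state, symbol, next_state in transitions:
        successors.setdefault((state, symbol), set()).add(next_state)
    return dict((key, succ) for key, succ in successors.items() if len(succ) > 1)
```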
print dom_states.spurious_states()
print dom_states.transitions_to_state(4)
print dom_states.transitions_from_state(1)
state_overlay_diagram(dom_18.get_spacetime(), dom_states.get_causal_field(), t_max = 50, x_max = 50)
Explanation: Spurious state test not working out as I had hoped
End of explanation
rule_54 = ECA(54, random_state(300, 2))
rule_54.evolve(300)
np.random.seed(0)
states_54 = epsilon_field(rule_54.get_spacetime())
states_54.estimate_states(3,3,1)
states_54.filter_data()
print len(states_54.all_transitions())
print states_54.nonunifilar_transitions()
print states_54.number_of_states()
Explanation: Rule 54 from random initial condition
End of explanation
rule_18 = ECA(18, random_state(300, 2))
rule_18.evolve(300)
np.random.seed(0)
states_18 = epsilon_field(rule_18.get_spacetime())
states_18.estimate_states(3,3,1)
states_18.filter_data()
print states_18.number_of_states()
print len(states_18.all_transitions())
print len(states_18.nonunifilar_transitions())
Explanation: rule 18 from random initial condition
End of explanation
np.random.seed(0)
states_18 = epsilon_field(rule_18.get_spacetime())
states_18.estimate_states(3,3,1, alpha = 0)
states_18.filter_data()
print states_18.number_of_states()
print len(states_18.all_transitions())
print len(states_18.nonunifilar_transitions())
Explanation: same as above but now with alpha = 0 in state estimation algorithm
End of explanation
print '00010'
print [0,0,0,1,0]
print np.array([0,0,0,1,0])
print np.array(list('00010')).astype(int)
Explanation: Possible fix to numpy deprecation warning for indexing arrays with non-integer values (strings in my case)
End of explanation
np.random.seed(0)
domain = ECA(18, domain_18(400))
domain.evolve(400)
np.random.seed(0)
dom_states = epsilon_field(domain.get_spacetime())
dom_states.estimate_states(3,3,1, alpha=0)
dom_states.filter_data()
print dom_states.spurious_states()
print dom_states.nonunifilar_transitions()
print dom_states.transitions_to_state(2)
dom_states.all_transitions()
state_overlay_diagram(domain.get_spacetime(), dom_states.get_causal_field(), t_max = 40, x_max = 40)
Explanation: Look at spurious state tests on rule 18 domain with different lightcone depths
End of explanation |
6,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional Generative Adversarial Network
Learning Objectives
Build a GAN architecture (consisting of a generator and discriminator) in Keras
Define the loss for the generator and discriminator
Define a training step for the GAN using tf.GradientTape() and @tf.function
Train the GAN on the MNIST dataset
Introduction
This notebook demonstrates how to build and train a Generative Adversarial Network (GAN) to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (DCGAN).
GANs consist of two models which are trained simultaneously through an adversarial process. A generator ("the artist") learns to create images that look real, while a discriminator ("the art critic") learns to tell real images apart from fakes.
During training, the generator progressively becomes better at creating images that look real, while the discriminator becomes better at recognizing fake images. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.
In this notebook we'll build a GAN to generate MNIST digits, demonstrating the whole process on the MNIST dataset. The following animation shows a series of images produced by the generator as it was trained for 50 epochs. The images begin as random noise, and increasingly resemble handwritten digits over time.
Import TensorFlow and other libraries
Step1: Load and prepare the dataset
For this notebook, we will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
Step2: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
Step3: Create the generator and discriminator models
Both our generator and discriminator models will be defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). We will start with a Dense layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1.
Exercise. Complete the code below to create the generator model. Start with a dense layer that takes as input random noise. We will create random noise using tf.random.normal([1, 100]). Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample the random noise from dimension 100 to ultimately dimension 28x28x1 (the shape of our original MNIST digits).
Hint
Step4: Let's use the (as yet untrained) generator to create an image.
Step5: The Discriminator
Next, we will build the discriminator. The discriminator is a CNN-based image classifier. It should take in an image of shape 28x28x1 and return a single classification indicating if that image is real or not.
Exercise. Complete the code below to create the CNN-based discriminator model. Your model should be a binary classifier which takes as input a tensor of shape 28x28x1. Experiment with different stacks of convolutions, activation functions, and/or dropout.
Step6: Using .summary() we can have a high-level summary of the generator and discriminator models.
Step7: Let's use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
Step8: Define the loss and optimizers
Next, we will define the loss functions and optimizers for both the generator and discriminator models. Both the generator and discriminator will use the BinaryCrossentropy loss.
Step9: Discriminator loss
The method below quantifies how well the discriminator is able to distinguish real images from fakes.
Recall, when training the discriminator (i.e. holding the generator fixed) the loss function has two parts
Step10: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminator's decisions on the generated images to an array of 1s.
Exercise.
Complete the code to return the cross-entropy loss of the generator's output.
Step11: Optimizers for the generator and discriminator
Note that we must define two separate optimizers for the discriminator and the generator since we will train the two networks separately.
Step12: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
Step13: Define the training loop
Next, we define the training loop for training our GAN. Below we set up global variables for training.
Step14: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
Exercise.
Complete the code below to define the training loop for our GAN. Notice the use of tf.function below. This annotation causes the function train_step to be "compiled". The train_step function takes as input a batch of images. In the rest of the function,
- generated_images is created using the generator function with noise as input
- apply the discriminator model to the images and generated_images to create the real_output and fake_output (resp.)
- define the gen_loss and disc_loss using the methods you defined above.
- compute the gradients of the generator and the discriminator using gen_tape and disc_tape (resp.)
Lastly, we use the .apply_gradients method to make a gradient step for the generator_optimizer and discriminator_optimizer
Step15: We use the train_step function above to define training of our GAN. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
Step16: Generate and save images.
We'll use a small helper function to generate images and save them.
Step17: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes per epoch.
Step18: Restore the latest checkpoint.
Step19: Create a GIF
Lastly, we'll create a gif that shows the progression of our produced images through training.
Step20: Use imageio to create an animated gif using the images saved during training. | Python Code:
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
# To generate GIFs
!python3 -m pip install -q imageio
import glob
import os
import time
import imageio
import matplotlib.pyplot as plt
import numpy as np
import PIL
from IPython import display
from tensorflow.keras import layers
Explanation: Deep Convolutional Generative Adversarial Network
Learning Objectives
Build a GAN architecture (consisting of a generator and discriminator) in Keras
Define the loss for the generator and discriminator
Define a training step for the GAN using tf.GradientTape() and @tf.function
Train the GAN on the MNIST dataset
Introduction
This notebook demonstrates how to build and train a Generative Adversarial Network (GAN) to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (DCGAN).
GANs consist of two models which are trained simultaneously through an adversarial process. A generator ("the artist") learns to create images that look real, while a discriminator ("the art critic") learns to tell real images apart from fakes.
During training, the generator progressively becomes better at creating images that look real, while the discriminator becomes better at recognizing fake images. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.
In this notebook we'll build a GAN to generate MNIST digits, demonstrating the whole process on the MNIST dataset. The following animation shows a series of images produced by the generator as it was trained for 50 epochs. The images begin as random noise, and increasingly resemble handwritten digits over time.
Import TensorFlow and other libraries
End of explanation
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype(
"float32"
)
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
Explanation: Load and prepare the dataset
For this notebook, we will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
End of explanation
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
Explanation: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
End of explanation
# TODO 1
def make_generator_model():
model = tf.keras.Sequential()
# TODO: Your code goes here.
assert model.output_shape == (None, 28, 28, 1)
return model
Explanation: Create the generator and discriminator models
Both our generator and discriminator models will be defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). We will start with a Dense layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1.
Exercise. Complete the code below to create the generator model. Start with a dense layer that takes as input random noise. We will create random noise using tf.random.normal([1, 100]). Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample the random noise from dimension 100 to ultimately dimension 28x28x1 (the shape of our original MNIST digits).
Hint: Experiment with using BatchNormalization or different activation functions like LeakyReLU.
End of explanation
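One architecture that satisfies the exercise (a hedged sketch; many variants work and this is not necessarily the intended solution) is the classic DCGAN generator:

```python
def make_generator_model_example():
    # sketch: dense projection of the 100-d seed, then three Conv2DTranspose upsampling stages
    model = tf.keras.Sequential([
        layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(100,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((7, 7, 256)),
        layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding="same", use_bias=False),  # 14x14
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding="same",
                               use_bias=False, activation="tanh"),  # 28x28x1 output
    ])
    return model
```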
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap="gray")
Explanation: Let's use the (as yet untrained) generator to create an image.
End of explanation
# TODO 1.
def make_discriminator_model():
model = tf.keras.Sequential()
# TODO: Your code goes here.
assert model.output_shape == (None, 1)
return model
Explanation: The Discriminator
Next, we will build the discriminator. The discriminator is a CNN-based image classifier. It should take in an image of shape 28x28x1 and return a single classification indicating if that image is real or not.
Exercise. Complete the code below to create the CNN-based discriminator model. Your model should be a binary classifier which takes as input a tensor of shape 28x28x1. Experiment with different stacks of convolutions, activation functions, and/or dropout.
End of explanation
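One discriminator that satisfies the exercise (again a sketch, not necessarily the intended solution):

```python
def make_discriminator_model_example():
    # sketch: two strided convolutions with dropout, then a single output logit
    model = tf.keras.Sequential([
        layers.Conv2D(64, (5, 5), strides=(2, 2), padding="same", input_shape=[28, 28, 1]),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same"),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1),  # single logit: real vs. fake
    ])
    return model
```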
make_generator_model().summary()
make_discriminator_model().summary()
Explanation: Using .summary() we can have a high-level summary of the generator and discriminator models.
End of explanation
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print(decision)
Explanation: Let's use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
End of explanation
# This method returns a helper function to compute cross entropy loss
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
Explanation: Define the loss and optimizers
Next, we will define the loss functions and optimizers for both the generator and discriminator models. Both the generator and discriminator will use the BinaryCrossentropy loss.
End of explanation
#TODO 2
def discriminator_loss(real_output, fake_output):
real_loss = # TODO: Your code goes here.
fake_loss = # TODO: Your code goes here.
total_loss = # TODO: Your code goes here.
return total_loss
Explanation: Discriminator loss
The method below quantifies how well the discriminator is able to distinguish real images from fakes.
Recall, when training the discriminator (i.e. holding the generator fixed) the loss function has two parts: the loss when sampling from the real data and the loss when sampling from the fake data. The function below compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
Exercise.
Complete the code in the method below. The real_loss should return the cross-entropy for the discriminator's predictions on real images and the fake_loss should return the cross-entropy for the discriminator's predictions on fake images.
End of explanation
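One possible way to fill in the TODOs above (a sketch, not necessarily the intended solution):

```python
def discriminator_loss_example(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)   # real images vs. 1s
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)  # fake images vs. 0s
    return real_loss + fake_loss
```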
# TODO 2
def generator_loss(fake_output):
return # Your code goes here.
Explanation: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminator's decisions on the generated images to an array of 1s.
Exercise.
Complete the code to return the cross-entropy loss of the generator's output.
End of explanation
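One possible way to complete it (a sketch, not necessarily the intended solution):

```python
def generator_loss_example(fake_output):
    # the generator wants the discriminator to label its images as real (1s)
    return cross_entropy(tf.ones_like(fake_output), fake_output)
```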
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
Explanation: Optimizers for the generator and discriminator
Note that we must define two separate optimizers for the discriminator and the generator since we will train the two networks separately.
End of explanation
checkpoint_dir = "./gan_training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(
generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator,
)
Explanation: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
End of explanation
EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# We will reuse this seed overtime (so it's easier)
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
Explanation: Define the training loop
Next, we define the training loop for training our GAN. Below we set up global variables for training.
End of explanation
# TODO 3
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = # TODO: Your code goes here.
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(
gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(
disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(
zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(
zip(gradients_of_discriminator, discriminator.trainable_variables))
Explanation: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
Exercise.
Complete the code below to define the training loop for our GAN. Notice the use of tf.function below. This annotation causes the function train_step to be "compiled". The train_step function takes as input a batch of images. In the rest of the function,
- generated_images is created using the generator function with noise as input
- apply the discriminator model to the images and generated_images to create the real_output and fake_output (resp.)
- define the gen_loss and disc_loss using the methods you defined above.
- compute the gradients of the generator and the discriminator using gen_tape and disc_tape (resp.)
Lastly, we use the .apply_gradients method to make a gradient step for the generator_optimizer and discriminator_optimizer
End of explanation
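For the single TODO inside train_step, the explanation above suggests something along these lines (a sketch):

```python
# inside train_step: produce a batch of fake images from the sampled noise
generated_images = generator(noise, training=True)
```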
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(generator, epoch + 1, seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print(f"Time for epoch {epoch + 1} is {time.time() - start} sec")
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator, epochs, seed)
Explanation: We use the train_step function above to define training of our GAN. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
End of explanation
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap="gray")
plt.axis("off")
plt.savefig(f"./gan_images/image_at_epoch_{epoch:04d}.png")
plt.show()
Explanation: Generate and save images.
We'll use a small helper function to generate images and save them.
End of explanation
# TODO 4
# TODO: Your code goes here.
Explanation: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated digits will look increasingly real. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes per epoch.
End of explanation
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
Explanation: Restore the latest checkpoint.
End of explanation
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open(f"./gan_images/image_at_epoch_{epoch_no:04d}.png")
display_image(EPOCHS)
Explanation: Create a GIF
Lastly, we'll create a gif that shows the progression of our produced images through training.
End of explanation
anim_file = "dcgan.gif"
with imageio.get_writer(anim_file, mode="I") as writer:
filenames = glob.glob("./gan_images/image*.png")
filenames = sorted(filenames)
last = -1
for i, filename in enumerate(filenames):
frame = 2 * (i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6, 2, 0, ""):
display.Image(filename=anim_file)
Explanation: Use imageio to create an animated gif using the images saved during training.
End of explanation |
6,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute effect-matched-spatial filtering (EMS)
This example computes the EMS to reconstruct the time course of the
experimental effect as described in :footcite:SchurgerEtAl2013.
Step1: Note that a similar transformation can be applied with compute_ems
However, this function replicates Schurger et al's original paper, and thus
applies the normalization outside a leave-one-out cross-validation, which we
recommend not to do. | Python Code:
# Author: Denis Engemann <[email protected]>
# Jean-Remi King <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import EMS, compute_ems
from sklearn.model_selection import StratifiedKFold
print(__doc__)
data_path = sample.data_path()
# Preprocess the data
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_ids = {'AudL': 1, 'VisL': 3}
# Read data and create epochs
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(0.5, 45, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_ids, tmin=-0.2, tmax=0.5, picks=picks,
baseline=None, reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
epochs.drop_bad()
epochs.pick_types(meg='grad')
# Setup the data to use it a scikit-learn way:
X = epochs.get_data() # The MEG data
y = epochs.events[:, 2] # The conditions indices
n_epochs, n_channels, n_times = X.shape
# Initialize EMS transformer
ems = EMS()
# Initialize the variables of interest
X_transform = np.zeros((n_epochs, n_times)) # Data after EMS transformation
filters = list() # Spatial filters at each time point
# In the original paper, the cross-validation is a leave-one-out. However,
# we recommend using a Stratified KFold, because leave-one-out tends
# to overfit and cannot be used to estimate the variance of the
# prediction within a given fold.
for train, test in StratifiedKFold(n_splits=5).split(X, y):
# In the original paper, the z-scoring is applied outside the CV.
# However, we recommend to apply this preprocessing inside the CV.
# Note that such scaling should be done separately for each channels if the
# data contains multiple channel types.
X_scaled = X / np.std(X[train])
# Fit and store the spatial filters
ems.fit(X_scaled[train], y[train])
# Store filters for future plotting
filters.append(ems.filters_)
# Generate the transformed data
X_transform[test] = ems.transform(X_scaled[test])
# Average the spatial filters across folds
filters = np.mean(filters, axis=0)
# Plot individual trials
plt.figure()
plt.title('single trial surrogates')
plt.imshow(X_transform[y.argsort()], origin='lower', aspect='auto',
extent=[epochs.times[0], epochs.times[-1], 1, len(X_transform)],
cmap='RdBu_r')
plt.xlabel('Time (ms)')
plt.ylabel('Trials (reordered by condition)')
# Plot average response
plt.figure()
plt.title('Average EMS signal')
mappings = [(key, value) for key, value in event_ids.items()]
for key, value in mappings:
ems_ave = X_transform[y == value]
plt.plot(epochs.times, ems_ave.mean(0), label=key)
plt.xlabel('Time (ms)')
plt.ylabel('a.u.')
plt.legend(loc='best')
plt.show()
# Visualize spatial filters across time
evoked = EvokedArray(filters, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(time_unit='s', scalings=1)
Explanation: Compute effect-matched-spatial filtering (EMS)
This example computes the EMS to reconstruct the time course of the
experimental effect as described in :footcite:SchurgerEtAl2013.
This technique is used to create spatial filters based on the difference
between two conditions. By projecting the trial onto the corresponding spatial
filters, surrogate single trials are created in which multi-sensor activity is
reduced to one time series which exposes experimental effects, if present.
We will first plot a trials x times image of the single trials and order the
trials by condition. A second plot shows the average time series for each
condition. Finally a topographic plot is created which exhibits the temporal
evolution of the spatial filters.
End of explanation
epochs.equalize_event_counts(event_ids)
X_transform, filters, classes = compute_ems(epochs)
Explanation: Note that a similar transformation can be applied with compute_ems
However, this function replicates Schurger et al's original paper, and thus
applies the normalization outside a leave-one-out cross-validation, which we
recommend not to do.
End of explanation |
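To make the role of the spatial filters concrete, here is a minimal NumPy sketch of the core EMS idea on toy data (variable names are illustrative only, not part of the MNE API): at each time point the filter is the normalized difference between the two condition means, and every trial is projected onto it.
import numpy as np
rng = np.random.default_rng(0)
X_toy = rng.normal(size=(40, 5, 20))   # trials x channels x times
y_toy = np.repeat([1, 3], 20)          # two condition labels
# Difference of the condition means at every time point (channels x times)
diff = X_toy[y_toy == 1].mean(0) - X_toy[y_toy == 3].mean(0)
w = diff / np.linalg.norm(diff, axis=0)        # one normalized filter per time point
# Project every trial onto the filters -> one surrogate time series per trial
surrogates = np.einsum('ict,ct->it', X_toy, w)
print(surrogates.shape)                # (40, 20)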
6,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MC855 - Data analysis pyspark
Initializing spark and data
Spark version 2.2.0
Change /home/henrique/Downloads/spark to the path you downloaded and extracted spark
You may need to export hadoop bin to your path before running jupyter notebook
export PATH=$PATH
Step1: Loading directly from the csv
if this is working, the last frame is not needed anymore
Step2: Excluding rows that are not needed
Step3: Now, we need to start using MLlib - spark ml library
https
Step4: Separating features from target
The LogisticRegression function needs two Colums in df - label and features. (https
Step5: Inserting necessary fields insed features
Step6: Split our data into training set and test set
Step7: Apply Logistic Regression | Python Code:
# Import findspark
import findspark
# Initialize and provide path
findspark.init("/home/henrique/Downloads/spark")
# Or use this alternative
#findspark.init()
# Import SparkSession
from pyspark.sql import SparkSession
# Build the SparkSession
spark = SparkSession.builder \
.master("local") \
.appName("Linear Regression Model") \
.config("spark.executor.memory", "1gb") \
.getOrCreate()
sc = spark.sparkContext
Explanation: MC855 - Data analysis pyspark
Initializing spark and data
Spark version 2.2.0
Change /home/henrique/Downloads/spark to the path you downloaded and extracted spark
You may need to export hadoop bin to your path before running jupyter notebook
export PATH=$PATH:/Users/hadoop/hadoop/bin
End of explanation
import pyspark
sql = pyspark.sql.SQLContext(sc)
df = (sql.read
.format("com.databricks.spark.csv") # Choose the bib to oad csv
.option("header", "true") # Use the first line as header
.option("inferSchema", "true") # Try to infer data type - if this is not set all the typer will be str
.load("games.csv")) # File name
df
Explanation: Loading directly from the csv
If this is working, the previous data frame is no longer needed
End of explanation
excludes = [
't1_ban1',
't1_ban2',
't1_ban3',
't1_ban4',
't1_ban5',
't1_champ1_sum1',
't1_champ1_sum2',
't1_champ1id',
't1_champ2_sum1',
't1_champ2_sum2',
't1_champ2id',
't1_champ3_sum1',
't1_champ3_sum2',
't1_champ3id',
't1_champ4_sum1',
't1_champ4_sum2',
't1_champ4id',
't1_champ5_sum1',
't1_champ5_sum2',
't1_champ5id',
't2_ban1',
't2_ban2',
't2_ban3',
't2_ban4',
't2_ban5',
't2_champ1_sum1',
't2_champ1_sum2',
't2_champ1id',
't2_champ2_sum1',
't2_champ2_sum2',
't2_champ2id',
't2_champ3_sum1',
't2_champ3_sum2',
't2_champ3id',
't2_champ4_sum1',
't2_champ4_sum2',
't2_champ4id',
't2_champ5_sum1',
't2_champ5_sum2',
't2_champ5id']
for exclude in excludes:
df = df.drop(exclude)
print(df.columns)
df.printSchema()
df.dtypes
df.select('gameId','t1_inhibitorKills','t2_towerKills','winner').show(15)
Explanation: Excluding rows that are not needed
End of explanation
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import PCA
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.clustering import GaussianMixture
from pyspark.mllib.classification import LogisticRegressionWithLBFGS, LogisticRegressionModel
Explanation: Now, we need to start using MLlib - spark ml library
https://spark.apache.org/docs/2.2.0/ml-classification-regression.html#logistic-regression
http://people.duke.edu/~ccc14/sta-663-2016/21D_Spark_MLib.html
https://docs.databricks.com/spark/latest/mllib/binary-classification-mllib-pipelines.html
https://www.datacamp.com/community/tutorials/apache-spark-tutorial-machine-learning#gs.fip6MdA
https://wesslen.github.io/twitter/predicting_twitter_profile_location_with_pyspark/
End of explanation
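As a quick orientation to the spark.ml workflow linked above, here is a minimal hedged sketch of how a VectorAssembler and a LogisticRegression can be chained in a Pipeline; the column names x1/x2 and some_training_df are placeholders, not part of this dataset.
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
# Assume a DataFrame with numeric columns 'x1', 'x2' and a numeric 'label' column
assembler_demo = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
lr_demo = LogisticRegression(maxIter=10)
pipeline_demo = Pipeline(stages=[assembler_demo, lr_demo])
# model_demo = pipeline_demo.fit(some_training_df)  # some_training_df is hypothetical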
# Renaming winner to label
df = df.withColumnRenamed("winner","label")
df.printSchema()
Explanation: Separating features from target
The LogisticRegression function needs two columns in the df - label and features. (https://stackoverflow.com/questions/44475917/how-does-an-mllib-estimator-know-what-are-the-features-and-target-columns)
Features are the variables we are going to use to predict the target variable.
First, winner is going to be the label, since the winner is what we want to predict.
Then, all the other columns are going to be assembled into the features vector.
End of explanation
feat_fields = ['gameDuration',
'seasonId',
'firstBlood',
'firstTower',
'firstInhibitor',
'firstBaron',
'firstDragon',
'firstRiftHerald',
't1_towerKills',
't1_inhibitorKills',
't1_baronKills',
't1_dragonKills',
't1_riftHeraldKills',
't2_towerKills',
't2_inhibitorKills',
't2_baronKills',
't2_dragonKills',
't2_riftHeraldKills']
assembler = VectorAssembler(inputCols=feat_fields, outputCol="features")
output = assembler.transform(df)
# The df will contain all the old columns and a new one, features,
# which will contain the features we want
output.select('gameDuration','seasonId', 'features').show(20)
Explanation: Inserting necessary fields insed features
End of explanation
(trainingData, testData) = output.randomSplit([0.7, 0.3], seed = 1234)
print("Training Dataset Count: " + str(trainingData.count()))
print("Test Dataset Count: " + str(testData.count()))
Explanation: Split our data into training set and test set
End of explanation
lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0.8, family = "binomial")
lrModel = lr.fit(trainingData)
import matplotlib.pyplot as plt
import numpy as np
beta = np.sort(lrModel.coefficients)
plt.plot(beta)
plt.ylabel('Beta Coefficients')
plt.show()
trainingSummary = lrModel.summary
# Obtain the objective per iteration
objectiveHistory = trainingSummary.objectiveHistory
plt.plot(objectiveHistory)
plt.ylabel('Objective Function')
plt.xlabel('Iteration')
plt.show()
# Obtain the receiver-operating characteristic as a dataframe and areaUnderROC.
print("areaUnderROC: " + str(trainingSummary.areaUnderROC))
#trainingSummary.roc.show(n=10, truncate=15)
roc = trainingSummary.roc.toPandas()
plt.plot(roc['FPR'],roc['TPR'])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve')
plt.show()
pr = trainingSummary.pr.toPandas()
plt.plot(pr['recall'],pr['precision'])
plt.ylabel('Precision')
plt.xlabel('Recall')
plt.show()
predictions = lrModel.transform(testData)
predictions.select("label","prediction","probability")\
.show(n=10, truncate=40)
print("Number of correct prediction: " + str(predictions.filter(predictions['prediction'] == predictions['label']).count()))
print("Total of elements: " + str(testData.count()))
print(str(predictions.filter(predictions['prediction'] == predictions['label']).count()/testData.count()*100) + '%')
predictions.filter(predictions['prediction'] == predictions['label'])\
.select("gameId","probability","label","prediction").show(20)
from pyspark.ml.evaluation import BinaryClassificationEvaluator
print("Training: Area Under ROC: " + str(trainingSummary.areaUnderROC))
# Evaluate model
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
print("Test: Area Under ROC: " + str(evaluator.evaluate(predictions, {evaluator.metricName: "areaUnderROC"})))
Explanation: Apply Logistic Regression
End of explanation |
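A natural follow-up, not performed in this notebook, would be to tune regParam and elasticNetParam by cross-validation; a minimal sketch reusing the lr, trainingData and BinaryClassificationEvaluator objects defined above:
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1, 0.3])
        .addGrid(lr.elasticNetParam, [0.0, 0.5, 0.8])
        .build())
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3)
# cv_model = cv.fit(trainingData)  # slow; keeps the best parameter combination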
6,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
Step13: Quick peek at your data
This tutorial uses a version of the MIT Human Motion dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step14: Create a dataset
datasets.create-dataset-api
Create the Dataset
Next, create the Dataset resource using the create method for the VideoDataset class, which takes the following parameters
Step15: Example Output
Step16: Example output
Step17: Example output
Step18: Example output
Step19: Make a batch input file
Now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs
Step20: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters
Step21: Example output
Step22: Example Output
Step23: Example Output | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI: Vertex AI Migration: AutoML Video Classificaton
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ14%20Vertex%20SDK%20AutoML%20Video%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ14%20Vertex%20SDK%20AutoML%20Video%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the Human Motion dataset from MIT. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
IMPORT_FILE = "gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv"
Explanation: Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
This tutorial uses a version of the MIT Human Motion dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
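If you also want a quick class-balance check, here is a small hedged sketch; it assumes pandas plus gcsfs are installed so pandas can read the gs:// path directly, and that the label is the third column from the end (it is followed by the start and end offsets in the rows printed above).
import pandas as pd
index_df = pd.read_csv(FILE, header=None)     # the index file has no header row
label_col = index_df.columns[-3]              # assumed label position
print(index_df[label_col].value_counts())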
dataset = aip.VideoDataset.create(
display_name="MIT Human Motion" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.video.classification,
)
print(dataset.resource_name)
Explanation: Create a dataset
datasets.create-dataset-api
Create the Dataset
Next, create the Dataset resource using the create method for the VideoDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
This operation may take several minutes.
End of explanation
dag = aip.AutoMLVideoTrainingJob(
display_name="hmdb_" + TIMESTAMP,
prediction_type="classification",
)
print(dag)
Explanation: Example Output:
INFO:google.cloud.aiplatform.datasets.dataset:Creating VideoDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create VideoDataset backing LRO: projects/759209241365/locations/us-central1/datasets/5948525032035581952/operations/6913187331100901376
INFO:google.cloud.aiplatform.datasets.dataset:VideoDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/5948525032035581952
INFO:google.cloud.aiplatform.datasets.dataset:To use this VideoDataset in another session:
INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.VideoDataset('projects/759209241365/locations/us-central1/datasets/5948525032035581952')
INFO:google.cloud.aiplatform.datasets.dataset:Importing VideoDataset data: projects/759209241365/locations/us-central1/datasets/5948525032035581952
INFO:google.cloud.aiplatform.datasets.dataset:Import VideoDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/5948525032035581952/operations/6800597340416638976
Train a model
training.automl-api
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLVideoTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: A video classification model.
object_tracking: A video object tracking model.
action_recognition: A video action recognition model.
The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
End of explanation
model = dag.run(
dataset=dataset,
model_display_name="hmdb_" + TIMESTAMP,
training_fraction_split=0.8,
test_fraction_split=0.2,
)
Explanation: Example output:
<google.cloud.aiplatform.training_jobs.AutoMLVideoTrainingJob object at 0x7fc3b6c90f10>
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
The run method when completed returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
End of explanation
# Get model resource ID
models = aip.Model.list(filter="display_name=hmdb_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
Explanation: Example output:
INFO:google.cloud.aiplatform.training_jobs:View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/6090621516762841088?project=759209241365
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088 current state:
...
INFO:google.cloud.aiplatform.training_jobs:AutoMLVideoTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/6090621516762841088
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/1899701006099283968
Evaluate the model
projects.locations.models.evaluations.list
Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
test_items = ! gsutil cat $IMPORT_FILE | head -n2
if len(str(test_items[0]).split(",")) == 5:
_, test_item_1, test_label_1, _, _ = str(test_items[0]).split(",")
_, test_item_2, test_label_2, _, _ = str(test_items[1]).split(",")
else:
test_item_1, test_label_1, _, _ = str(test_items[0]).split(",")
test_item_2, test_label_2, _, _ = str(test_items[1]).split(",")
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
Explanation: Example output:
name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824"
metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml"
metrics {
struct_value {
fields {
key: "auPrc"
value {
number_value: 0.9891107
}
}
fields {
key: "confidenceMetrics"
value {
list_value {
values {
struct_value {
fields {
key: "precision"
value {
number_value: 0.2
}
}
fields {
key: "recall"
value {
number_value: 1.0
}
}
}
}
Make batch predictions
predictions.batch-prediction
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {
"content": test_item_1,
"mimeType": "video/avi",
"timeSegmentStart": "0.0s",
"timeSegmentEnd": "5.0s",
}
f.write(json.dumps(data) + "\n")
data = {
"content": test_item_2,
"mimeType": "video/avi",
"timeSegmentStart": "0.0s",
"timeSegmentEnd": "5.0s",
}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
Explanation: Make a batch input file
Now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs:
content: The Cloud Storage path to the video.
mimeType: The content type. In our example, it is a avi file.
timeSegmentStart: The start timestamp in the video to do prediction on. Note, the timestamp must be specified as a string and followed by s (second), m (minute) or h (hour).
timeSegmentEnd: The end timestamp in the video to do prediction on.
End of explanation
batch_predict_job = model.batch_predict(
job_display_name="hmdb_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
batch_predict_job.wait()
Explanation: Example output:
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
Explanation: Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
timeSegmentStart: The time offset in the video to the start of the video sequence.
timeSegmentEnd: The time offset in the video to the end of the video sequence.
End of explanation
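To reduce each JSON line to a single predicted label, a small sketch that post-processes the dictionaries shown above (pure Python, no extra API calls); the helper name is illustrative only.
def top_label(line):
    # Each entry under 'prediction' carries a displayName and a confidence score.
    best = max(line["prediction"], key=lambda p: p["confidence"])
    return line["instance"]["content"], best["displayName"], best["confidence"]
# Example: reuse the last parsed `line` from the loop above.
# print(top_label(line))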
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline trainig job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom trainig job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Example Output:
{'instance': {'content': 'gs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi', 'mimeType': 'video/avi', 'timeSegmentStart': '0.0s', 'timeSegmentEnd': '5.0s'}, 'prediction': [{'id': '4517318233950257152', 'displayName': 'cartwheel', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.7450977}, {'id': '6823161243163951104', 'displayName': 'pullup', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.07339612}, {'id': '2211475224736563200', 'displayName': 'golf', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.065019816}, {'id': '9129004252377645056', 'displayName': 'kick_ball', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.06463309}, {'id': '121804997636653056', 'displayName': 'ride_horse', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.05185325}]}
{'instance': {'content': 'gs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi', 'mimeType': 'video/avi', 'timeSegmentStart': '0.0s', 'timeSegmentEnd': '5.0s'}, 'prediction': [{'id': '4517318233950257152', 'displayName': 'cartwheel', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.76310456}, {'id': '2211475224736563200', 'displayName': 'golf', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.06767218}, {'id': '6823161243163951104', 'displayName': 'pullup', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.05853845}, {'id': '9129004252377645056', 'displayName': 'kick_ball', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.055601567}, {'id': '121804997636653056', 'displayName': 'ride_horse', 'type': 'segment-classification', 'timeSegmentStart': '0s', 'timeSegmentEnd': '5s', 'confidence': 0.055083193}]}
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
6,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tensor Products & Partial Traces
Contents
Tensor Products
Partial Trace
Super Operators & Tensor Manipulations
Step1: <a id='tensor'></a>
Tensor Products
To describe the states of multipartite quantum systems - such as two coupled qubits, a qubit coupled to an oscillator, etc. - we need to expand the Hilbert space by taking the tensor product of the state vectors for each of the system components. Similarly, the operators acting on the state vectors in the combined Hilbert space (describing the coupled system) are formed by taking the tensor product of the individual operators.
In QuTiP the function tensor is used to accomplish this task. This function takes as argument a collection
Step2: or equivalently using the list format
Step3: This is straightforward to generalize to more qubits by adding more component state vectors in the argument list to the tensor function, as illustrated in the following example
Step4: This state is slightly more complicated, describing two qubits in a superposition between the up and down states, while the third qubit is in its ground state.
To construct operators that act on an extended Hilbert space of a combined system, we similarly pass a list of operators for each component system to the tensor function. For example, to form the operator that represents the simultaneous action of the $\sigma_x$ operator on two qubits
Step5: To create operators in a combined Hilbert space that only act only on a single component, we take the tensor product of the operator acting on the subspace of interest, with the identity operators corresponding to the components that are to be unchanged. For example, the operator that represents $\sigma_z$ on the first qubit in a two-qubit system, while leaving the second qubit unaffected
Step6: Example
Step7: Three coupled qubits
The two-qubit example is easily generalized to three coupled qubits
Step8: Jaynes-Cummings Model
The simplest possible quantum mechanical description for light-matter interaction is encapsulated in the Jaynes-Cummings model, which describes the coupling between a two-level atom and a single-mode electromagnetic field (a cavity mode). Denoting the energy splitting of the atom and cavity omega_a and omega_c, respectively, and the atom-cavity interaction strength g, the Jaynes-Cumming Hamiltonian can be constructed as
Step9: <a id='partial'></a>
Partial Trace
The partial trace is an operation that reduces the dimension of a Hilbert space by eliminating some degrees of freedom by averaging (tracing). In this sense it is therefore the converse of the tensor product. It is useful when one is interested in only a part of a coupled quantum system. For open quantum systems, this typically involves tracing over the environment leaving only the system of interest. In QuTiP the class method ptrace is used to take partial traces. ptrace acts on the Qobj instance for which it is called, and it takes one argument sel, which is a list of integers that mark the component systems that should be kept. All other components are traced out.
For example, the density matrix describing a single qubit obtained from a coupled two-qubit system is obtained via
Step10: Note that the partial trace always results in a density matrix (mixed state), regardless of whether the composite system is a pure state (described by a state vector) or a mixed state (described by a density matrix)
Step11: <a id='super'></a>
Super Operators & Tensor Manipulations
Superoperators are operators
that act on Liouville space, the vectorspace of linear operators. Superoperators can be represented using the isomorphism $\mathrm{vec}
Step12: In the former case, the result correctly has four copies
of the compound index with dims [2, 3]. In the latter
case, however, each of the Hilbert space indices is listed
independently and in the wrong order.
The super_tensor function performs the needed
rearrangement, providing the most direct analog to tensor on
the underlying Hilbert space. In particular, for any two type="oper"
Qobjs A and B, to_super(tensor(A, B)) == super_tensor(to_super(A), to_super(B)) and
operator_to_vector(tensor(A, B)) == super_tensor(operator_to_vector(A), operator_to_vector(B)). Returning to the previous example
Step13: The composite function automatically switches between
tensor and super_tensor based on the type
of its arguments, such that composite(A, B) returns an appropriate Qobj to
represent the composition of two systems.
Step14: QuTiP also allows more general tensor manipulations that are
useful for converting between superoperator representations.
In particular, the tensor_contract function allows for
contracting one or more pairs of indices. As detailed in
the channel contraction tutorial, this can be used to find
superoperators that represent partial trace maps.
Using this functionality, we can construct some quite exotic maps,
such as a map from $3 \times 3$ operators to $2 \times 2$
operators | Python Code:
import numpy as np
from qutip import *
Explanation: Tensor Products & Partial Traces
Contents
Tensor Products
Partial Trace
Super Operators & Tensor Manipulations
End of explanation
tensor(basis(2, 0), basis(2, 0))
Explanation: <a id='tensor'></a>
Tensor Products
To describe the states of multipartite quantum systems - such as two coupled qubits, a qubit coupled to an oscillator, etc. - we need to expand the Hilbert space by taking the tensor product of the state vectors for each of the system components. Similarly, the operators acting on the state vectors in the combined Hilbert space (describing the coupled system) are formed by taking the tensor product of the individual operators.
In QuTiP the function tensor is used to accomplish this task. This function takes as argument a collection::
python
tensor(op1, op2, op3)
or a list:
python
tensor([op1, op2, op3])
of state vectors or operators and returns a composite quantum object for the combined Hilbert space. The function accepts an arbitrary number of states or operators as arguments. The type of the returned quantum object is the same as that of the input(s).
For example, the state vector describing two qubits in their ground states is formed by taking the tensor product of the two single-qubit ground state vectors:
End of explanation
tensor([basis(2, 0), basis(2, 0)])
Explanation: or equivalently using the list format:
End of explanation
tensor((basis(2, 0) + basis(2, 1)).unit(),
(basis(2, 0) + basis(2, 1)).unit(), basis(2, 0))
Explanation: This is straightforward to generalize to more qubits by adding more component state vectors in the argument list to the tensor function, as illustrated in the following example:
End of explanation
tensor(sigmax(), sigmax())
Explanation: This state is slightly more complicated, describing two qubits in a superposition between the up and down states, while the third qubit is in its ground state.
To construct operators that act on an extended Hilbert space of a combined system, we similarly pass a list of operators for each component system to the tensor function. For example, to form the operator that represents the simultaneous action of the $\sigma_x$ operator on two qubits:
End of explanation
tensor(sigmaz(), identity(2))
Explanation: To create operators in a combined Hilbert space that act only on a single component, we take the tensor product of the operator acting on the subspace of interest, with the identity operators corresponding to the components that are to be unchanged. For example, the operator that represents $\sigma_z$ on the first qubit in a two-qubit system, while leaving the second qubit unaffected:
End of explanation
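For larger registers it can be convenient to wrap this pattern in a small helper; a minimal sketch (the function name is illustrative, not a QuTiP API):
def single_site(op, k, N):
    """Embed a single-qubit operator op at position k of an N-qubit register."""
    ops = [identity(2)] * N
    ops[k] = op
    return tensor(ops)
single_site(sigmaz(), 0, 2)  # same operator as tensor(sigmaz(), identity(2))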
H = tensor(sigmaz(), identity(2)) + tensor(identity(2),
sigmaz()) + 0.05 * tensor(sigmax(), sigmax())
H
Explanation: Example: Constructing composite Hamiltonians
The tensor function is extensively used when constructing Hamiltonians for composite systems. Here we'll look at some simple examples.
Two coupled qubits
First, let's consider a system of two coupled qubits. Assume that both qubit has equal energy splitting, and that the qubits are coupled through a $\sigma_x\otimes\sigma_x$ interaction with strength $g = 0.05$ (in units where the bare qubit energy splitting is unity). The Hamiltonian describing this system is:
End of explanation
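A quick way to see what the coupling term does is to compare the eigenenergies with and without it (a usage sketch):
H0 = tensor(sigmaz(), identity(2)) + tensor(identity(2), sigmaz())
print(H0.eigenenergies())  # uncoupled levels: -2, 0, 0, 2
print(H.eigenenergies())   # the 0.05 sigma_x sigma_x term splits the degenerate pair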
H = (tensor(sigmaz(), identity(2), identity(2)) +
tensor(identity(2), sigmaz(), identity(2)) +
tensor(identity(2), identity(2), sigmaz()) +
0.5 * tensor(sigmax(), sigmax(), identity(2)) +
0.25 * tensor(identity(2), sigmax(), sigmax()))
H
Explanation: Three coupled qubits
The two-qubit example is easily generalized to three coupled qubits:
End of explanation
N = 10 #Number of Fock states for cavity mode.
omega_a = 1.0
omega_c = 1.25
g = 0.05
a = tensor(identity(2), destroy(N))
sm = tensor(destroy(2), identity(N))
sz = tensor(sigmaz(), identity(N))
H = 0.5 * omega_a * sz + omega_c * a.dag() * a + g * (a.dag() * sm + a * sm.dag())
Explanation: Jaynes-Cummings Model
The simplest possible quantum mechanical description for light-matter interaction is encapsulated in the Jaynes-Cummings model, which describes the coupling between a two-level atom and a single-mode electromagnetic field (a cavity mode). Denoting the energy splitting of the atom and cavity omega_a and omega_c, respectively, and the atom-cavity interaction strength g, the Jaynes-Cummings Hamiltonian can be constructed as:
End of explanation
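As a quick usage check of the Hamiltonian constructed above (a sketch, not part of the original guide), one can inspect its lowest dressed-state energies:
print(H.eigenenergies()[:4])  # lowest few eigenenergies of the Jaynes-Cummings Hamiltonian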
psi = tensor(basis(2, 0), basis(2, 1))
psi.ptrace(0)
psi.ptrace(1)
Explanation: <a id='partial'></a>
Partial Trace
The partial trace is an operation that reduces the dimension of a Hilbert space by eliminating some degrees of freedom by averaging (tracing). In this sense it is therefore the converse of the tensor product. It is useful when one is interested in only a part of a coupled quantum system. For open quantum systems, this typically involves tracing over the environment leaving only the system of interest. In QuTiP the class method ptrace is used to take partial traces. ptrace acts on the Qobj instance for which it is called, and it takes one argument sel, which is a list of integers that mark the component systems that should be kept. All other components are traced out.
For example, the density matrix describing a single qubit obtained from a coupled two-qubit system is obtained via:
End of explanation
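The sel argument can also keep several components at once; a small sketch on a three-qubit state, keeping the first two qubits and tracing out the third:
psi3 = tensor(basis(2, 0), basis(2, 1), (basis(2, 0) + basis(2, 1)).unit())
rho12 = psi3.ptrace([0, 1])  # keep qubits 0 and 1, trace out qubit 2
rho12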
psi = tensor((basis(2, 0) + basis(2, 1)).unit(), basis(2, 0))
psi.ptrace(0)
rho = tensor(ket2dm((basis(2, 0) + basis(2, 1)).unit()), fock_dm(2, 0))
rho.ptrace(0)
Explanation: Note that the partial trace always results in a density matrix (mixed state), regardless of whether the composite system is a pure state (described by a state vector) or a mixed state (described by a density matrix):
End of explanation
A = qeye([2])
B = qeye([3])
to_super(tensor(A, B)).dims
tensor(to_super(A), to_super(B)).dims
Explanation: <a id='super'></a>
Super Operators & Tensor Manipulations
Superoperators are operators
that act on Liouville space, the vectorspace of linear operators. Superoperators can be represented using the isomorphism $\mathrm{vec} : \mathcal{L}(\mathcal{H}) \to \mathcal{H} \otimes \mathcal{H}$.
To represent superoperators acting on $\mathcal{L}(\mathcal{H}_1 \otimes \mathcal{H}_2)$ thus takes some tensor rearrangement to get the desired ordering
$\mathcal{H}_1 \otimes \mathcal{H}_2 \otimes \mathcal{H}_1 \otimes \mathcal{H}_2$.
In particular, this means that tensor does not act as one might expect on the results of to_super:
End of explanation
super_tensor(to_super(A), to_super(B)).dims
Explanation: In the former case, the result correctly has four copies
of the compound index with dims [2, 3]. In the latter
case, however, each of the Hilbert space indices is listed
independently and in the wrong order.
The super_tensor function performs the needed
rearrangement, providing the most direct analog to tensor on
the underlying Hilbert space. In particular, for any two type="oper"
Qobjs A and B, to_super(tensor(A, B)) == super_tensor(to_super(A), to_super(B)) and
operator_to_vector(tensor(A, B)) == super_tensor(operator_to_vector(A), operator_to_vector(B)). Returning to the previous example:
End of explanation
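These identities are easy to verify directly; a quick sketch with two random Hermitian operators (rand_herm is used purely for illustration, and the comparisons should print True up to numerical tolerance):
from qutip import rand_herm
A_op = rand_herm(2)
B_op = rand_herm(3)
print(to_super(tensor(A_op, B_op)) == super_tensor(to_super(A_op), to_super(B_op)))
print(operator_to_vector(tensor(A_op, B_op)) ==
      super_tensor(operator_to_vector(A_op), operator_to_vector(B_op)))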
composite(A, B).dims
composite(to_super(A), to_super(B)).dims
Explanation: The composite function automatically switches between
tensor and super_tensor based on the type
of its arguments, such that composite(A, B) returns an appropriate Qobj to
represent the composition of two systems.
End of explanation
tensor_contract(composite(to_super(A), to_super(B)), (1, 3), (4, 6)).dims
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/guide.css", "r").read()
return HTML(styles)
css_styling()
Explanation: QuTiP also allows more general tensor manipulations that are
useful for converting between superoperator representations.
In particular, the tensor_contract function allows for
contracting one or more pairs of indices. As detailed in
the channel contraction tutorial, this can be used to find
superoperators that represent partial trace maps.
Using this functionality, we can construct some quite exotic maps,
such as a map from $3 \times 3$ operators to $2 \times 2$
operators:
End of explanation |
6,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-aerchem', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-AERCHEM
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
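For illustration only, a filled-in call follows the pattern shown in the cell above; the name and e-mail address below are placeholders, not the document's actual authors.
# Hypothetical example of the call signature shown above (kept commented out on purpose):
# DOC.set_author("Jane Doe", "jane.doe@example.org")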
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of the atmospheric chemistry code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is the order of the split operators alternated (reversed) between timesteps, e.g. to reduce splitting error?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
6,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistical Analysis
In this notebook, we'll use selected statistical algorithms to analyze our dataset. Specifically, we'll do the following
Step1: Perform a Distribution Analysis
A distribution analysis helps us understand the distribution of various attributes of our data.
Step2: Per the Data Guide provided with the data, here are the corresponding meanings for the weather condition values.
-1 - Data missing or out of range
1 - Fine no high winds
2 - Raining no high winds
3 - Snowing no high winds
4 - Fine + high winds
5 - Raining + high winds
6 - Snowing + high winds
7 - Fog or mist
8 - Other
9 - Unknown
Step3: Categorical Variable Analysis
A categorical variable analysis helps us understands categorical types of data. Categorical types are non-numeric. In this example, we're using day of the week. Technically it's a category as opposed to purely numeric data. The creators of the dataset have already converted the category - the name of the day of the week - to a number. If they hadn't done this, we could use Pandas to do it for us, and then perform our analysis.
Step4: Linear Regression
"In statistics, regression analysis is a statistical process for estimating the relationships among variables...More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed."
Linear regression is an approach for predicting a quantitative response using a single feature (or "predictor" or "input variable").
For this recipe we are going to use the Advertising dataset from 'An Introduction to Statistical Learning
with Applications in R'.
Step5: Time-Series Analysis
Step6: Outlier Detection
Outlier detection is used to find outliers in the data that can throw off your analysis.
Outliers come in two flavors
Step7: Logistic Regression
Logistic Regression is a statistical technique used to predict a binary outcome, for example purchase/no-purchase.
For this recipe we are going to use the Heart dataset from 'An Introduction to Statistical Learning with Applications in R'.
Step8: Random Forest
A random forest is an ensemble (a group) of decision trees which will output a prediction value
Step9: Support Vector Machine (SVM)
Support Vector Machines (SVM) are a group of supervised learning methods that can be applied to classification or regression.
For this recipe we are going to use the Heart dataset from 'An Introduction to Statistical Learning with Applications in R'.
Step10: Save the Models for Production Use | Python Code:
# Import the Python libraries we need
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Define a variable for the accidents data file
accidents_data_file = '/Users/robert.dempsey/Dropbox/Private/Art of Skill Hacking/Books/' \
'Python Business Intelligence Cookbook/Data/Stats19-Data1979-2004/Accidents7904.csv'
accidents = pd.read_csv(accidents_data_file,
sep=',',
header=0,
index_col=False,
parse_dates=True,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
accidents.head()
# Get a full list of the columns and data types
accidents.dtypes
Explanation: Statistical Analysis
In this notebook, we'll use selected statistical algorithms to analyze our dataset. Specifically, we'll do the following:
Statistical analysis
Distribution analysis
Categorical variable analysis
Linear Regression
Time-series analysis
Outlier detection
Predictive analysis
Logistic regression
Random Forest
Support Vector Machine (SVM)
Save the results
Save a predictive model for production use
First we'll get our data into our dataframe.
Create a DataFrame of the Data
End of explanation
# Create a histogram of the weather conditions
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(accidents['Weather_Conditions'],
range = (accidents['Weather_Conditions'].min(),accidents['Weather_Conditions'].max()))
counts, bins, patches = ax.hist(accidents['Weather_Conditions'], facecolor='green', edgecolor='gray')
ax.set_xticks(bins)
plt.title('Weather Conditions Distribution')
plt.xlabel('Weather Condition')
plt.ylabel('Count of Weather Condition')
plt.show()
Explanation: Perform a Distribution Analysis
A distribution analysis helps us understand the distribution of various attributes of our data.
End of explanation
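Before plotting, a quick numeric summary is often useful as well. A minimal sketch, assuming the accidents DataFrame loaded above:
# Numeric summary of the column we are about to plot
accidents['Weather_Conditions'].describe()
# Number of records per weather condition code
accidents['Weather_Conditions'].value_counts().sort_index()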
# Create a box plot of the light conditions
# The ';' at the end of the function call suppresses the usual matplotlib output
accidents.boxplot(column='Light_Conditions',
return_type='dict');
# Create a box plot of the light conditions grouped by weather conditions
accidents.boxplot(column='Light_Conditions',
by = 'Weather_Conditions',
return_type='dict');
Explanation: Per the Data Guide provided with the data, here are the corresponding meanings for the weather condition values.
-1 - Data missing or out of range
1 - Fine no high winds
2 - Raining no high winds
3 - Snowing no high winds
4 - Fine + high winds
5 - Raining + high winds
6 - Snowing + high winds
7 - Fog or mist
8 - Other
9 - Unknown
End of explanation
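If readable labels are preferred over the numeric codes, they can be mapped explicitly. A small sketch based on the code list above, assuming the accidents DataFrame from earlier:
# Map the numeric weather codes to the labels given in the Data Guide
weather_labels = {
    -1: 'Data missing or out of range',
    1: 'Fine no high winds',
    2: 'Raining no high winds',
    3: 'Snowing no high winds',
    4: 'Fine + high winds',
    5: 'Raining + high winds',
    6: 'Snowing + high winds',
    7: 'Fog or mist',
    8: 'Other',
    9: 'Unknown'
}
accidents['Weather_Conditions'].map(weather_labels).value_counts()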
# Plot the distribution of casualties by day of the week
# Sunday = 1
casualty_count = accidents.groupby('Day_of_Week').Number_of_Casualties.count()
casualty_probability = accidents.groupby('Day_of_Week').Number_of_Casualties.sum()/accidents.groupby('Day_of_Week').Number_of_Casualties.count()
fig = plt.figure(figsize=(8,4))
ax1 = fig.add_subplot(121)
ax1.set_xlabel('Day of Week')
ax1.set_ylabel('Casualty Count')
ax1.set_title("Casualties by Day of Week")
casualty_count.plot(kind='bar')
ax2 = fig.add_subplot(122)
casualty_probability.plot(kind = 'bar')
ax2.set_xlabel('Day of Week')
ax2.set_ylabel('Probability of Casualties')
ax2.set_title("Probability of Casualties by Day of Week")
Explanation: Categorical Variable Analysis
A categorical variable analysis helps us understand categorical types of data. Categorical types are non-numeric. In this example, we're using day of the week. Technically it's a category as opposed to purely numeric data. The creators of the dataset have already converted the category - the name of the day of the week - to a number. If they hadn't done this, we could use Pandas to do it for us, and then perform our analysis.
End of explanation
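For reference, this is roughly how pandas could do that conversion for us if the dataset still contained day names; the day_name series below is hypothetical, not part of the accidents data.
# Hypothetical example: converting day names to numeric codes with pandas
days = pd.Series(['Sunday', 'Monday', 'Tuesday', 'Monday'], name='day_name')
days_as_category = days.astype('category')
days_as_category.cat.codes  # one integer code per category value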
# Import the data
# Define a variable for the accidents data file
data_file = '../data/ISL/Advertising.csv'
ads = pd.read_csv(data_file,
sep=',',
header=0,
index_col=False,
parse_dates=True,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
ads.head()
# How much data do we have?
ads.shape
# Visualize the relationship between sales and TV in a scatterplot
ads.plot(kind='scatter',
x='TV',
y='Sales',
figsize=(16, 8))
# Import the Python libraries we need
from sklearn.linear_model import LinearRegression
# Create an instance of the LinearRegression model
lm = LinearRegression()
# Create X and y
features = ['TV', 'Radio', 'Newspaper']
x = ads[features]
y = ads.Sales
# Fit the data to the model
lm.fit(x, y)
# Print the intercept and coefficients
# Intercept: the expected mean value of Y when all X=0
# Coefficients: the expected change in Y for a one-unit increase in each X, holding the other features fixed
print(lm.intercept_)
print(lm.coef_)
# Aggregate the feature names and coefficients to create a single object
fc = zip(features, lm.coef_)
list(fc)
# Calculate the R-squared value: a statistical measure of how close the data are to the fitted regression line
# The closer this number is to 1 (i.e. 100% of the variance explained), the better the model fits the data
lm.score(x, y)
# Make a sales prediction for a new observation
# Given the ad spend for three channels how many thousands of widgets do we predict we will sell
# Dollars (in thousands) spent on tv, radio, and newspaper advertising
lm.predict([[75.60, 132.70, 34]])  # nested list: predict expects a 2D array of observations
Explanation: Linear Regression
"In statistics, regression analysis is a statistical process for estimating the relationships among variables...More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed."
Linear regression is an approach for predicting a quantitative response from one or more features (or "predictors" or "input variables"); with several features, as in this recipe, it is usually called multiple linear regression.
For this recipe we are going to use the Advertising dataset from 'An Introduction to Statistical Learning
with Applications in R'.
End of explanation
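With the three advertising channels used above, the fitted model has the multiple linear regression form $Sales = \beta_0 + \beta_1\,TV + \beta_2\,Radio + \beta_3\,Newspaper + \varepsilon$, where $\beta_0$ is lm.intercept_ and the remaining coefficients are lm.coef_.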
# Create a dataframe containing the total number of casualties by date
casualty_count = accidents.groupby('Date').agg({'Number_of_Casualties': np.sum})
# Convert the index to a DateTimeIndex
casualty_count.index = pd.to_datetime(casualty_count.index)
# Sort the index so the plot looks correct
casualty_count.sort_index(inplace=True,
ascending=True)
# Plot the data
casualty_count.plot(figsize=(18, 4))
# Plot one year of the data
casualty_count['2000'].plot(figsize=(18, 4))
# Plot the yearly total casualty count for each year in the 1980's
the1980s = casualty_count['1980-01-01':'1989-12-31'].groupby(casualty_count['1980-01-01':'1989-12-31'].index.year).sum()
the1980s
# Show the plot
the1980s.plot(kind='bar',
figsize=(18, 4))
# Plot the 80's data as a line graph to better see the differences in years
the1980s.plot(figsize=(18, 4))
Explanation: Time-Series Analysis
End of explanation
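One more view that is often useful here: resampling the daily series to a coarser frequency. A minimal sketch, assuming the casualty_count frame built above:
# Resample the daily casualty counts to monthly totals for a smoother view
monthly_casualties = casualty_count.resample('M').sum()
monthly_casualties.plot(figsize=(18, 4))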
# Import the dataset
data_file = '../data/ISL/College.csv'
# Use the first column as the index - the dataset is set up to be like this
colleges = pd.read_csv(data_file,
sep=',',
header=0,
index_col=0,
parse_dates=True,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
colleges.head()
colleges.dtypes
colleges.shape
# View a boxplot of the number of applications and the number of accepted applicants
colleges.boxplot(column=['Apps', 'Accept'],
return_type='axes',
figsize=(12,6))
# Visualize the relationship between the application and acceptance numbers in a scatterplot
colleges.plot(kind='scatter',
x='Accept',
y='Apps',
figsize=(16, 6))
# Label each point so we can see which points are the outliers
# Except for the outliers, this will be completely unreadable
# Create the plot
fig, ax = plt.subplots()
colleges.plot(kind='scatter',
x='Accept',
y='Apps',
figsize=(16, 6),
ax=ax)
# Label each of the points
for k, v in colleges.iterrows():
ax.annotate(k,(v['Accept'],v['Apps']))
# Re-draw the scatterplot
fig.canvas.draw()
Explanation: Outlier Detection
Outlier detection is used to find outliers in the data that can throw off your analysis.
Outliers come in two flavors: Univariate and Multivariate. Univariate outliers can be seen when looking at a single variable; multivariate outliers are found in multi-dimensional data.
For this recipe we are going to use the College dataset from 'An Introduction to Statistical Learning
with Applications in R'.
End of explanation
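A simple univariate rule can flag the same outliers numerically. A sketch using the interquartile range (IQR) on the Apps column, assuming the colleges DataFrame loaded above:
# Flag colleges whose application count lies far above the interquartile range
q1 = colleges['Apps'].quantile(0.25)
q3 = colleges['Apps'].quantile(0.75)
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr
colleges[colleges['Apps'] > upper_fence].sort_values('Apps', ascending=False).head()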
# Import the dataset
data_file = '../data/ISL/Heart.csv'
# Use the first column as the index - the dataset is set up to be like this
heart = pd.read_csv(data_file,
sep=',',
header=0,
index_col=0,
parse_dates=True,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
heart.head()
heart.dtypes
heart.shape
# Convert the ChestPain column to a numeric value
t2 = pd.Series({'asymptomatic' : 1,
'nonanginal' : 2,
'nontypical' : 3,
'typical': 4})
heart['ChestPain'] = heart['ChestPain'].map(t2)
heart.head()
# Convert the Thal column to a numeric value
t = pd.Series({'fixed' : 1,
'normal' : 2,
'reversible' : 3})
heart['Thal'] = heart['Thal'].map(t)
heart.head()
# Convert the AHD column to a numeric value
t = pd.Series({'No' : 0,
'Yes' : 1})
heart['AHD'] = heart['AHD'].map(t)
heart.head()
# Fill missing values in with 0
heart.fillna(0, inplace=True)
heart.head()
# What is the shape of the data?
heart.shape
# Create two matrices for our model to use
heart_data = heart.iloc[:,0:13].values
heart_targets = heart['AHD'].values
# Build the model
from sklearn import linear_model
logClassifier = linear_model.LogisticRegression(C=1, random_state=111)
# Add in cross validation for our model
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(heart_data,
heart_targets,
test_size=0.20,
random_state=111)
logClassifier.fit(X_train, y_train)
# Estimate the accuracy of the model on our dataset
# Splits the data, fits the model and computes the score 12 consecutive times with different splits each time
scores = cross_validation.cross_val_score(logClassifier, heart_data, heart_targets, cv=12)
scores
# Show the mean accuracy score and the standard deviation
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
# Run the test data
predicted = logClassifier.predict(X_test)
predicted
# Evaluate the model
from sklearn import metrics
metrics.accuracy_score(y_test, predicted)
# View the confusion matrix
# Confusion matrix - shows the predictions that the model made on the test data
# Diagonal from top-left corner to bottom-right corner is number of correct predictions for each row
# A number in a non-diagonal row is the count of errors for that row, and the column corresponds to the incorrect prediction
metrics.confusion_matrix(y_test, predicted)
Explanation: Logistic Regression
Logistic Regression is a statistical technique used to predict a binary outcome, for example purchase/no-purchase.
For this recipe we are going to use the Heart dataset from 'An Introduction to Statistical Learning with Applications in R'.
End of explanation
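Under the hood the model estimates the probability of the positive class with the logistic (sigmoid) function, $p(y=1 \mid x) = 1 / (1 + e^{-(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k)})$, and scikit-learn labels an observation 1 when that probability exceeds 0.5.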
# Import the dataset
data_file = '../data/ISL/Heart.csv'
# Use the first column as the index - the dataset is set up to be like this
heart = pd.read_csv(data_file,
sep=',',
header=0,
index_col=0,
parse_dates=True,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
heart.head()
# Convert the ChestPain column to a numeric value
t2 = pd.Series({'asymptomatic' : 1,
'nonanginal' : 2,
'nontypical' : 3,
'typical': 4})
heart['ChestPain'] = heart['ChestPain'].map(t2)
# Convert the Thal column to a numeric value
t = pd.Series({'fixed' : 1,
'normal' : 2,
'reversible' : 3})
heart['Thal'] = heart['Thal'].map(t)
# Convert the AHD column to a numeric value
t = pd.Series({'No' : 0,
'Yes' : 1})
heart['AHD'] = heart['AHD'].map(t)
# Fill missing values in with 0
heart.fillna(0, inplace=True)
heart.head()
# Import the random forest library
from sklearn.ensemble import RandomForestClassifier
# Create the random forest object which will include all the parameters
# for the fit
rfClassifier = RandomForestClassifier(n_estimators = 100)
# Fit the training data to the AHD labels and create the decision trees
rfClassifier = rfClassifier.fit(X_train, y_train)
rfClassifier
# Take the same decision trees and run it on the test data
predicted = rfClassifier.predict(X_test)
predicted
# Estimate the accuracy of the model on our dataset
scores = cross_validation.cross_val_score(rfClassifier, heart_data, heart_targets, cv=12)
scores
# Show the mean accuracy score and the standard deviation
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
# Assess the model
metrics.accuracy_score(y_test, predicted)
# Show the confusion matrix
metrics.confusion_matrix(y_test, predicted)
Explanation: Random Forest
A random forest is an ensemble (a group) of decision trees whose individual predictions are combined, by majority vote for classification, into a single prediction.
End of explanation
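A useful by-product of a fitted random forest is a score for how much each feature contributed. A short sketch, assuming the fitted rfClassifier and the heart DataFrame from above (its first 13 columns are the features):
# Pair each feature name with its importance score from the fitted forest
sorted(zip(heart.columns[0:13], rfClassifier.feature_importances_),
       key=lambda pair: pair[1],
       reverse=True)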
# Import the dataset
data_file = '../data/ISL/Heart.csv'
# Use the first column as the index - the dataset is set up to be like this
heart = pd.read_csv(data_file,
sep=',',
header=0,
index_col=0,
parse_dates=True,
tupleize_cols=False,
error_bad_lines=False,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
heart.head()
# Convert the ChestPain column to a numeric value
t2 = pd.Series({'asymptomatic' : 1,
'nonanginal' : 2,
'nontypical' : 3,
'typical': 4})
heart['ChestPain'] = heart['ChestPain'].map(t2)
# Convert the Thal column to a numeric value
t = pd.Series({'fixed' : 1,
'normal' : 2,
'reversible' : 3})
heart['Thal'] = heart['Thal'].map(t)
# Convert the AHD column to a numeric value
t = pd.Series({'No' : 0,
'Yes' : 1})
heart['AHD'] = heart['AHD'].map(t)
# Fill in missing values with 0
heart.fillna(0, inplace=True)
heart.head()
# Create an instance of a linear support vector classifier, an SVM classifier
from sklearn.svm import LinearSVC
svmClassifier = LinearSVC(random_state=111)
svmClassifier
# Train the model - the svmClassifier we created earlier - with training data
X_train, X_test, y_train, y_test = cross_validation.train_test_split(heart_data,
heart_targets,
test_size=0.20,
random_state=111)
svmClassifier.fit(X_train, y_train)
# Run the test data through our model by feeding it to the predict function of the model
predicted = svmClassifier.predict(X_test)
predicted
# Estimate the accuracy of the model on our dataset
scores = cross_validation.cross_val_score(svmClassifier, heart_data, heart_targets, cv=12)
# Show the mean accuracy score and the standard deviation
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
# Assess the model
metrics.accuracy_score(y_test, predicted)
# Show the confusion matrix
metrics.confusion_matrix(y_test, predicted)
Explanation: Support Vector Machine (SVM)
Support Vector Machines (SVM) are a group of supervised learning methods that can be applied to classification or regression.
For this recipe we are going to use the Heart dataset from 'An Introduction to Statistical Learning with Applications in R'.
End of explanation
# Import the Python libraries we need
import pickle
# Logistic Regression Model
hearts_classifier_file = "../models/hearts_lr_classifier_02.27.16.dat"
pickle.dump(logClassifier, open(hearts_classifier_file, "wb"))
# Random Forest Model
hearts_classifier_file = "../models/hearts_rf_classifier_02.27.16.dat"
pickle.dump(rfClassifier, open(hearts_classifier_file, "wb"))
# SVM Model
hearts_classifier_file = "../models/hearts_svm_classifier_02.27.16.dat"
pickle.dump(svmClassifier, open(hearts_classifier_file, "wb"))
# Reconstitute the logistic regression model as a test
model_file = "../models/hearts_lr_classifier_02.27.16.dat"  # same file we just saved above
logClassifier2 = pickle.load(open(model_file, "rb"))
print(logClassifier2)
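# A quick usage sketch: confirm the reloaded model gives the same predictions as the original
# (X_test is assumed to still be in memory from the earlier train/test split)
predicted2 = logClassifier2.predict(X_test)
(predicted2 == logClassifier.predict(X_test)).all()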
Explanation: Save the Models for Production Use
End of explanation |
6,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
DS Data manipulation, analysis and visualisation in Python
December, 2017
© 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
Step1: <big><center>To run a cell
Step2: Writing code is what you will do most during this course!
Markdown
Text cells, using Markdown syntax. With the syntax, you can make text bold or italic, amongst many other things...
list
with
items
Link to interesting resources or images
Step3: Help
Step4: <div class="alert alert-success">
<b>EXERCISE</b>
Step5: edit mode to command mode
edit mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
command mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
To start editing, click inside a cell or
<img src="../../img/enterbutton.png" alt="Key enter" style="width
Step6: %%timeit
Step7: %lsmagic
Step8: %whos
Step9: Let's get started! | Python Code:
from IPython.display import Image
Image(url='http://python.org/images/python-logo.gif')
Explanation: <p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
DS Data manipulation, analysis and visualisation in Python
December, 2017
© 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
# Code cell, then we are using python
print('Hello DS')
DS = 10
print(DS + 5) # Yes, we advise to use Python 3 (!)
Explanation: <big><center>To run a cell: push the start triangle in the menu or type SHIFT + ENTER/RETURN
Notebook cell types
We will work in Jupyter notebooks during this course. A notebook is a collection of cells, that can contain different content:
Code
End of explanation
import os
os.mkdir
my_very_long_variable_name = 3
Explanation: Writing code is what you will do most during this course!
Markdown
Text cells, using Markdown syntax. With the syntax, you can make text bold or italic, amongst many other things...
list
with
items
Link to interesting resources or images:
Blockquotes if you like them
This line is part of the same blockquote.
Mathematical formulas can also be incorporated (LaTeX it is...)
$$\frac{dBZV}{dt}=BZV_{in} - k_1 \cdot BZV$$
$$\frac{dOZ}{dt}=k_2 \cdot (OZ_{sat}-OZ) - k_1 \cdot BZV$$
Or tables:
course | points
--- | ---
Math | 8
Chemistry | 4
or tables with LaTeX:
Symbol | description
--- | ---
$BZV_{(t=0)}$ | initial biochemical oxygen demand, BZV (7.33 mg.l-1)
$OZ_{(t=0)}$ | initial dissolved oxygen, OZ (8.5 mg.l-1)
$BZV_{in}$ | BZV input (1 mg.l-1.min-1)
$OZ_{sat}$ | dissolved-oxygen saturation concentration (11 mg.l-1)
$k_1$ | bacterial degradation rate (0.3 min-1)
$k_2$ | reaeration constant (0.4 min-1)
Code can also be incorporated, but then just to illustrate:
python
BOT = 12
print(BOT)
See also: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
HTML
You can also use HTML commands, just check this cell:
<h3> html-adapted title with <h3> </h3>
<p></p>
<b> Bold text <b> </b> or <i> italic <i> </i>
Headings of different sizes: section
subsection
subsubsection
Raw Text
Notebook handling ESSENTIALS
Completion: TAB
The TAB button is essential: It provides you all possible actions you can do after loading in a library AND it is used for automatic autocompletion:
End of explanation
round(3.2)
os.mkdir
# An alternative is to put a question mark behind the command
os.mkdir?
Explanation: Help: SHIFT + TAB
The SHIFT-TAB combination is ultra essential to get information/help about the current operation
End of explanation
import glob
glob.glob??
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What happens if you put two question marks behind the command?
</div>
End of explanation
%psearch os.*dir
Explanation: edit mode to command mode
edit mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
command mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
To start editing, click inside a cell or
<img src="../../img/enterbutton.png" alt="Key enter" style="width:150px">
To stop editing,
<img src="../../img/keyescape.png" alt="Key A" style="width:150px">
new cell A-bove
<img src="../../img/keya.png" alt="Key A" style="width:150px">
Create a new cell above with the key A... when in command mode
new cell B-elow
<img src="../../img/keyb.png" alt="Key B" style="width:150px">
Create a new cell below with the key B... when in command mode
CTRL + SHIFT + P
Just do it!
Trouble...
<div class="alert alert-danger">
<b>NOTE</b>: When you're stuck, or things do crash:
<ul>
<li> first try **Kernel** > **Interrupt** -> your cell should stop running
<li> if no success -> **Kernel** > **Restart** -> restart your notebook
</ul>
</div>
Overload?!?
<img src="../../img/toomuch.jpg" alt="Key A" style="width:500px">
<br><br>
<center>No stress, just go to </center>
<br>
<center><p style="font-size: 200%;text-align: center;margin:500">Help > Keyboard shortcuts</p></center>
Stackoverflow is really, really, really nice!
http://stackoverflow.com/questions/tagged/python
Google search is with you!
<big><center>REMEMBER: To run a cell: <strike>push the start triangle in the menu or</strike> type SHIFT + ENTER
some MAGIC...
%psearch
End of explanation
%%timeit
mylist = range(1000)
for i in mylist:
i = i**2
import numpy as np
%%timeit
np.arange(1000)**2
Explanation: %%timeit
End of explanation
%lsmagic
Explanation: %lsmagic
End of explanation
%whos
Explanation: %whos
End of explanation
from IPython.display import FileLink, FileLinks
FileLinks('.', recursive=False)
Explanation: Let's get started!
End of explanation |
6,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo simulation
Please cite
Step1: Station coordinates and thresholds from a set of log files
Specify
Step2: Station coordinates from csv file
Input network title and csv file here
Step3: Setting up and checking station locations
Step4: Setting up grid
Input desired grid boundaries and interval here in meters from the center of the network (no point located over the center!)
Step5: General calculations at grid points
Set number of iterations and solution requirements here (minimum number of contributing stations, maximum reduced chi squared value)
This function will return the dimensions of the covariance ellipses for solutions at each point at 'ntsd' standard deviations in the 'evalues' array (width (m), height (m), angle) and the standard deviation of altitude solution in the 'svalues' array (m)
If a source is not sampled by enough stations for a solution a RuntimeWarning will be generated, but this will not negatively impact the following calculations
Step6: Detection efficiency
Step7: Plotting horizontal errors by ellipse over detection efficiency
Step8: Plotting horizontal errors by ellipse over standard deviation of altitude solutions | Python Code:
%pylab inline
import pyproj as proj4
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import datetime
# import read_logs
from mpl_toolkits.basemap import Basemap
from coordinateSystems import TangentPlaneCartesianSystem, GeographicSystem, MapProjection
import scipy.stats as st
from mpl_toolkits.basemap import Basemap
from matplotlib.patches import Ellipse
import parsed_functions as pf
import simulation_ellipse as se
sq = np.load('source_quantiles', fix_imports=True, encoding='latin1') # in Watts
fde = 100-np.load('fde.csv', fix_imports=True, encoding='latin1') # Corresponding flash DE
c0 = 3.0e8 # m/s
dt_rms = 23.e-9 # seconds
lma_digitizer_window = 40.0e-9 # seconds per sample
Explanation: Monte Carlo simulation
Please cite: V. C. Chmielewski and E. C. Bruning (2016), Lightning Mapping Array flash detection performance with variable receiver thresholds, J. Geophys. Res. Atmos., 121, 8600-8614, doi:10.1002/2016JD025159
If any results from this model are presented.
Contact:
[email protected]
End of explanation
# import os
# # start_time = datetime.datetime(2014,5,26,2) #25 set
# # end_time = datetime.datetime(2014,5,26,3,50)
# useddir = '/Users/Vanna/Documents/logs/'
# exclude = np.array(['W','A',])
# days = np.array([start_time+datetime.timedelta(days=i) for i in range((end_time-start_time).days+1)])
# days_string = np.array([i.strftime("%y%m%d") for i in days])
# logs = pd.DataFrame()
# dir = os.listdir(useddir)
# for file in dir:
# if np.any(file[2:] == days_string) & np.all(exclude!=file[1]):
# print file
# logs = logs.combine_first(read_logs.parsing(useddir+file,T_set='True'))
# aves = logs[start_time:end_time].mean()
# aves = np.array(aves).reshape(4,len(aves)/4).T
Explanation: Station coordinates and thresholds from a set of log files
Specify:
start time
end time
the directory holding the log files
any stations you wish to exclude from the analysis
End of explanation
Network = 'grid_LMA' # name of network in the csv file
stations = pd.read_csv('network.csv') # network csv file with one or multiple networks
stations.set_index('network').loc[Network]
aves = np.array(stations.set_index('network').loc[Network])[:,:-1].astype('float')
Explanation: Station coordinates from csv file
Input network title and csv file here
End of explanation
center = (np.mean(aves[:,1]), np.mean(aves[:,2]), np.mean(aves[:,0]))
geo = GeographicSystem()
tanp = TangentPlaneCartesianSystem(center[0], center[1], center[2])
mapp = MapProjection
projl = MapProjection(projection='laea', lat_0=center[0], lon_0=center[1])
alt, lat, lon = aves[:,:3].T
stations_ecef = np.array(geo.toECEF(lon, lat, alt)).T
stations_local = tanp.toLocal(stations_ecef.T).T
center_ecef = np.array(geo.toECEF(center[1],center[0],center[2]))
ordered_threshs = aves[:,-1]
plt.scatter(stations_local[:,0]/1000., stations_local[:,1]/1000., c=aves[:,3])
plt.colorbar()
circle=plt.Circle((0,0),30,color='k',fill=False)
# plt.xlim(-80,80)
# plt.ylim(-80,80)
# fig = plt.gcf()
# fig.gca().add_artist(circle)
plt.show()
Explanation: Setting up and checking station locations
End of explanation
xmin, xmax, xint = -300001, 299999, 20000
ymin, ymax, yint = -300001, 299999, 20000
# alts = np.arange(500,20500,500.)
alts = np.array([7000])
initial_points = np.array(np.meshgrid(np.arange(xmin,xmax+xint,xint),
np.arange(ymin,ymax+yint,yint), alts))
x,y,z=initial_points.reshape((3,int(np.size(initial_points)/3)))
points2 = tanp.toLocal(np.array(projl.toECEF(x,y,z))).T
tanp_all = []
for i in range(len(aves[:,0])):
tanp_all = tanp_all + [TangentPlaneCartesianSystem(aves[i,1],aves[i,2],aves[i,0])]
Explanation: Setting up grid
Input desired grid boundaries and interval here in meters from the center of the network (no point located over the center!)
End of explanation
iterations=500
evalues = np.zeros((np.shape(points2)[0],3))
svalues = np.zeros((np.shape(points2)[0],1))
# # for r,theta,z errors and standard deviations and overall detection efficiency
for i in range(len(x)):
evalues[i],svalues[i]= se.black_boxtesting(points2[i,0], points2[i,1], points2[i,2], iterations,
stations_local,ordered_threshs,stations_ecef,center_ecef,
tanp_all,
c0,dt_rms,tanp,projl,
chi2_filter=5.,min_stations=6,ntsd=3
)
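# A quick summary sketch of the simulated errors, skipping grid points with no valid solution
# (svalues of 0 mark points that never met the station/chi2 requirements, as in the plots below)
valid = svalues[:, 0] > 0
print('Grid points with solutions: %d of %d' % (valid.sum(), len(svalues)))
print('Median error ellipse (width x height, m): %.0f x %.0f'
      % (np.median(evalues[valid, 0]), np.median(evalues[valid, 1])))
print('Median altitude standard deviation (m): %.0f' % np.median(svalues[valid, 0]))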
Explanation: General calculations at grid points
Set number of iterations and solution requirements here (minimum number of contributing stations, maximum reduced chi squared value)
This function will return the dimensions of the covariance ellipses for solutions at each point at 'ntsd' standard deviations in the 'evalues' array (width (m), height (m), angle) and the standard deviation of altitude solution in the 'svalues' array (m)
If a source is not sampled by enough stations for a solution a RuntimeWarning will be generated, but this will not negatively impact the following calculations
End of explanation
# Currently hard-coded to calculate over a 300 x 300 km grid around the network
latp, lonp, sde, fde_a, minp = pf.quick_method(
# input array must be in N x (lat, lon, alt, threshold)
np.array([aves[:,1],aves[:,2],aves[:,0],aves[:,3]]).transpose(),
sq, fde,
xint=5000, # Grid spacing
altitude=7000, # Altitude of grid MSL
station_requirement=6, # Minimum number of stations required to trigger
mindist = 300000 # Grid ends 300 km from the most distant station in each direction
)
Explanation: Detection efficiency
End of explanation
domain = (xmax-xint/2.)
maps = Basemap(projection='laea',lat_0=center[0],lon_0=center[1],width=domain*2,height=domain*2)
ax = plt.subplot(111)
x, y = maps(lonp, latp)
# s = plt.pcolormesh(x,y,np.ma.masked_where(sde==0,sde),cmap = 'magma') # Source detection efficiency
s = plt.pcolormesh(x,y,np.ma.masked_where(fde_a==0,fde_a),cmap = 'magma') # Flash detection efficiency
plt.colorbar(label='Flash Detection Efficiency (%)')
s.set_clim(vmin=0,vmax=100)
for i in range(len(evalues[:,0])):
ell = Ellipse(xy=(points2[i,0]+domain, points2[i,1]+domain),
width=evalues[i,0], height=evalues[i,1],
angle=evalues[i,2], color='black')
ell.set_facecolor('none')
ax.add_artist(ell)
plt.scatter(stations_local[:,0]+domain, stations_local[:,1]+domain, color='m', s=2)
maps.drawstates()
plt.tight_layout()
plt.show()
Explanation: Plotting horizontal errors by ellipse over detection efficiency
End of explanation
domain = (xmax-xint/2.)
maps = Basemap(projection='laea',lat_0=center[0],lon_0=center[1],width=domain*2,height=domain*2)
ax = plt.subplot(111)
s = plt.pcolormesh(np.arange(-xmax-xint/2.,xmax+3*xint/2.,xint)+domain,
np.arange(-xmax-xint/2.,xmax+3*xint/2.,xint)+domain,
np.ma.masked_where(svalues==0,svalues).reshape((31,31)),
cmap = 'viridis_r')
s.set_clim(vmin=0,vmax=5000)
plt.colorbar(label = 'Altitude standard deviation (m)')
for i in range(len(evalues[:,0])):
ell = Ellipse(xy=(points2[i,0]+domain, points2[i,1]+domain),
width=evalues[i,0], height=evalues[i,1],
angle=evalues[i,2], color='black')
ell.set_facecolor('none')
ax.add_artist(ell)
plt.scatter(stations_local[:,0]+domain, stations_local[:,1]+domain, color='m', s=2)
maps.drawstates()
plt.tight_layout()
plt.show()
Explanation: Plotting horizontal errors by ellipse over standard deviation of altitude solutions
End of explanation |
6,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
proper reading of biom table (output
Step1: biom table
Step2: mapping file
Step3: add two ratio variables of Vitamin D | Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
Explanation: proper reading of biom table (output: biomtable.txt)
proper distinction between categorical and continuous variables in mapping file
(output: mapping_cleaned_MrOS.txt)
End of explanation
# convert biom table to tab delimited file in bash with 'taxonomy' information remained
# biom convert -i reference-hit.tax.biom -o table.from_biom.txt --to-tsv --header-key 'taxonomy'
bt = pd.read_csv('../../data/table.from_biom.txt', sep='\t', index_col='#OTU ID', skiprows=1)
print(bt.shape)
bt.head()
print(bt.taxonomy.str.len().min())
print(bt.taxonomy.str.len().max())
bt.to_csv('../data/biomtable.txt', sep='\t')
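# A quick sketch: the taxonomy column can also be split into its levels for a sanity check
# (assuming QIIME/Greengenes-style semicolon-separated strings, e.g. 'k__Bacteria; p__...')
bt.taxonomy.str.split(';').str.len().value_counts()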
Explanation: biom table
End of explanation
mf = pd.read_csv('../data/mapping_MrOS.txt', sep='\t', dtype=str, index_col='#SampleID')
print(mf.shape)
mf.head()
vars_cat = np.array(['BarcodeSequence', 'LinkerPrimerSequence', 'Experiment_Design_Description',
'Library_Construction_Protocol', 'Linker', 'Platform', 'Center_Name', 'Center_Project', 'Instrument_Model',
'Title', 'Anonymized_Name', 'Scientific_Name', 'Taxon_ID', 'Sample_Type', 'Geo_Loc_Name', 'Elevation', 'Env_Biome',
'Env_Feature', 'Env_Material', 'Env_Package', 'Collection_Timestamp', 'DNA_Extracted', 'Physical_Specimen_Location',
'Physical_Specimen_Remaining', 'Age_Units', 'Host_Subject_ID', 'Host_Taxid','Host_Scientific_Name', 'Host_Common_Name',
'Life_Stage', 'Sex', 'Height_Units', 'Weight_Units', 'Body_Habitat', 'Body_Site', 'Body_Product', 'GIERACE', 'SITE',
'TUDRAMT', 'TURSMOKE', 'M1ADEPR', 'M1VITMND', 'M1ANTIB', 'M1PROBI', 'OHSEAS', 'VDstatus', 'Description',
'OHV1D2CT', 'OHVD2CT'])
vars_cts = np.array(['Latitude', 'Longitude', 'Age', 'Height', 'Weight', 'BMI', 'PASCORE', 'DTVITD',
'OHV1D3', 'OHV24D3', 'OHVD3', 'OHVD2', 'OHV1D2', 'OHVDTOT', 'OHV1DTOT'])
# convert vars_cts to numeric and vars_cat to factors
df = mf.copy()
df[vars_cts] = df[vars_cts].apply(pd.to_numeric, errors='coerce')
df[vars_cat] = df[vars_cat].apply(lambda x: x.astype('category'))
# convert all pg/ml to ng/ml note: 1 ng/ml = 1000 pg/ml
df.OHV1D3 = df.OHV1D3/1000
df.OHV1D2 = df.OHV1D2/1000
df.OHV1DTOT = df.OHV1DTOT/1000
#df.M1ANTIB.value_counts()
Explanation: mapping file
End of explanation
# df['ratio_activation'] = df.OHV1D3/(df.OHVD3*1000) # pg/ml vs. ng/ml
# df['ratio_catabolism'] = df.OHV24D3/df.OHVD3 # both ng/ml
df['ratio_activation'] = df.OHV1D3/df.OHVD3
df['ratio_catabolism'] = df.OHV24D3/df.OHVD3
vars_cts = np.append(vars_cts, ['ratio_activation', 'ratio_catabolism'])
df[vars_cts].describe()
df[vars_cat].describe()
df[vars_cts].isnull().sum()
# for i in range(len(vars_cat)):
# print(df[vars_cat[i]].value_counts())
# check
print(mf.shape)
print(df.shape)
df.to_csv('../data/mapping_cleaned_MrOS.txt', sep= '\t', index=True)
Explanation: add two ratio variables of Vitamin D
End of explanation |
6,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General Concepts
Step1: Let's get started with some basic imports
Step2: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore.
PHOEBE 2 uses constants defined in the IAU 2015 Resolution which conflict with the constants defined in astropy. As a result, you'll see the warnings as phoebe.u and phoebe.c "hijacks" the values in astropy.units and astropy.constants.
Whenever providing units, please make sure to use phoebe.u instead of astropy.units, otherwise the conversions may be inconsistent.
Logger
Before starting any script, it is a good habit to initialize a logger and define which levels of information you want printed to the command line (clevel) and dumped to a file (flevel). A convenience function is provided at the top-level via phoebe.logger to initialize the logger with any desired level.
The levels from most to least information are
Step3: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel.
So with this logger, anything with INFO, WARNING, ERROR, or CRITICAL levels will be printed to the screen. All messages of any level will be written to a file named 'tutorial.log' in the current directory.
Note
Step4: This object holds all the parameters and their respective values. We'll see in this tutorial and the next tutorial on constraints how to search through these parameters and set their values.
Step5: Next, we need to define our datasets via b.add_dataset. This will be the topic of the following tutorial on datasets.
Step6: We'll then want to run our forward model to create a synthetic model of the observables defined by these datasets using b.run_compute, which will be the topic of the computing observables tutorial.
Step7: We can access the value of any parameter, including the arrays in the synthetic model just generated. To export arrays to a file, we could call b.export_arrays
Step8: We can then plot the resulting model with b.plot, which will be covered in the plotting tutorial.
Step9: And then lastly, if we wanted to solve the inverse problem and "fit" parameters to observational data, we may want to add distributions to our system so that we can run estimators, optimizers, or samplers.
Default Binary Bundle
For this tutorial, let's start over and discuss this b object in more detail and how to access and change the values of the input parameters.
Everything for our system will be stored in this single Python object that we call the Bundle which we'll call b (short for bundle).
Step10: The Bundle is just a collection of Parameter objects along with some callable methods. Here we can see that the default binary Bundle consists of over 100 individual parameters.
Step11: If we want to view or edit a Parameter in the Bundle, we first need to know how to access it. Each Parameter object has a number of tags which can be used to filter (similar to a database query). When filtering the Bundle, a ParameterSet is returned - this is essentially just a subset of the Parameters in the Bundle and can be further filtered until eventually accessing a single Parameter.
Step12: Here we filtered on the context tag for all Parameters with context='compute' (i.e. the options for computing a model). If we want to see all the available options for this tag in the Bundle, we can use the plural form of the tag as a property on the Bundle or any ParameterSet.
Step13: Although there is no strict hierarchy or order to the tags, it can be helpful to think of the context tag as the top-level tag and is often very helpful to filter by the appropriate context first.
Other tags currently include
Step14: This then tells us what can be used to filter further.
Step15: The qualifier tag is the shorthand name of the Parameter itself. If you don't know what you're looking for, it is often useful to list all the qualifiers of the Bundle or a given ParameterSet.
Step16: Now that we know the options for the qualifier within this filter, we can choose to filter on one of those. Let's filter by the 'ntriangles' qualifier.
Step17: Once we filter far enough to get to a single Parameter, we can use get_parameter to return the Parameter object itself (instead of a ParameterSet).
Step18: As a shortcut, get_parameter also takes filtering keywords. So the above line is also equivalent to the following
Step19: Each Parameter object contains several keys that provide information about that Parameter. The keys "description" and "value" are always included, with additional keys available depending on the type of Parameter.
Step20: We can also see a top-level view of the filtered parameters and descriptions (note
Step21: Since the Parameter for ntriangles is a FloatParameter, it also includes a key for the allowable limits.
Step22: In this case, we're looking at the Parameter called ntriangles with the component tag set to 'primary'. This Parameter therefore defines how many triangles should be created when creating the mesh for the star named 'primary'. By default, this is set to 1500 triangles, with allowable values above 100.
If we wanted a finer mesh, we could change the value.
Step23: If we choose the distortion_method qualifier from that same ParameterSet, we'll see that it has a few different keys in addition to description and value.
Step24: Since the distortion_method Parameter is a ChoiceParameter, it contains a key for the allowable choices.
Step25: We can only set a value if it is contained within this list - if you attempt to set a non-valid value, an error will be raised.
Step26: Parameter types include
Step27: However, this dictionary-style twig access will never return a ParameterSet with a single Parameter, instead it will return the Parameter itself. This can be seen in the different output between the following two lines
Step28: Because of this, this dictionary-style twig access can also set the value directly
Step29: And can even provide direct access to the keys/attributes of the Parameter (value, description, limits, etc)
Step30: As with the tags, you can call .twigs on any ParameterSet to see the "smallest unique twigs" of the contained Parameters | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: General Concepts: The PHOEBE Bundle
HOW TO RUN THIS FILE: if you're running this in a Jupyter notebook or Google Colab session, you can click on a cell and then shift+Enter to run the cell and automatically select the next cell. Alt+Enter will run a cell and create a new cell below it. Ctrl+Enter will run a cell but keep it selected. To restart from scratch, restart the kernel/runtime.
All of these tutorials assume basic comfort with Python in general - particularly with the concepts of lists, dictionaries, and objects as well as basic comfort with using the numpy and matplotlib packages. This tutorial introduces all the general concepts of accessing parameters within the Bundle.
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
Explanation: Let's get started with some basic imports:
End of explanation
logger = phoebe.logger(clevel='WARNING')
Explanation: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore.
PHOEBE 2 uses constants defined in the IAU 2015 Resolution which conflict with the constants defined in astropy. As a result, you'll see the warnings as phoebe.u and phoebe.c "hijacks" the values in astropy.units and astropy.constants.
Whenever providing units, please make sure to use phoebe.u instead of astropy.units, otherwise the conversions may be inconsistent.
Logger
Before starting any script, it is a good habit to initialize a logger and define which levels of information you want printed to the command line (clevel) and dumped to a file (flevel). A convenience function is provided at the top-level via phoebe.logger to initialize the logger with any desired level.
The levels from most to least information are:
DEBUG
INFO
WARNING
ERROR
CRITICAL
End of explanation
b = phoebe.default_binary()
Explanation: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel.
So with this logger, anything with INFO, WARNING, ERROR, or CRITICAL levels will be printed to the screen. All messages of any level will be written to a file named 'tutorial.log' in the current directory.
Note: the logger messages are not included in the outputs shown below.
Overview
As a quick overview of what's to come, here is a quick preview of some of the steps used when modeling a binary system with PHOEBE. Each of these steps will be explained in more detail throughout these tutorials.
First we need to create our binary system. For the sake of most of these tutorials, we'll use the default detached binary available through the phoebe.default_binary constructor.
End of explanation
b.set_value(qualifier='teff', component='primary', value=6500)
Explanation: This object holds all the parameters and their respective values. We'll see in this tutorial and the next tutorial on constraints how to search through these parameters and set their values.
End of explanation
b.add_dataset('lc', compute_times=phoebe.linspace(0,1,101))
Explanation: Next, we need to define our datasets via b.add_dataset. This will be the topic of the following tutorial on datasets.
End of explanation
b.run_compute()
Explanation: We'll then want to run our forward model to create a synthetic model of the observables defined by these datasets using b.run_compute, which will be the topic of the computing observables tutorial.
End of explanation
print(b.get_value(qualifier='fluxes', context='model'))
Explanation: We can access the value of any parameter, including the arrays in the synthetic model just generated. To export arrays to a file, we could call b.export_arrays
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: We can then plot the resulting model with b.plot, which will be covered in the plotting tutorial.
End of explanation
b = phoebe.default_binary()
Explanation: And then lastly, if we wanted to solve the inverse problem and "fit" parameters to observational data, we may want to add distributions to our system so that we can run estimators, optimizers, or samplers.
Default Binary Bundle
For this tutorial, let's start over and discuss this b object in more detail and how to access and change the values of the input parameters.
Everything for our system will be stored in this single Python object that we call the Bundle which we'll call b (short for bundle).
End of explanation
b
Explanation: The Bundle is just a collection of Parameter objects along with some callable methods. Here we can see that the default binary Bundle consists of over 100 individual parameters.
End of explanation
b.filter(context='compute')
Explanation: If we want to view or edit a Parameter in the Bundle, we first need to know how to access it. Each Parameter object has a number of tags which can be used to filter (similar to a database query). When filtering the Bundle, a ParameterSet is returned - this is essentially just a subset of the Parameters in the Bundle and can be further filtered until eventually accessing a single Parameter.
End of explanation
b.contexts
Explanation: Here we filtered on the context tag for all Parameters with context='compute' (i.e. the options for computing a model). If we want to see all the available options for this tag in the Bundle, we can use the plural form of the tag as a property on the Bundle or any ParameterSet.
End of explanation
b.filter(context='compute').components
Explanation: Although there is no strict hierarchy or order to the tags, it can be helpful to think of the context tag as the top-level tag and is often very helpful to filter by the appropriate context first.
Other tags currently include:
* kind
* figure
* component
* feature
* dataset
* distribution
* compute
* model
* solver
* solution
* time
* qualifier
Accessing the plural form of the tag as an attribute also works on a filtered ParameterSet
End of explanation
b.filter(context='compute').filter(component='primary')
Explanation: This then tells us what can be used to filter further.
End of explanation
b.filter(context='compute', component='primary').qualifiers
Explanation: The qualifier tag is the shorthand name of the Parameter itself. If you don't know what you're looking for, it is often useful to list all the qualifiers of the Bundle or a given ParameterSet.
End of explanation
b.filter(context='compute', component='primary', qualifier='ntriangles')
Explanation: Now that we know the options for the qualifier within this filter, we can choose to filter on one of those. Let's filter by the 'ntriangles' qualifier.
End of explanation
b.filter(context='compute', component='primary', qualifier='ntriangles').get_parameter()
Explanation: Once we filter far enough to get to a single Parameter, we can use get_parameter to return the Parameter object itself (instead of a ParameterSet).
End of explanation
b.get_parameter(context='compute', component='primary', qualifier='ntriangles')
Explanation: As a shortcut, get_parameter also takes filtering keywords. So the above line is also equivalent to the following:
End of explanation
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_value()
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_description()
Explanation: Each Parameter object contains several keys that provide information about that Parameter. The keys "description" and "value" are always included, with additional keys available depending on the type of Parameter.
End of explanation
print(b.filter(context='compute', component='primary').info)
Explanation: We can also see a top-level view of the filtered parameters and descriptions (note: the syntax with @ symbols will be explained further in the section on twigs below).
End of explanation
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_limits()
Explanation: Since the Parameter for ntriangles is a FloatParameter, it also includes a key for the allowable limits.
End of explanation
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').set_value(2000)
b.get_parameter(context='compute', component='primary', qualifier='ntriangles')
Explanation: In this case, we're looking at the Parameter called ntriangles with the component tag set to 'primary'. This Parameter therefore defines how many triangles should be created when creating the mesh for the star named 'primary'. By default, this is set to 1500 triangles, with allowable values above 100.
If we wanted a finer mesh, we could change the value.
End of explanation
b.get_parameter(context='compute', component='primary', qualifier='distortion_method')
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_value()
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_description()
Explanation: If we choose the distortion_method qualifier from that same ParameterSet, we'll see that it has a few different keys in addition to description and value.
End of explanation
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_choices()
Explanation: Since the distortion_method Parameter is a ChoiceParameter, it contains a key for the allowable choices.
End of explanation
try:
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').set_value('blah')
except Exception as e:
print(e)
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').set_value('rotstar')
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_value()
Explanation: We can only set a value if it is contained within this list - if you attempt to set a non-valid value, an error will be raised.
End of explanation
b.filter(context='compute', component='primary')
b['primary@compute']
b['compute@primary']
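# A quick sketch: one way to see which Parameter subclass a given parameter is; the class name
# corresponds to the parameter types listed below (FloatParameter, ChoiceParameter, etc.)
print(type(b.get_parameter(context='compute', component='primary', qualifier='ntriangles')).__name__)
print(type(b.get_parameter(context='compute', component='primary', qualifier='distortion_method')).__name__)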
Explanation: Parameter types include:
* IntParameter
* FloatParameter
* FloatArrayParameter
* BoolParameter
* StringParameter
* ChoiceParameter
* SelectParameter
* DictParameter
* ConstraintParameter
* DistributionParameter
* HierarchyParameter
* UnitParameter
* JobParameter
these Parameter types and their available options are all described in great detail in Advanced: Parameter Types
Twigs
As a shortcut to needing to filter by all these tags, the Bundle and ParameterSets can be filtered through what we call "twigs" (as in a Bundle of twigs). These are essentially a single string-representation of the tags, separated by @ symbols.
This is very useful as a shorthand when working in an interactive Python console, but somewhat obfuscates the names of the tags and can make it difficult if you use them in a script and make changes earlier in the script.
For example, the following lines give identical results:
End of explanation
b.filter(context='compute', component='primary', qualifier='distortion_method')
b['distortion_method@primary@compute']
Explanation: However, this dictionary-style twig access will never return a ParameterSet with a single Parameter; instead it will return the Parameter itself. This can be seen in the different output between the following two lines:
End of explanation
b['distortion_method@primary@compute'] = 'roche'
print(b['distortion_method@primary@compute'])
Explanation: Because of this, this dictionary-style twig access can also set the value directly:
End of explanation
print(b['value@distortion_method@primary@compute'])
print(b['description@distortion_method@primary@compute'])
Explanation: And can even provide direct access to the keys/attributes of the Parameter (value, description, limits, etc)
End of explanation
b['compute'].twigs
Explanation: As with the tags, you can call .twigs on any ParameterSet to see the "smallest unique twigs" of the contained Parameters
End of explanation |
6,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solving the differential equations
Step1: Solving the two differential equations given
Step3: To solve these, I first define a derivative function
Step5: Then I use odeint to solve the equations | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
Explanation: Solving the differential equations
End of explanation
gamma = 4.4983169634398596e-06
Explanation: Solving the two differential equations given:
$$ \ddot{\mathbf{r}} = -\gamma \left\{ \frac{M}{r^3}\mathbf{r} -\frac{S}{\rho^3}\boldsymbol{\rho} + \frac{S}{R^3}\boldsymbol\Re \right\} $$
$$ \ddot{\boldsymbol\Re} = -\gamma \frac{M+S}{R^3}\boldsymbol\Re$$
$\gamma$ is the Gravitational constant.
$M$ is the central mass of the main galaxy and $S$ is the central mass of the disrupting galaxy
$\mathbf{r}$ is the radius vector from mass $M$ to massless point particle $m$, representing a single (massless) star in the outer disk of the main galaxy.
$\boldsymbol\Re$ is the radius vector from $M$ to $S$
$\boldsymbol{\rho} = \boldsymbol{\Re} - \boldsymbol{r}$
End of explanation
def derivs(solarray, t, M, S):
"""Computes the derivatives of the equations dictating the behavior of the stars orbiting galaxy M and the
disrupting galaxy, S
Parameters
--------------
solarray : solution array for the differential equations
t : array of time values
M : central mass of main galaxy
S : central mass of disrupting galaxy
Returns
--------------
derivarray : an array of the velocities and accelerations of galaxy S and stars, m
"""
derivarray = np.zeros(len(solarray))
R_x = solarray[0]
R_y = solarray[1]
R = np.sqrt(solarray[0]**2+solarray[1]**2)
vR_x = solarray[2]
vR_y = solarray[3]
dR_x = vR_x
dR_y = vR_y
dvR_x = ((-gamma*(M+S)*R_x)/R**3)
dvR_y = ((-gamma*(M+S)*R_y)/R**3)
derivarray[0] = dR_x
derivarray[1] = dR_y
derivarray[2] = dvR_x
derivarray[3] = dvR_y
for n in range(1,int(len(solarray)/4)):
r_x = solarray[4*n]
r_y = solarray[4*n+1]
r = np.sqrt(r_x**2+r_y**2)
vr_x = solarray[4*n+2]
vr_y = solarray[4*n+3]
p_x = R_x - r_x
p_y = R_y - r_y
p = np.sqrt(p_x**2+p_y**2)
dr_x = vr_x
dr_y = vr_y
dvr_x = -gamma*((M/r**3)*r_x-(S/p**3)*p_x+(S/R**3)*R_x)
dvr_y = -gamma*((M/r**3)*r_y-(S/p**3)*p_y+(S/R**3)*R_y)
derivarray[4*n] = dr_x
derivarray[4*n+1] = dr_y
derivarray[4*n+2] = dvr_x
derivarray[4*n+3] = dvr_y
return derivarray
a = derivs([-24,70,200,-200,-3.5,3.5,-200,-200,-3.5,-3.5,200,-200],1,1e11,1e11)
assert(a.shape==(12,))
assert(a.ndim==1)
Explanation: To solve these, I first define a derivative function:
End of explanation
def equationsolver(ic,max_time,time_step,M,S):
"""Solves the differential equations using odeint and the derivs function defined above
Parameters
-------------
ic : initial conditions
max_time : maximum time to be used for time array
time_step : number of samples in the time array (passed to np.linspace)
M : central mass of main galaxy
S : central mass of disrupting galaxy
Returns
------------
sol : solution array for the differential equations
"""
t = np.linspace(0,max_time,time_step)
sol = odeint(derivs, ic, t, args=(M,S),atol=1e-3,rtol=1e-3)
return sol
b = equationsolver([-24,70,200,-200,-3.5,3.5,-200,-200,-3.5,-3.5,200,-200],1,100,1e11,1e11)
assert(b.shape==(100,12))
assert(b.ndim==2)
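# A quick sketch to visualise the returned solution array
# (columns 0-3 are the disrupting galaxy S: x, y, vx, vy; each later group of 4 is one star)
plt.plot(b[:, 0], b[:, 1], 'k--', label='galaxy S')
for n in range(1, b.shape[1] // 4):
    plt.plot(b[:, 4*n], b[:, 4*n+1])
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()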
Explanation: Then I use odeint to solve the equations
End of explanation |
6,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Distributed Estimation
This notebook goes through a couple of examples to show how to use distributed_estimation. We import the DistributedModel class and make the exog and endog generators.
Step3: Next we generate some random data to serve as an example.
Step4: This is the most basic fit, showing all of the defaults, which are to use OLS as the model class, and the debiasing procedure.
Step5: Then we run through a slightly more complicated example which uses the GLM model class.
Step6: We can also change the estimation_method and the join_method. The example below shows how this works for the standard OLS case. Here we use a naive averaging approach instead of the debiasing procedure.
Step7: Finally, we can also change the results_class used. The following example shows how this works for a simple case with an unregularized model and naive averaging. |
import numpy as np
from scipy.stats.distributions import norm
from statsmodels.base.distributed_estimation import DistributedModel
def _exog_gen(exog, partitions):
"""partitions exog data"""
n_exog = exog.shape[0]
n_part = np.ceil(n_exog / partitions)
ii = 0
while ii < n_exog:
jj = int(min(ii + n_part, n_exog))
yield exog[ii:jj, :]
ii += int(n_part)
def _endog_gen(endog, partitions):
"""partitions endog data"""
n_endog = endog.shape[0]
n_part = np.ceil(n_endog / partitions)
ii = 0
while ii < n_endog:
jj = int(min(ii + n_part, n_endog))
yield endog[ii:jj]
ii += int(n_part)
Explanation: Distributed Estimation
This notebook goes through a couple of examples to show how to use distributed_estimation. We import the DistributedModel class and make the exog and endog generators.
End of explanation
X = np.random.normal(size=(1000, 25))
beta = np.random.normal(size=25)
beta *= np.random.randint(0, 2, size=25)
y = norm.rvs(loc=X.dot(beta))
m = 5
Explanation: Next we generate some random data to serve as an example.
End of explanation
debiased_OLS_mod = DistributedModel(m)
debiased_OLS_fit = debiased_OLS_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
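# A quick sketch: the fit returns a results object whose params hold the debiased coefficient
# estimates, which can be compared against the true sparse beta used to generate the data
print(debiased_OLS_fit.params[:5])
print(beta[:5])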
Explanation: This is the most basic fit, showing all of the defaults, which are to use OLS as the model class, and the debiasing procedure.
End of explanation
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.genmod.families import Gaussian
debiased_GLM_mod = DistributedModel(
m, model_class=GLM, init_kwds={"family": Gaussian()}
)
debiased_GLM_fit = debiased_GLM_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
Explanation: Then we run through a slightly more complicated example which uses the GLM model class.
End of explanation
from statsmodels.base.distributed_estimation import _est_regularized_naive, _join_naive
naive_OLS_reg_mod = DistributedModel(
m, estimation_method=_est_regularized_naive, join_method=_join_naive
)
naive_OLS_reg_params = naive_OLS_reg_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
Explanation: We can also change the estimation_method and the join_method. The example below shows how this works for the standard OLS case. Here we use a naive averaging approach instead of the debiasing procedure.
End of explanation
from statsmodels.base.distributed_estimation import (
_est_unregularized_naive,
DistributedResults,
)
naive_OLS_unreg_mod = DistributedModel(
m,
estimation_method=_est_unregularized_naive,
join_method=_join_naive,
results_class=DistributedResults,
)
naive_OLS_unreg_params = naive_OLS_unreg_mod.fit(
zip(_endog_gen(y, m), _exog_gen(X, m)), fit_kwds={"alpha": 0.2}
)
Explanation: Finally, we can also change the results_class used. The following example shows how this works for a simple case with an unregularized model and naive averaging.
End of explanation |
6,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
7. Observing Systems
Previous
Step1: Import section specific modules | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
7. Observing Systems
Previous: 7.7 Propagation Effects
Next: 7.x Further Reading and References
Import standard modules:
End of explanation
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation |
6,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
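A STRING property is filled in the same way; the grid name below is simply one of the examples quoted in the description, not a recommendation.
# Hypothetical example of a resolution name
DOC.set_value("ORCA025")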
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
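For a property with cardinality 0.N, the notebook convention above suggests one DOC.set_value call per selected item (an assumption about the DOC API; check your notebook version). The calls below are an illustrative sketch using entries from the listed valid choices, not values from any particular model.
# Illustrative only -- repeat for each parameter that applies to your model
DOC.set_value("Ice strength (P*) in units of N m{-2}")
DOC.set_value("Snow conductivity (ks) in units of W m{-1} K{-1} ")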
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration, and provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
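The expected format is easiest to see with a sketch; the budget-term names below are placeholders, not CMIP6 variable names.
# Illustrative format only -- 'Conserved property, variable1, variable2, ...'
DOC.set_value("Energy, budget_term_1, budget_term_2")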
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model thermodynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model dynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an ITD is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
6,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABC calibration of $I_\text{to}$ in standardised model to unified dataset.
Step1: Initial set-up
Load experiments used for unified dataset calibration
Step2: Plot steady-state and tau functions of original model
Step3: Combine model and experiments to produce
Step4: Set up prior ranges for each parameter in the model.
See the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space.
Step5: Run ABC calibration
Step6: Analysis of results | Python Code:
import os, tempfile
import logging
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from ionchannelABC import theoretical_population_size
from ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor
from ionchannelABC.experiment import setup
from ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom
import myokit
from pyabc import Distribution, RV, History, ABCSMC
from pyabc.epsilon import MedianEpsilon
from pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler
from pyabc.populationstrategy import ConstantPopulationSize
Explanation: ABC calibration of $I_\text{to}$ in standardised model to unified dataset.
End of explanation
from experiments.ito_wang import wang_act, wang_inact
from experiments.ito_courtemanche import (courtemanche_kin,
courtemanche_rec,
courtemanche_deact,
courtemanche_act_kin,
courtemanche_inact_kin)
modelfile = 'models/standardised_ito.mmt'
Explanation: Initial set-up
Load experiments used for unified dataset calibration:
- Steady-state activation [Wang1993]
- Activation time constant [Courtemanche1998]
- Deactivation time constant [Courtemanche1998]
- Steady-state inactivation [Wang1993]
- Inactivation time constant [Courtemanche1998]
- Recovery time constant [Courtemanche1998]
End of explanation
from ionchannelABC.visualization import plot_variables
sns.set_context('poster')
V = np.arange(-80, 40, 0.01)
sta_par_map = {'ri': 'ito.r_ss',
'si': 'ito.s_ss',
'rt': 'ito.tau_r',
'st': 'ito.tau_s'}
f, ax = plot_variables(V, sta_par_map, 'models/standardised_ito.mmt', figshape=(2,2))
Explanation: Plot steady-state and tau functions of original model
End of explanation
observations, model, summary_statistics = setup(modelfile,
wang_act,
wang_inact,
courtemanche_kin,
courtemanche_deact,
courtemanche_rec)
assert len(observations)==len(summary_statistics(model({})))
Explanation: Combine model and experiments to produce:
- observations dataframe
- model function to run experiments and return traces
- summary statistics function to accept traces
End of explanation
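An optional sanity check (illustrative, not part of the original workflow) is to look at the combined observations table before calibrating; it should contain one row per data point with its experiment id and variance.
# Peek at the unified dataset assembled by setup()
print(observations.head())
print(observations.exp_id.unique())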
limits = {'log_ito.p_1': (-7, 3),
'ito.p_2': (1e-7, 0.4),
'log_ito.p_3': (-7, 3),
'ito.p_4': (1e-7, 0.4),
'log_ito.p_5': (-7, 3),
'ito.p_6': (1e-7, 0.4),
'log_ito.p_7': (-7, 3),
'ito.p_8': (1e-7, 0.4)}
prior = Distribution(**{key: RV("uniform", a, b - a)
for key, (a,b) in limits.items()})
# Test this works correctly with set-up functions
assert len(observations) == len(summary_statistics(model(prior.rvs())))
Explanation: Set up prior ranges for each parameter in the model.
See the modelfile for further information on specific parameters. Prepending `log_` has the effect of setting the parameter in log space.
End of explanation
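As an optional check (a sketch, not from the original notebook), a few draws from the prior can be verified against the stated limits; RV("uniform", a, b - a) gives a uniform distribution on [a, b].
# Draw a few parameter sets and confirm they respect the limits
for _ in range(3):
    sample = prior.rvs()
    for key, (a, b) in limits.items():
        assert a <= sample[key] <= b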
db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "standardised_ito.db"))
logging.basicConfig()
abc_logger = logging.getLogger('ABC')
abc_logger.setLevel(logging.DEBUG)
eps_logger = logging.getLogger('Epsilon')
eps_logger.setLevel(logging.DEBUG)
pop_size = theoretical_population_size(2, len(limits))
print("Theoretical minimum population size is {} particles".format(pop_size))
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=IonChannelDistance(
exp_id=list(observations.exp_id),
variance=list(observations.variance),
delta=0.05),
population_size=ConstantPopulationSize(1000),
summary_statistics=summary_statistics,
transitions=EfficientMultivariateNormalTransition(),
eps=MedianEpsilon(initial_epsilon=100),
sampler=MulticoreEvalParallelSampler(n_procs=16),
acceptor=IonChannelAcceptor())
obs = observations.to_dict()['y']
obs = {str(k): v for k, v in obs.items()}
abc_id = abc.new(db_path, obs)
history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01)
Explanation: Run ABC calibration
End of explanation
history = History('sqlite:///results/standardised/ito/standardised_ito.db')
df, w = history.get_distribution(m=0)
df.describe()
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
g = plot_sim_results(modelfile,
wang_act,
wang_inact,
courtemanche_kin,
courtemanche_deact,
courtemanche_rec,
df=df, w=w)
plt.tight_layout()
import pandas as pd
N = 100
sta_par_samples = df.sample(n=N, weights=w, replace=True)
sta_par_samples = sta_par_samples.set_index([pd.Index(range(N))])
sta_par_samples = sta_par_samples.to_dict(orient='records')
sns.set_context('poster')
mpl.rcParams['font.size'] = 14
mpl.rcParams['legend.fontsize'] = 14
f, ax = plot_variables(V, sta_par_map,
'models/standardised_ito.mmt',
[sta_par_samples],
figshape=(2,2))
plt.tight_layout()
m,_,_ = myokit.load(modelfile)
sns.set_context('paper')
g = plot_kde_matrix_custom(df, w, limits=limits)
plt.tight_layout()
Explanation: Analysis of results
End of explanation |
6,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Machine Learning- Majority voting
Contest entry by Priyanka Raghavan and Steve Hall
This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a support vector machine to classify facies types. We will use simple logistics regression to classify wells scikit-learn.
First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data.
Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets.
We will then be ready to build the classifier.
Finally, once we have a built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data.
Exploring the dataset
First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.
Step1: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are
Step2: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later. For that let us look at distribution of facies across wells
Step3: Based on graphs above NEWBY has a good distribution of wells and is taken out as blind well to be tested. Also since training data has null, remove them from data.
Step4: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial.
Step5: Placing the log plotting code in a function will make it easy to plot the logs from multiples wells, and can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters.
We then show log plots for wells SHRIMPLIN.
Step6: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
Step7: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
Step8: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie
Step9: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 20% of the data for the test set.
Step10: Training the classifier using Majority voting
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a Majority voting.
We trained classifier on four models KNeighbours, Random forest, logistic regression and Gradient boosting.
Step11: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
Step12: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 23 were correctly indentified as SS, 21 were classified as CSiS and 2 were classified as FSiS.
The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
Step13: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
Step14: Using Voting classifier Now
The voting classifier is now used to vote and classify models
Step15: Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind
Step16: The label vector is just the Facies column
Step17: We can form the feature matrix by dropping some of the columns and making a new dataframe
Step18: Now we can transform this with the scaler we made before
Step19: Now it's a simple matter of making a prediction and storing it back in the dataframe
Step20: Let's see how we did with the confusion matrix
Step21: We managed 0.46 using the test data, but it was from the same wells as the training data. T
Step22: ...but does remarkably well on the adjacent facies predictions.
Step23: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called test_data.
Step24: The data needs to be scaled using the same constants we used for the training data.
Step25: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the test_data dataframe.
Step26: We can use the well log plot to view the classification results along with the well logs.
Step27: Finally we can write out a csv file with the well data along with the facies classification results. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data
Explanation: Facies classification using Machine Learning- Majority voting
Contest entry by Priyanka Raghavan and Steve Hall
This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types, building a majority-voting ensemble with scikit-learn.
First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data.
Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets.
We will then be ready to build the classifier.
Finally, once we have a built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data.
Exploring the dataset
First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.
End of explanation
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
End of explanation
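For later reference, the adjacency table above is usually encoded as a list of zero-based neighbour indices, one entry per facies 1-9; a sketch of that encoding (assuming the usual SEG tutorial convention) is:
# Adjacent facies as zero-based indices, one row per facies 1-9
adjacent_facies = [[1], [0, 2], [1], [4], [3, 5], [4, 6], [5, 7], [5, 6, 8], [6, 7]]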
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
#training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
faciesVals = training_data['Facies'].values
well = training_data['Well Name'].values
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.histogram(faciesVals[well == w], bins=np.arange(len(facies_labels)+1)+.5)
plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist[0])))
ax.set_xticklabels(facies_labels)
ax.set_title(w)
Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later. For that, let us look at the distribution of facies across wells.
End of explanation
PE_mask = training_data['PE'].notnull().values
training_data = training_data[PE_mask]
blind = training_data[training_data['Well Name'] == 'NEWBY']
training_data = training_data[training_data['Well Name'] != 'NEWBY']
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
Explanation: Based on the graphs above, NEWBY has a good distribution of facies and is held out as the blind well to be tested. Also, since the training data contain null values, we remove those rows from the data.
End of explanation
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on those described in Alessandro Amato del Monte's excellent tutorial.
End of explanation
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
Explanation: Placing the log plotting code in a function will make it easy to plot the logs from multiple wells, and it can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters.
We then show log plots for wells SHRIMPLIN.
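To view more than one well, a small illustrative loop (not part of the original notebook) works as well:
# Plot the log display for every well in the training data
for w in training_data['Well Name'].unique():
    make_facies_log_plot(training_data[training_data['Well Name'] == w], facies_colors)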
End of explanation
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Training Data by Facies')
facies_counts
Explanation: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
End of explanation
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
Explanation: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
End of explanation
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (i.e. Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScaler class can be fit to the training set, and later used to standardize any subsequent data.
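As a quick illustration of what the fitted scaler stores (a toy array, not the real log values):
from sklearn import preprocessing
import numpy as np
toy = np.array([[1., 10.], [2., 20.], [3., 30.]])
toy_scaler = preprocessing.StandardScaler().fit(toy)
print(toy_scaler.mean_)                    # per-column means: [ 2. 20.]
print(toy_scaler.transform([[2., 20.]]))   # a row equal to the means maps to [[ 0. 0.]]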
End of explanation
from sklearn.cross_validation import train_test_split  # in scikit-learn 0.18+ this import moved to sklearn.model_selection
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.2, random_state=42)
Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 20% of the data for the test set.
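Because some facies are rare, one optional variation (not used in this notebook) is a stratified split, which keeps the facies proportions roughly equal in both sets; recent versions of scikit-learn support this via the stratify argument:
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    scaled_features, correct_facies_labels, test_size=0.2, random_state=42,
    stratify=correct_facies_labels)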
End of explanation
from sklearn import neighbors
clf = neighbors.KNeighborsClassifier(n_neighbors=10,weights='distance',algorithm='kd_tree')
clf.fit(X_train,y_train)
predicted_labels = clf.predict(X_test)
Explanation: Training the classifiers for majority voting
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as majority voting.
We train four individual models -- k-nearest neighbors, random forest, logistic regression and gradient boosting -- and later combine them with a voting classifier.
End of explanation
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
We need some metrics to evaluate how well our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.
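As a tiny illustration of the layout (six made-up samples from three classes, not the real results):
from sklearn.metrics import confusion_matrix
toy_true = [1, 1, 2, 2, 3, 3]
toy_pred = [1, 2, 2, 2, 3, 1]
print(confusion_matrix(toy_true, toy_pred))
# [[1 1 0]   row 1: one class-1 sample correct, one predicted as class 2
#  [0 2 0]   row 2: both class-2 samples correct
#  [1 0 1]]  row 3: one class-3 sample correct, one predicted as class 1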
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
Explanation: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 23 were correctly identified as SS, 21 were classified as CSiS and 2 were classified as FSiS.
The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
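As a sanity check of that definition on a made-up 2x2 matrix:
import numpy as np
toy_conf = np.array([[8, 2],
                     [1, 9]])
print(accuracy(toy_conf))   # (8 + 9) / 20 = 0.85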
End of explanation
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
#Now do random forest
from sklearn.ensemble import RandomForestClassifier
RFC = RandomForestClassifier(n_estimators=150,
min_samples_leaf= 50,class_weight="balanced",oob_score=True,random_state=50
)
RFC.fit(X_train,y_train)
rfpredicted_labels = RFC.predict(X_test)
RFconf = confusion_matrix(y_test, rfpredicted_labels)
display_cm(RFconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(RFconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(RFconf, adjacent_facies))
#Now do Gradient Boosting
from sklearn.ensemble import GradientBoostingClassifier
gbModel = GradientBoostingClassifier(loss='deviance', n_estimators=100, learning_rate=0.1, max_depth=3, random_state=None, max_leaf_nodes=None, verbose=1)
gbModel.fit(X_train,y_train)
gbpredicted_labels = gbModel.predict(X_test)
gbconf = confusion_matrix(y_test, gbpredicted_labels)
display_cm(gbconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(gbconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(gbconf, adjacent_facies))
from sklearn import linear_model
lgr = linear_model.LogisticRegression(class_weight='balanced',multi_class='ovr',solver='sag',max_iter=1000,random_state=40,C=1e5)
lgr.fit(X_train,y_train)
lgrpredicted_labels = lgr.predict(X_test)
lgrconf = confusion_matrix(y_test, lgrpredicted_labels)
display_cm(lgrconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(lgrconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(lgrconf, adjacent_facies))
Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
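To make the convention concrete (reading the array defined in the code above):
print(adjacent_facies[1])   # [0, 2]: for CSiS, predictions of SS or FSiS still count as adjacent
print(adjacent_facies[6])   # [5, 7]: for D (dolomite), WS and PS count as adjacent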
End of explanation
from sklearn.ensemble import VotingClassifier
vtclf = VotingClassifier(estimators=[
('KNN', clf), ('RFC', RFC), ('GBM', gbModel),('LR',lgr)],
voting='soft')
vtclf.fit(X_train,y_train)
vtclfpredicted_labels = vtclf.predict(X_test)
vtclfconf = confusion_matrix(y_test, vtclfpredicted_labels)
display_cm(vtclfconf, facies_labels, hide_zeros=True)
print('Facies classification accuracy = %f' % accuracy(vtclfconf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(vtclfconf, adjacent_facies))
Explanation: Using the voting classifier
The four classifiers trained above (KNN, random forest, gradient boosting and logistic regression) are now combined in a soft voting classifier, which makes the final facies prediction from their pooled class probabilities.
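Roughly speaking, voting='soft' averages the class probabilities reported by the individual models and picks the class with the highest average. A minimal sketch with made-up probabilities for one sample and three classes:
import numpy as np
probas = np.array([[0.6, 0.3, 0.1],   # KNN
                   [0.5, 0.4, 0.1],   # random forest
                   [0.2, 0.7, 0.1],   # gradient boosting
                   [0.4, 0.5, 0.1]])  # logistic regression
avg = probas.mean(axis=0)             # [0.425, 0.475, 0.1]
print(np.argmax(avg))                 # class index 1 wins the vote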
End of explanation
blind
Explanation: Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called blind:
End of explanation
y_blind = blind['Facies'].values
Explanation: The label vector is just the Facies column:
End of explanation
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
Explanation: We can form the feature matrix by dropping some of the columns and making a new dataframe:
End of explanation
X_blind = scaler.transform(well_features)
Explanation: Now we can transform this with the scaler we made before:
End of explanation
y_pred = vtclf.predict(X_blind)
blind['Prediction'] = y_pred
Explanation: Now it's a simple matter of making a prediction and storing it back in the dataframe:
End of explanation
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
Explanation: Let's see how we did with the confusion matrix:
End of explanation
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
Explanation: We managed 0.46 using the test data, but it was from the same wells as the training data. The blind well gives a more realistic picture of how the classifier performs on a well it has never seen.
End of explanation
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
Explanation: The exact facies scores tell only part of the story, but the model does remarkably well on the adjacent facies predictions.
End of explanation
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
Explanation: Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called well_data.
End of explanation
X_unknown = scaler.transform(well_features)
Explanation: The data needs to be scaled using the same constants we used for the training data.
End of explanation
#predict facies of unclassified data
y_unknown = vtclf.predict(X_unknown)
# vtclf.score(X_unknown) is omitted here: score() needs the true labels, and this validation data has no facies labels
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.
End of explanation
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
Explanation: We can use the well log plot to view the classification results along with the well logs.
End of explanation
well_data.to_csv('well_data_with_facies.csv')
Explanation: Finally we can write out a csv file with the well data along with the facies classification results.
End of explanation |
6,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graded = 7/7
HOMEWORK 06
You'll be using the Dark Sky Forecast API from Forecast.io, available at https
Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4) What's the difference between the high and low temperatures for today?
Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
Tip | Python Code:
import requests
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/17.3850,78.4867"
response=requests.get(url)
data=response.json()
data.keys()
data['currently'].keys()
Explanation: Graded = 7/7
HOMEWORK 06
You'll be using the Dark Sky Forecast API from Forecast.io, available at https://developer.forecast.io. It's a pretty simple API, but be sure to read the documentation!
1) Make a request from the Forecast.io API for where you were born (or lived, or want to visit!).
Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world!
Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
End of explanation
print("The current wind speed is",data['currently']['windSpeed'],"miles per hour.")
apparentTemperature=data['currently']['apparentTemperature']
temperature=data['currently']['temperature']
if apparentTemperature-temperature > 0:
print("It feels", "%.2f" %(apparentTemperature-temperature),"degrees warmer.")
else:
print("It feels", "%.2f" %(temperature-apparentTemperature),"degrees cooler.")
Explanation: 2) What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
data['daily'].keys()
mooncover=data['daily']['data'][0]['moonPhase']
if mooncover == 0:
print("Today is a new moon day. The mooncover is",mooncover)
if mooncover > 0 and mooncover <0.25:
print("The moon is in waxing crescent phase. The mooncover is",mooncover)
if mooncover == 0.25:
print("Today is a first quarter moon. The mooncover is",mooncover)
if mooncover > 0.25 and mooncover <0.5:
print("The moon is in waxing gibbous phase. The mooncover is",mooncover)
if mooncover == 0.5:
print("Today is a full moon day. The mooncover is",mooncover)
if mooncover > 0.5 and mooncover<0.75:
print("The moon is in waning gibbous phase. The mooncover is",mooncover)
if mooncover == 0.75:
print("Today is a last quarter moon. The mooncover is",mooncover)
if mooncover >0.75:
print("The moon is in waning crescent phase. The mooncover is",mooncover)
Explanation: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
low=data['daily']['data'][0]['temperatureMin']
print("The low temperature is",low)
high=data['daily']['data'][0]['temperatureMax']
print("The high temperature is",high)
print("There is a difference of",high-low,"degrees between the high and low temperatures today.")
Explanation: 4) What's the difference between the high and low temperatures for today?
End of explanation
x=0
hotlimit=86
warmlimit=68
for date in data['daily']['data']:
if data['daily']['data'][x]['temperatureMax'] > hotlimit:
print("The high for day",x,"of the mext week is",data['daily']['data'][x]['temperatureMax'],"Fahrenheit")
print("It's a hot day. ")
if data['daily']['data'][x]['temperatureMax'] < hotlimit and data['daily']['data'][x]['temperatureMax'] > warmlimit :
print("The high for day",x,"of the mext week is",data['daily']['data'][x]['temperatureMax'],"Fahrenheit")
print("It's a warm day. ")
if data['daily']['data'][x]['temperatureMax'] < warmlimit:
print("The high for day",x,"of the mext week is",data['daily']['data'][x]['temperatureMax'],"Fahrenheit")
print("It's a cold day. ")
x=x+1
Explanation: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
import requests
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/25.7617,-80.1918"
miamiresponse=requests.get(url)
miamidata=miamiresponse.json()
miamidata.keys()
noofhoursinaday=0
miamidata['hourly']['data']
for count in miamidata['hourly']['data']:
if miamidata['hourly']['data'][noofhoursinaday]['cloudCover'] > 0.5:
print("The temperature for hour",noofhoursinaday+1,"is",miamidata['hourly']['data'][noofhoursinaday]['temperature'],"degrees F and cloudy.")
else:
print("The temperature for hour",noofhoursinaday+1,"is",miamidata['hourly']['data'][noofhoursinaday]['temperature'],"degrees F.")
noofhoursinaday=noofhoursinaday+1
if noofhoursinaday>23:
break
Explanation: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
End of explanation
import requests
timestamp='346550400'
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/40.7829,-73.9654,"+timestamp
cpresponse=requests.get(url)
cpdata=cpresponse.json()
print("The temperature at midnight of Christmas in 1980 was",cpdata['currently']['temperature'],"degrees F")
import requests
timestamp='662083200'
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/40.7829,-73.9654,"+timestamp
cpresponse=requests.get(url)
cpdata=cpresponse.json()
print("The temperature at midnight of Christmas in 1990 was",cpdata['currently']['temperature'],"degrees F")
import requests
timestamp='977702400'
url="https://api.forecast.io/forecast/64f4867f7d4c86182f3d1c6ed881dbfc/40.7829,-73.9654,"+timestamp
cpresponse=requests.get(url)
cpdata=cpresponse.json()
print("The temperature at midnight of Christmas in 2000 was",cpdata['currently']['temperature'],"degrees F")
Explanation: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
Tip: You'll need to use UNIX time, which is the number of seconds since January 1, 1970. Google can help you convert a normal date!
Tip: You'll want to use Forecast.io's "time machine" API at https://developer.forecast.io/docs/v2
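If you prefer to compute the UNIX timestamps instead of looking them up, a short sketch using only the standard library (assuming midnight UTC; adjust if you want local New York time):
import calendar
from datetime import datetime
christmas_1980 = calendar.timegm(datetime(1980, 12, 25).timetuple())
print(christmas_1980)   # 346550400, the value hard-coded above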
End of explanation |
6,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting uncertainties on measured bandpowers
This notebook shows you how to predict the errors on auto-correlation and cross-correlation bandpowers.
You need to install this Python package (orphics), which should be achievable with a simple pip command as shown in README.md.
Apart from that, the only non-trivial dependency (other than things like scipy, matplotlib, etc.) is the Python wrapper for CAMB.
On most systems with gcc 4.9+, you can install pycamb with
Step1: Let's do a forecast for the errors on the measured bandpowers. Errors are determined by
Step2: Is sample cross-covariance important?
The above error bars had contributions from sample variance and measurement noise in the galaxy survey. Often, a CMB survey has a set of mocks (which captures $C_L^{kk}$ and $N_L^{kk}$ contributions) and a galaxy survey has an independent set of mocks (which captures $C_L^{ss}$ and $N_L^{ss}$). Since they are independently produced, cross-correlations between the mocks will capture everything except the $C_L^{ks}$ term. Is this term important?
We can check this by calculating theory errors with and without the sample covariance term. To do this, we just set clkg to zero. Note that this will predict zero S/N, but that's OK, we just need to check the effect on the error bars.
Step3: What impact does this have on error bars? | Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import print_function
from orphics import cosmology,io
import numpy as np
import matplotlib.pyplot as plt
# First initialize a cosmology object with default params
lc = cosmology.LimberCosmology(lmax=3000,pickling=True)
# Let's define a mock dndz
def dndz(z):
z0 = 1./3.
ans = (z**2.)* np.exp(-1.0*z/z0)/ (2.*z0**3.)
return ans
z_edges = np.arange(0.,3.0,0.1)
zcents = (z_edges[1:]+z_edges[:-1])/2.
# Let's add this dndz to the cosmology object. By default, LimberCosmology doesn't allow you to reuse tag names, but here we force it to since this is a Python notebook!
lc.addNz(tag="g",zedges=z_edges,nz=dndz(zcents),ignore_exists=True)
ellrange = np.arange(0,2000,1)
# And generate the bandpower predictions
lc.generateCls(ellrange)
clkk = lc.getCl("cmb","cmb")
clkg = lc.getCl("cmb","g")
clgg = lc.getCl("g","g")
pl = io.Plotter(yscale='log',ylabel='$C_L$',xlabel='$L$')
pl.add(ellrange,clkk)
pl.add(ellrange,clgg)
pl.add(ellrange,clkg)
pl.done()
Explanation: Predicting uncertainties on measured bandpowers
This notebook shows you how to predict the errors on auto-correlation and cross-correlation bandpowers.
You need to install this Python package (orphics), which should be achievable with a simple pip command as shown in README.md.
Apart from that, the only non-trivial dependency (other than things like scipy, matplotlib, etc.) is the Python wrapper for CAMB.
On most systems with gcc 4.9+, you can install pycamb with:
pip install camb --user
See https://camb.readthedocs.io/en/latest/ for details.
End of explanation
# Initialize a simple lens forecaster
Nlkk = clkk*0. # Assume no noise in CMB lensing map
lf = cosmology.LensForecast()
lf.loadKK(ellrange,clkk,ellrange,Nlkk)
lf.loadKS(ellrange,clkg)
lf.loadSS(ellrange,clgg,ngal=20.)
ell_edges = np.arange(100,2000,50) # define some broad L bins
ells = (ell_edges[:-1]+ell_edges[1:])/2.
fsky = 40./41250. #0.05
# Get S/N and errors
sn,errs = lf.sn(ell_edges,fsky,"ks")
print("Expected S/N :",sn)
pl = io.Plotter(xlabel='$L$',ylabel='$\sigma(C_L)$')
pl.add_err(ells,ells*0.,yerr=errs)
pl.hline()
pl.done()
Explanation: Let's do a forecast for the errors on the measured bandpowers. Errors are determined by:
Sample variance (which is just given by the power-spectra of the fields) -- $C_L^{kk}$, $C_L^{ss}$, $C_L^{ks}$
Measurement noise $N_L^{kk}$,$N_L^{ss}$
Fraction of sky measured $f_{\mathrm{sky}}$
Given these, we can predict the S/N of the amplitude of the measurement of any of $C_L^{kk}$, $C_L^{ss}$, $C_L^{ks}$ and the error bars on binned bandpowers.
We will assume for now that the galaxy survey has 20 galaxies per arcminute square, and the CMB lensing survey has no noise. This sets a lower bound on the measurement noise contribution when an HSC-like survey is cross-correlated with a CMB lensing map.
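For reference, the error bars plotted here follow the usual Knox-type scaling (written schematically as a reminder, with $\Delta L$ the bin width):
$\sigma\left(C_L^{ks}\right) \approx \sqrt{\frac{\left(C_L^{ks}\right)^2 + \left(C_L^{kk}+N_L^{kk}\right)\left(C_L^{ss}+N_L^{ss}\right)}{(2L+1)\,\Delta L\, f_{\mathrm{sky}}}}$
The first term in the numerator is the sample cross-covariance discussed in the next section.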
End of explanation
lf2 = cosmology.LensForecast()
lf2.loadKK(ellrange,clkk,ellrange,clkk*0.)
lf2.loadKS(ellrange,clkg*0.)
lf2.loadSS(ellrange,clgg,ngal=20.)
sn2,errs2 = lf2.sn(ell_edges,fsky,"ks")
print("Expected S/N :",sn2)
pl = io.Plotter(xlabel='$L$',ylabel='$\sigma(C_L)$')
pl.add_err(ells,ells*0.,yerr=errs)
pl.add_err(ells+20,ells*0.,yerr=errs2)
pl.hline()
pl.done()
Explanation: Is sample cross-covariance important?
The above error bars had contributions from sample variance and measurement noise in the galaxy survey. Often, a CMB survey has a set of mocks (which captures $C_L^{kk}$ and $N_L^{kk}$ contributions) and a galaxy survey has an independent set of mocks (which captures $C_L^{ss}$ and $N_L^{ss}$). Since they are independently produced, cross-correlations between the mocks will capture everything except the $C_L^{ks}$ term. Is this term important?
We can check this by calculating theory errors with and without the sample covariance term. To do this, we just set clkg to zero. Note that this will predict zero S/N, but that's OK, we just need to check the effect on the error bars.
End of explanation
pl = io.Plotter(xlabel='$L$',ylabel='$\\frac{\\Delta \\sigma(C_L)}{\\sigma(C_L)}$')
pl.add(ells,(errs2-errs)/errs)
pl.hline()
pl.done()
Explanation: What impact does this have on error bars?
End of explanation |
6,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You'll be using the Dark Sky Forecast API from Forecast.io, available at https
Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4) What's the difference between the high and low temperatures for today?
Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
Tip | Python Code:
apikey = '34b41fe7b9db6c1bd5f8ea3492bca332'
coordinates = {'San Antonio': '29.4241,-98.4936', 'Miami': '25.7617,-80.1918', 'Central Park': '40.7829,-73.9654'}
import requests
url = 'https://api.forecast.io/forecast/' + apikey + '/' + coordinates['San Antonio']
response = requests.get(url)
data = response.json()
# #Is it in my time zone?
# #temp. Answer: dict
# print(type(data))
# #temp. Answer: ['offset', 'latitude', 'hourly', 'flags', 'minutely', 'longitude', 'timezone', 'daily', 'currently']
# print(data.keys())
# #temp. Answer: dict
# print(type(data['currently']))
# #temp. Answer: ['windSpeed', 'time', 'dewPoint', 'icon', 'temperature', 'apparentTemperature', 'precipProbability',
#'visibility', 'cloudCover', 'nearestStormDistance', 'pressure', 'windBearing', 'ozone', 'humidity', 'precipIntensity',
#'summary', 'nearestStormBearing']
# print(data['currently'].keys())
# #temp. It's in my time zone!
# print(data['currently']['time'])
#Oh, this would have been easier:
#temp. Answer: America/Chicago
print(data['timezone'])
Explanation: You'll be using the Dark Sky Forecast API from Forecast.io, available at https://developer.forecast.io. It's a pretty simple API, but be sure to read the documentation!
1) Make a request from the Forecast.io API for where you were born (or lived, or want to visit!).
Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world!
Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
End of explanation
print('The current wind speed is', data['currently']['windSpeed'], 'miles per hour.')
print('It feels', round(data['currently']['apparentTemperature'] - data['currently']['temperature'], 2), 'degrees Fahrenheit warmer than it actually is.')
Explanation: 2) What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
# #temp. Answer: dict
# print(type(data['daily']))
# #temp. Answer: ['summary', 'data', 'icon']
# print(data['daily'].keys())
# #temp. Answer: list
# print(type(data['daily']['data']))
# #temp. It's a list of dictionaries
# #this time means Wed, 08 Jun 2016 05:00:00 GMT, which is currently today
# print(data['daily']['data'][0])
# #this time means Thu, 09 Jun 2016 05:00:00 GMT
# print(data['daily']['data'][1])
# #temp. Answer: 8
# print(len(data['daily']['data']))
# #temp. Answer: ['windSpeed', 'time', 'sunsetTime', 'precipIntensityMaxTime', 'apparentTemperatureMax', 'windBearing',
# #'temperatureMinTime', 'precipIntensityMax', 'precipProbability', 'sunriseTime', 'temperatureMin',
# #'apparentTemperatureMaxTime', 'precipIntensity', 'apparentTemperatureMinTime', 'temperatureMax', 'dewPoint',
# #'temperatureMaxTime', 'icon', 'moonPhase', 'precipType', 'visibility', 'cloudCover', 'pressure',
# #'apparentTemperatureMin', 'ozone', 'humidity', 'summary']
# print(data['daily']['data'][0].keys())
today_moon = data['daily']['data'][0]['moonPhase']
print(100 * (1 - abs(1 - (today_moon * 2))), 'percent of the moon is visible today.')
Explanation: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
print('The difference between today\'s high and low temperatures is', round(data['daily']['data'][0]['temperatureMax'] - data['daily']['data'][0]['temperatureMin'], 2), 'degrees Fahrenheit.')
Explanation: 4) What's the difference between the high and low temperatures for today?
End of explanation
daily_forecast = data['daily']['data']
print('Starting with today\'s, the forecasts for the next week are for highs of:')
for day in daily_forecast:
if 85 <= day['temperatureMax']:
warmth = 'hot'
elif 70 <= day['temperatureMax'] < 85:
warmth = 'warm'
else:
warmth = 'cold'
print(day['temperatureMax'], 'degrees Fahrenheit, a pretty', warmth, 'day.')
Explanation: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
fl_url = 'https://api.forecast.io/forecast/' + apikey + '/' + coordinates['Miami']
fl_response = requests.get(url)
fl_data = fl_response.json()
# #temp. Answer: dict
# print(type(fl_data['hourly']))
# #temp. Answer: ['summary', 'data', 'icon']
# print(fl_data['hourly'].keys())
# #temp. Answer: list
# print(type(fl_data['hourly']['data']))
# #temp. Answer: 49
# print(len(fl_data['hourly']['data']))
# #temp. It's a list of dictionaries
# #the top of this hour
# print(fl_data['hourly']['data'][0])
# #the top of next hour
# print(fl_data['hourly']['data'][1])
# #temp. Answer: ['precipType', 'time', 'apparentTemperature', 'windSpeed', 'icon', 'summary', 'precipProbability',
# #'visibility', 'cloudCover', 'pressure', 'windBearing', 'ozone', 'humidity', 'precipIntensity', 'temperature',
# #'dewPoint']
# print(fl_data['hourly']['data'][0].keys())
# # how many hours are left in the day in EDT: (24 - ((time % 86400)/3600 - 4))
# times = [1465423200, 1465426800]
# for time in times:
# print (24 - ((time % 86400)/3600 - 4))
hourly_data = fl_data['hourly']['data']
hours_left = range(int(24 - ((hourly_data[0]['time'] % 86400)/3600 - 4)))
print('Starting with this hour, the hourly forecasts for the rest of the day are for:')
for hour in hours_left:
if hourly_data[hour]['cloudCover'] > .5:
print(hourly_data[hour]['temperature'], 'degrees Fahrenheit and cloudy')
else:
print(hourly_data[hour]['temperature'], 'degrees Fahrenheit')
Explanation: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
End of explanation
decades = range(3)
for decade in decades:
cp_url = 'https://api.forecast.io/forecast/' + apikey + '/' + coordinates['Central Park'] + ',' + str(10 * decade + 1980) + '-12-25T12:00:00'
cp_response = requests.get(cp_url)
cp_data = cp_response.json()
print('On Christmas Day in', str(1980 + decade * 10) + ', the high in Central Park was', cp_data['daily']['data'][0]['temperatureMax'], 'degrees Fahrenheit.')
Explanation: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
Tip: You'll need to use UNIX time, which is the number of seconds since January 1, 1970. Google can help you convert a normal date!
Tip: You'll want to use Forecast.io's "time machine" API at https://developer.forecast.io/docs/v2
End of explanation |
6,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import pixiedust
Start by importing pixiedust which if all bootstrap and install steps were run correctly.
You should see below for opening the pixiedust database successfully with no errors.
Depending on the version of pixiedust that gets installed, it may ask you to update.
If so, run this first cell.
Step1: Creating the SQLContext and inspecting pyspark Context
Pixiedust imports pyspark and the SparkContext + SparkSession should be already available through the "sc" and "spark" variables respectively.
Step2: Download GDELT Data
Download the data necessary to perform Kmeans
Step3: Create datastores and ingest gdelt data.
The ingest process may take a few minutes. If the '*' is present left of the cell the command is still running. Output will not appear below under the process is finished.
Step4: Run KMeans
Running the KMeans process may take a few minutes you should be able to track the progress of the task via the console or Spark History Server once the job begins.
Step5: Load resulting Centroids into DataFrame
Step6: Parse DataFrame data into lat/lon columns and display centroids on map
Using pixiedust's built in map visualization we can display data on a map assuming it has the following properties.
- Keys
Step7: Export KMeans Hulls to DataFrame
If you have some more complex data to visualize pixiedust may not be the best option.
The Kmeans hull generation outputs polygons that would be difficult for pixiedust to display without
creating a special plugin.
Instead, we can use another map renderer to visualize our data. For the Kmeans hulls we will use folium to visualize the data. Folium allows us to easily add wms layers to our notebook, and we can combine that with GeoWaves geoserver functionality to render the hulls and centroids.
Step8: Visualize results using geoserver and wms
folium provides an easy way to visualize leaflet maps in jupyter notebooks.
When the data is too complicated or big to work within the simple framework pixiedust provides for map display we can instead turn to geoserver and wms to render our layers. First we configure geoserver then setup wms layers for folium to display the kmeans results on the map. | Python Code:
#!pip install --user --upgrade pixiedust
import pixiedust
import geowave_pyspark
pixiedust.enableJobMonitor()
Explanation: Import pixiedust
Start by importing pixiedust, which should load without errors if all bootstrap and install steps were run correctly.
You should see output below confirming that the pixiedust database was opened successfully, with no errors.
Depending on the version of pixiedust that gets installed, it may ask you to update.
If so, run this first cell.
End of explanation
# Print Spark info and create sql_context
print('Spark Version: {0}'.format(sc.version))
print('Python Version: {0}'.format(sc.pythonVer))
print('Application Name: {0}'.format(sc.appName))
print('Application ID: {0}'.format(sc.applicationId))
print('Spark Master: {0}'.format( sc.master))
Explanation: Creating the SQLContext and inspecting pyspark Context
Pixiedust imports pyspark and the SparkContext + SparkSession should be already available through the "sc" and "spark" variables respectively.
End of explanation
%%bash
cd /mnt/tmp
wget s3.amazonaws.com/geowave/latest/scripts/emr/quickstart/geowave-env.sh
source /mnt/tmp/geowave-env.sh
mkdir gdelt
cd gdelt
wget http://data.gdeltproject.org/events/md5sums
for file in `cat md5sums | cut -d' ' -f3 | grep "^${TIME_REGEX}"` ; \
do wget http://data.gdeltproject.org/events/$file ; done
md5sum -c md5sums 2>&1 | grep "^${TIME_REGEX}"
Explanation: Download GDELT Data
Download the data necessary to perform Kmeans
End of explanation
%%bash
# We have to source here again because bash runs in a separate sub process each cell.
source /mnt/tmp/geowave-env.sh
# clear old potential runs
geowave store clear gdelt
geowave store rm gdelt
geowave store clear kmeans_gdelt
geowave store rm kmeans_gdelt
# configure geowave connection params for hbase stores "gdelt" and "kmeans"
geowave store add gdelt --gwNamespace geowave.gdelt -t hbase --zookeeper $HOSTNAME:2181
geowave store add kmeans_gdelt --gwNamespace geowave.kmeans -t hbase --zookeeper $HOSTNAME:2181
# configure a spatial index
geowave index add gelt gdeltspatial -t spatial --partitionStrategy round_robin --numPartitions $NUM_PARTITIONS
# run the ingest for a 10x10 deg bounding box over Europe
geowave ingest localtogw /mnt/tmp/gdelt gdelt gdeltspatial -f gdelt \
--gdelt.cql "BBOX(geometry, 0, 50, 10, 60)"
Explanation: Create datastores and ingest gdelt data.
The ingest process may take a few minutes. If the '*' is present left of the cell the command is still running. Output will not appear below under the process is finished.
End of explanation
%%bash
# clear out potential old runs
geowave store clear kmeans_gdelt
# configure a spatial index
geowave index add kmeans_gdelt gdeltspatial -t spatial --partitionStrategy round_robin --numPartitions $NUM_PARTITIONS
#grab classes from jvm
# Pull classes to desribe core GeoWave classes
hbase_options_class = sc._jvm.org.locationtech.geowave.datastore.hbase.cli.config.HBaseRequiredOptions
query_options_class = sc._jvm.org.locationtech.geowave.core.store.query.QueryOptions
byte_array_class = sc._jvm.org.locationtech.geowave.core.index.ByteArrayId
# Pull core GeoWave Spark classes from jvm
geowave_rdd_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDD
rdd_loader_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDDLoader
rdd_options_class = sc._jvm.org.locationtech.geowave.analytic.spark.RDDOptions
sf_df_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.SimpleFeatureDataFrame
kmeans_runner_class = sc._jvm.org.locationtech.geowave.analytic.spark.kmeans.KMeansRunner
datastore_utils_class = sc._jvm.org.locationtech.geowave.core.store.util.DataStoreUtils
spatial_encoders_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.GeoWaveSpatialEncoders
spatial_encoders_class.registerUDTs()
#Setup input datastore options
input_store = hbase_options_class()
input_store.setZookeeper(os.environ['HOSTNAME'] + ':2181')
input_store.setGeowaveNamespace('geowave.gdelt')
#Setup output datastore options
output_store = hbase_options_class()
output_store.setZookeeper(os.environ['HOSTNAME'] + ':2181')
output_store.setGeowaveNamespace('geowave.kmeans')
#Create a instance of the runner, and datastore options
kmeans_runner = kmeans_runner_class()
input_store_plugin = input_store.createPluginOptions()
output_store_plugin = output_store.createPluginOptions()
#Set the appropriate properties
kmeans_runner.setSparkSession(sc._jsparkSession)
kmeans_runner.setAdapterId('gdeltevent')
kmeans_runner.setInputDataStore(input_store_plugin)
kmeans_runner.setOutputDataStore(output_store_plugin)
kmeans_runner.setCqlFilter("BBOX(geometry, 0, 50, 10, 60)")
kmeans_runner.setCentroidTypeName('mycentroids_gdelt')
kmeans_runner.setHullTypeName('myhulls_gdelt')
kmeans_runner.setGenerateHulls(True)
kmeans_runner.setComputeHullData(True)
#Execute the kmeans runner
kmeans_runner.run()
Explanation: Run KMeans
Running the KMeans process may take a few minutes you should be able to track the progress of the task via the console or Spark History Server once the job begins.
End of explanation
# Create the dataframe and get a rdd for the output of kmeans
adapter_id = byte_array_class('mycentroids_gdelt')
query_adapter = datastore_utils_class.getDataAdapter(output_store_plugin, adapter_id)
query_options = query_options_class(query_adapter)
# Create RDDOptions for loader
rdd_options = rdd_options_class()
rdd_options.setQueryOptions(query_options)
output_rdd = rdd_loader_class.loadRDD(sc._jsc.sc(), output_store_plugin, rdd_options)
# Create a SimpleFeatureDataFrame from the GeoWaveRDD
sf_df = sf_df_class(spark._jsparkSession)
sf_df.init(output_store_plugin, adapter_id)
df = sf_df.getDataFrame(output_rdd)
# Convert Java DataFrame to Python DataFrame
import pyspark.mllib.common as convert
py_df = convert._java2py(sc, df)
py_df.createOrReplaceTempView('mycentroids')
df = spark.sql("select * from mycentroids")
display(df)
Explanation: Load resulting Centroids into DataFrame
End of explanation
# Convert the string point information into lat long columns and create a new dataframe for those.
import pyspark
def parseRow(row):
lat=row.geom.y
lon=row.geom.x
return pyspark.sql.Row(lat=lat,lon=lon,ClusterIndex=row.ClusterIndex)
row_rdd = df.rdd
new_rdd = row_rdd.map(lambda row: parseRow(row))
new_df = new_rdd.toDF()
display(new_df)
Explanation: Parse DataFrame data into lat/lon columns and display centroids on map
Using pixiedust's built in map visualization we can display data on a map assuming it has the following properties.
- Keys: put your latitude and longitude fields here. They must be floating values. These fields must be named latitude, lat or y and longitude, lon or x.
- Values: the field you want to use to thematically color the map. Only one field can be used.
Also you will need a access token from whichever map renderer you choose to use with pixiedust (mapbox, google).
Follow the instructions in the token help on how to create and use the access token.
End of explanation
# Create the dataframe and get a rdd for the output of kmeans
# Grab adapter and setup query options for rdd load
adapter_id = byte_array_class('myhulls_gdelt')
query_adapter = datastore_utils_class.getDataAdapter(output_store_plugin, adapter_id)
query_options = query_options_class(query_adapter)
# Use GeoWaveRDDLoader to load an RDD
rdd_options = rdd_options_class()
rdd_options.setQueryOptions(query_options)
output_rdd_hulls = rdd_loader_class.loadRDD(sc._jsc.sc(), output_store_plugin, rdd_options)
# Create a SimpleFeatureDataFrame from the GeoWaveRDD
sf_df_hulls = sf_df_class(spark._jsparkSession)
sf_df_hulls.init(output_store_plugin, adapter_id)
df_hulls = sf_df_hulls.getDataFrame(output_rdd_hulls)
# Convert Java DataFrame to Python DataFrame
import pyspark.mllib.common as convert
py_df_hulls = convert._java2py(sc, df_hulls)
# Create a sql table view of the hulls data
py_df_hulls.createOrReplaceTempView('myhulls')
# Run SQL Query on Hulls data
df_hulls = spark.sql("select * from myhulls order by Density")
display(df_hulls)
Explanation: Export KMeans Hulls to DataFrame
If you have some more complex data to visualize pixiedust may not be the best option.
The Kmeans hull generation outputs polygons that would be difficult for pixiedust to display without
creating a special plugin.
Instead, we can use another map renderer to visualize our data. For the Kmeans hulls we will use folium to visualize the data. Folium allows us to easily add wms layers to our notebook, and we can combine that with GeoWaves geoserver functionality to render the hulls and centroids.
End of explanation
%%bash
# set up geoserver
geowave config geoserver "$HOSTNAME:8000"
# add the centroids layer
geowave gs layer add kmeans_gdelt -id mycentroids_gdelt
geowave gs style set mycentroids_gdelt --styleName point
# add the hulls layer
geowave gs layer add kmeans_gdelt -id myhulls_gdelt
geowave gs style set myhulls_gdelt --styleName line
import owslib
from owslib.wms import WebMapService
url = "http://" + os.environ['HOSTNAME'] + ":8000/geoserver/geowave/wms"
web_map_services = WebMapService(url)
#print layers available wms
print('\n'.join(web_map_services.contents.keys()))
import folium
#grab wms info for centroids
layer = 'mycentroids_gdelt'
wms = web_map_services.contents[layer]
#build center of map off centroid bbox
lon = (wms.boundingBox[0] + wms.boundingBox[2]) / 2.
lat = (wms.boundingBox[1] + wms.boundingBox[3]) / 2.
center = [lat, lon]
m = folium.Map(location = center,zoom_start=3)
name = wms.title
centroids = folium.raster_layers.WmsTileLayer(
url=url,
name=name,
fmt='image/png',
transparent=True,
layers=layer,
overlay=True,
COLORSCALERANGE='1.2,28',
)
centroids.add_to(m)
layer = 'myhulls_gdelt'
wms = web_map_services.contents[layer]
name = wms.title
hulls = folium.raster_layers.WmsTileLayer(
url=url,
name=name,
fmt='image/png',
transparent=True,
layers=layer,
overlay=True,
COLORSCALERANGE='1.2,28',
)
hulls.add_to(m)
m
Explanation: Visualize results using geoserver and wms
folium provides an easy way to visualize leaflet maps in jupyter notebooks.
When the data is too complicated or big to work within the simple framework pixiedust provides for map display we can instead turn to geoserver and wms to render our layers. First we configure geoserver then setup wms layers for folium to display the kmeans results on the map.
End of explanation |
6,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: SavedModel 形式の使用
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 実行例として、グレース・ホッパーの画像と Keras の次元トレーニング済み画像分類モデルを使用します(使いやすいため)。カスタムモデルも使用できますが、これについては後半で説明します。
Step3: この画像の予測トップは「軍服」です。
Step4: save-path は、TensorFlow Serving が使用する規則に従っており、最後のパスコンポーネント(この場合 1/)はモデルのバージョンを指します。Tensorflow Serving のようなツールで、相対的な鮮度を区別させることができます。
tf.saved_model.load で SavedModel を Python に読み込み直し、ホッパー将官の画像がどのように分類されるかを確認できます。
Step5: インポートされるシグネチャは、必ずディクショナリを返します。シグネチャ名と出力ディクショナリキーをカスタマイズするには、「エクスポート中のシグネチャの指定」を参照してください。
Step6: SavedModel から推論を実行すると、元のモデルと同じ結果が得られます。
Step7: TensorFlow Serving での SavedModel の実行
SavedModels は Python から使用可能(詳細は以下参照)ですが、本番環境では通常、Python コードを使用せずに、推論専用のサービスが使用されます。これは、TensorFlow Serving を使用して SavedModel から簡単にセットアップできます。
エンドツーエンドのtensorflow-servingの例については、 TensorFlow Serving RESTチュートリアルをご覧ください。
ディスク上の SavedModel 形式
SavedModel は、変数の値や語彙など、シリアル化されたシグネチャとそれらを実行するために必要な状態を含むディレクトリです。
Step8: saved_model.pb ファイルは、実際の TensorFlow プログラムまたはモデル、およびテンソル入力を受け入れてテンソル出力を生成する関数を識別する一連の名前付きシグネチャを保存します。
SavedModel には、複数のモデルバリアント(saved_model_cli への --tag_set フラグで識別される複数の v1.MetaGraphDefs)が含まれることがありますが、それは稀なことです。複数のモデルバリアントを作成する API には、tf.Estimator.experimental_export_all_saved_models と TensorFlow 1.x の tf.saved_model.Builder があります。
Step9: variables ディレクトリには、標準のトレーニングチェックポイントが含まれます(「トレーニングチェックポイントガイド」を参照してください)。
Step10: assets ディレクトリには、語彙テーブルを初期化するためのテキストファイルなど、TensorFlow グラフが使用するファイルが含まれます。この例では使用されません。
SavedModel には、SavedModel で何をするかといった消費者向けの情報など、TensorFlow グラフで使用されないファイルに使用する assets.extra ディレクトリがある場合があります。TensorFlow そのものでは、このディレクトリは使用されません。
カスタムモデルの保存
tf.saved_model.save は、tf.Module オブジェクトと、tf.keras.Layer や tf.keras.Model などのサブクラスの保存をサポートしています。
tf.Module の保存と復元の例を見てみましょう。
Step11: tf.Module を保存すると、すべての tf.Variable 属性、tf.function でデコレートされたメソッド、および再帰トラバースで見つかった tf.Module が保存されます(この再帰トラバースについては、「チェックポイントのチュートリアル」を参照してください)。ただし、Python の属性、関数、およびデータは失われます。つまり、tf.function が保存されても、Python コードは保存されません。
Python コードが保存されないのであれば、SavedModel は関数をどのようにして復元するのでしょうか。
簡単に言えば、tf.function は、Python コードをトレースして ConcreteFunction(tf.Graph のコーラブルラッパー)を生成することで機能します。tf.function を保存すると、実際には tf.function の ConcreteFunctions のキャッシュを保存しているのです。
tf.function と ConcreteFunctions の関係に関する詳細は、「tf.function ガイド」をご覧ください。
Step12: カスタムモデルの読み込みと使用
Python に SavedModel を読み込むと、すべての tf.Variable 属性、tf.function でデコレートされたメソッド、および tf.Module は、保存された元の tf.Module と同じオブジェクト構造で復元されます。
Step13: Python コードは保存されないため、新しい入力シグネチャで tf.function で呼び出しても失敗します。
python
imported(tf.constant([3.]))
<pre>ValueError
Step14: 一般的な微調整
Keras の SavedModel は、より高度な微調整の事例に対処できる、プレーンな __call__ よりも詳細な内容を提供します。TensorFlow Hub は、微調整の目的で共有される SavedModel に、該当する場合は次の項目を提供することをお勧めします。
モデルに、フォワードパスがトレーニングと推論で異なるドロップアウトまたはほかのテクニックが使用されている場合(バッチの正規化など)、__call__ メソッドは、オプションのPython 重視の training= 引数を取ります。この引数は、デフォルトで False になりますが、True に設定することができます。
__call__ 属性の隣には、対応する変数リストを伴う .variable と .trainable_variable 属性があります。もともとトレーニング可能であっても、微調整中には凍結されるべき変数は、.trainable_variables から省略されます。
レイヤとサブモデルの属性として重みの正規化を表現する Keras のようなフレームワークのために、.regularization_losses 属性も使用できます。この属性は、値が合計損失に追加することを目的とした引数無しの関数のリストを保有します。
最初の MobileNet の例に戻ると、これらの一部が動作していることを確認できます。
Step15: エクスポート中のシグネチャの指定
TensorFlow Serving や saved_model_cli のようなツールは、SavedModel と対話できます。これらのツールがどの ConcreteFunctions を使用するか判定できるように、サービングシグネチャを指定する必要があります。tf.keras.Model は、サービングシグネチャを自動的に指定しますが、カスタムモジュールに対して明示的に宣言する必要があります。
重要
Step16: サービングシグネチャを宣言するには、signatures kwarg を使用して ConcreteFunction 指定します。単一のシグネチャを指定する場合、シグネチャキーは 'serving_default' となり、定数 tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY として保存されます。
Step17: 複数のシグネチャをエクスポートするには、シグネチャキーのディクショナリを ConcreteFunction に渡します。各シグネチャキーは 1 つの ConcreteFunction に対応します。
Step18: デフォルトでは、出力されたテンソル名は、output_0 というようにかなり一般的な名前です。出力の名前を制御するには、出力名を出力にマッピングするディクショナリを返すように tf.function を変更します。入力の名前は Python 関数の引数名から取られます。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import os
import tempfile
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tmpdir = tempfile.mkdtemp()
physical_devices = tf.config.list_physical_devices('GPU')
for device in physical_devices:
tf.config.experimental.set_memory_growth(device, True)
file = tf.keras.utils.get_file(
"grace_hopper.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg")
img = tf.keras.utils.load_img(file, target_size=[224, 224])
plt.imshow(img)
plt.axis('off')
x = tf.keras.utils.img_to_array(img)
x = tf.keras.applications.mobilenet.preprocess_input(
x[tf.newaxis,...])
Explanation: SavedModel 形式の使用
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/guide/saved_model"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/saved_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/saved_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/saved_model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
SavedModel には、トレーニング済みのパラメータ(tf.Variable)や計算を含む完全な TensorFlow プログラムが含まれます。実行するために元のモデルのビルディングコードを必要としないため、TFLite、TensorFlow.js、TensorFlow Serving、または TensorFlow Hub との共有やデプロイに便利です。
以下の API を使用して、SavedModel 形式でのモデルの保存と読み込みを行えます。
低レベルの tf.saved_model API。このドキュメントでは、この API の使用方法を詳しく説明しています。
保存: tf.saved_model.save(model, path_to_dir)
読み込み: model = tf.saved_model.load(path_to_dir)
高レベルの tf.keras.Model API。Keras の保存とシリアル化ガイドをご覧ください。
トレーニング中の重みの保存/読み込みのみを実行する場合は、チェックポイントガイドをご覧ください。
Keras を使った SavedModel の作成
簡単な導入として、このセクションでは、事前にトレーニング済みの Keras モデルをエクスポートし、それを使って画像分類リクエストを送信します。SavedModels のほかの作成方法については、このガイドの残りの部分で説明します。
End of explanation
labels_path = tf.keras.utils.get_file(
'ImageNetLabels.txt',
'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
pretrained_model = tf.keras.applications.MobileNet()
result_before_save = pretrained_model(x)
decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]
print("Result before saving:\n", decoded)
Explanation: 実行例として、グレース・ホッパーの画像と Keras の次元トレーニング済み画像分類モデルを使用します(使いやすいため)。カスタムモデルも使用できますが、これについては後半で説明します。
End of explanation
mobilenet_save_path = os.path.join(tmpdir, "mobilenet/1/")
tf.saved_model.save(pretrained_model, mobilenet_save_path)
Explanation: この画像の予測トップは「軍服」です。
End of explanation
loaded = tf.saved_model.load(mobilenet_save_path)
print(list(loaded.signatures.keys())) # ["serving_default"]
Explanation: save-path は、TensorFlow Serving が使用する規則に従っており、最後のパスコンポーネント(この場合 1/)はモデルのバージョンを指します。Tensorflow Serving のようなツールで、相対的な鮮度を区別させることができます。
tf.saved_model.load で SavedModel を Python に読み込み直し、ホッパー将官の画像がどのように分類されるかを確認できます。
End of explanation
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
Explanation: インポートされるシグネチャは、必ずディクショナリを返します。シグネチャ名と出力ディクショナリキーをカスタマイズするには、「エクスポート中のシグネチャの指定」を参照してください。
End of explanation
labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]
decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]
print("Result after saving and loading:\n", decoded)
Explanation: SavedModel から推論を実行すると、元のモデルと同じ結果が得られます。
End of explanation
!ls {mobilenet_save_path}
Explanation: TensorFlow Serving での SavedModel の実行
SavedModels は Python から使用可能(詳細は以下参照)ですが、本番環境では通常、Python コードを使用せずに、推論専用のサービスが使用されます。これは、TensorFlow Serving を使用して SavedModel から簡単にセットアップできます。
エンドツーエンドのtensorflow-servingの例については、 TensorFlow Serving RESTチュートリアルをご覧ください。
ディスク上の SavedModel 形式
SavedModel は、変数の値や語彙など、シリアル化されたシグネチャとそれらを実行するために必要な状態を含むディレクトリです。
End of explanation
!saved_model_cli show --dir {mobilenet_save_path} --tag_set serve
Explanation: saved_model.pb ファイルは、実際の TensorFlow プログラムまたはモデル、およびテンソル入力を受け入れてテンソル出力を生成する関数を識別する一連の名前付きシグネチャを保存します。
SavedModel には、複数のモデルバリアント(saved_model_cli への --tag_set フラグで識別される複数の v1.MetaGraphDefs)が含まれることがありますが、それは稀なことです。複数のモデルバリアントを作成する API には、tf.Estimator.experimental_export_all_saved_models と TensorFlow 1.x の tf.saved_model.Builder があります。
End of explanation
!ls {mobilenet_save_path}/variables
Explanation: variables ディレクトリには、標準のトレーニングチェックポイントが含まれます(「トレーニングチェックポイントガイド」を参照してください)。
End of explanation
class CustomModule(tf.Module):
def __init__(self):
super(CustomModule, self).__init__()
self.v = tf.Variable(1.)
@tf.function
def __call__(self, x):
print('Tracing with', x)
return x * self.v
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def mutate(self, new_v):
self.v.assign(new_v)
module = CustomModule()
Explanation: assets ディレクトリには、語彙テーブルを初期化するためのテキストファイルなど、TensorFlow グラフが使用するファイルが含まれます。この例では使用されません。
SavedModel には、SavedModel で何をするかといった消費者向けの情報など、TensorFlow グラフで使用されないファイルに使用する assets.extra ディレクトリがある場合があります。TensorFlow そのものでは、このディレクトリは使用されません。
カスタムモデルの保存
tf.saved_model.save は、tf.Module オブジェクトと、tf.keras.Layer や tf.keras.Model などのサブクラスの保存をサポートしています。
tf.Module の保存と復元の例を見てみましょう。
End of explanation
module_no_signatures_path = os.path.join(tmpdir, 'module_no_signatures')
module(tf.constant(0.))
print('Saving model...')
tf.saved_model.save(module, module_no_signatures_path)
Explanation: When you save a tf.Module, any tf.Variable attributes, tf.function-decorated methods, and tf.Modules found via recursive traversal are saved (see the checkpoint tutorial for more about this recursive traversal). However, any Python attributes, functions, and data are lost. This means that when a tf.function is saved, no Python code is saved.
If no Python code is saved, how does the SavedModel know how to restore the function?
Briefly, tf.function works by tracing the Python code to generate a ConcreteFunction (a callable wrapper around tf.Graph). When saving a tf.function, you're really saving the tf.function's cache of ConcreteFunctions.
To learn more about the relationship between tf.function and ConcreteFunctions, see the tf.function guide.
End of explanation
imported = tf.saved_model.load(module_no_signatures_path)
assert imported(tf.constant(3.)).numpy() == 3
imported.mutate(tf.constant(2.))
assert imported(tf.constant(3.)).numpy() == 6
Explanation: Loading and using a custom model
When you load a SavedModel in Python, all tf.Variable attributes, tf.function-decorated methods, and tf.Modules are restored in the same object structure as the original saved tf.Module.
End of explanation
optimizer = tf.optimizers.SGD(0.05)
def train_step():
with tf.GradientTape() as tape:
loss = (10. - imported(tf.constant(2.))) ** 2
variables = tape.watched_variables()
grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(grads, variables))
return loss
for _ in range(10):
# "v" approaches 5, "loss" approaches 0
print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy()))
Explanation: Because no Python code is saved, calling a tf.function with a new input signature will fail:
python
imported(tf.constant([3.]))
<pre>ValueError: Could not find matching function to call for canonicalized inputs ((<tf.Tensor 'args_0:0' shape=(1,) dtype=float32>,), {}). Only existing signatures are [((TensorSpec(shape=(), dtype=tf.float32, name=u'x'),), {})].
</pre>
Basic fine-tuning
Variable objects are available, and you can backprop through imported functions. That is enough to fine-tune (i.e. retrain) a SavedModel in simple cases.
End of explanation
loaded = tf.saved_model.load(mobilenet_save_path)
print("MobileNet has {} trainable variables: {}, ...".format(
len(loaded.trainable_variables),
", ".join([v.name for v in loaded.trainable_variables[:5]])))
trainable_variable_ids = {id(v) for v in loaded.trainable_variables}
non_trainable_variables = [v for v in loaded.variables
if id(v) not in trainable_variable_ids]
print("MobileNet also has {} non-trainable variables: {}, ...".format(
len(non_trainable_variables),
", ".join([v.name for v in non_trainable_variables[:3]])))
Explanation: General fine-tuning
A SavedModel from Keras provides more details than a plain __call__ to address more advanced cases of fine-tuning. TensorFlow Hub recommends providing the following of those, if applicable, in SavedModels shared for the purpose of fine-tuning:
If the model uses dropout or another technique in which the forward pass differs between training and inference (like batch normalization), the __call__ method takes an optional, Python-valued training= argument that defaults to False but can be set to True.
Next to the __call__ attribute, there are .variable and .trainable_variable attributes with the corresponding lists of variables. A variable that was originally trainable but is meant to be frozen during fine-tuning is omitted from .trainable_variables.
For the sake of frameworks like Keras that represent weight regularizers as attributes of layers or sub-models, there can also be a .regularization_losses attribute. It holds a list of zero-argument functions whose values are meant for addition to the total loss.
Going back to the initial MobileNet example, you can see some of those in action:
End of explanation
assert len(imported.signatures) == 0
Explanation: Specifying signatures during export
Tools like TensorFlow Serving and saved_model_cli can interact with SavedModels. To help these tools determine which ConcreteFunctions to use, you need to specify serving signatures. tf.keras.Model specifies serving signatures automatically, but you have to declare a serving signature explicitly for custom modules.
Important: Unless you need to export your model to an environment other than TensorFlow 2.x with Python, you probably don't need to export signatures explicitly. If you're looking for a way of enforcing an input signature for a specific function, see the input_signature argument to tf.function.
By default, signatures aren't declared in a custom tf.Module.
End of explanation
module_with_signature_path = os.path.join(tmpdir, 'module_with_signature')
call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
tf.saved_model.save(module, module_with_signature_path, signatures=call)
imported_with_signatures = tf.saved_model.load(module_with_signature_path)
list(imported_with_signatures.signatures.keys())
Explanation: To declare a serving signature, specify a ConcreteFunction using the signatures kwarg. When specifying a single signature, its signature key will be 'serving_default', which is saved as the constant tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY.
End of explanation
module_multiple_signatures_path = os.path.join(tmpdir, 'module_with_multiple_signatures')
signatures = {"serving_default": call,
"array_input": module.__call__.get_concrete_function(tf.TensorSpec([None], tf.float32))}
tf.saved_model.save(module, module_multiple_signatures_path, signatures=signatures)
imported_with_multiple_signatures = tf.saved_model.load(module_multiple_signatures_path)
list(imported_with_multiple_signatures.signatures.keys())
Explanation: To export multiple signatures, pass a dictionary of signature keys to ConcreteFunctions. Each signature key corresponds to one ConcreteFunction.
End of explanation
class CustomModuleWithOutputName(tf.Module):
def __init__(self):
super(CustomModuleWithOutputName, self).__init__()
self.v = tf.Variable(1.)
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def __call__(self, x):
return {'custom_output_name': x * self.v}
module_output = CustomModuleWithOutputName()
call_output = module_output.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
module_output_path = os.path.join(tmpdir, 'module_with_output_name')
tf.saved_model.save(module_output, module_output_path,
signatures={'serving_default': call_output})
imported_with_output_name = tf.saved_model.load(module_output_path)
imported_with_output_name.signatures['serving_default'].structured_outputs
Explanation: By default, the output tensor names are fairly generic, like output_0. To control the names of outputs, modify your tf.function to return a dictionary that maps output names to outputs. The names of inputs are derived from the Python function argument names.
End of explanation |
6,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Short Introduction to netCDF
What is netCDF?
"NetCDF is an abstraction that supports a view of data as a collection of self-describing, portable objects that can be accessed through a simple interface. Array values may be accessed directly, without knowing details of how the data are stored. Auxiliary information about the data, such as what units are used, may be stored with the data. Generic utilities and application programs can access netCDF datasets and transform, combine, analyze, or display specified fields of the data. The development of such applications has led to improved accessibility of data and improved re-usability of software for array-oriented data management, analysis, and display." from http
Step1: Getting to know your data
Step2: Using the meta-data in a program
Step3: Reading in the data
time
latitude
longitude
Sea surface temperature
netCDF variables are used to read and write array data. They are similar to numpy arrays
Step4: Computing on the data
Step5: Plotting a contour plot
Step6: Adding a color bar and label | Python Code:
from netCDF4 import Dataset
import numpy as np
import numpy.ma as ma
filename = "tos_O1_2001-2002.nc"
ds = Dataset(filename, mode="r")
Explanation: A Short Introduction to netCDF
What is netCDF?
"NetCDF is an abstraction that supports a view of data as a collection of self-describing, portable objects that can be accessed through a simple interface. Array values may be accessed directly, without knowing details of how the data are stored. Auxiliary information about the data, such as what units are used, may be stored with the data. Generic utilities and application programs can access netCDF datasets and transform, combine, analyze, or display specified fields of the data. The development of such applications has led to improved accessibility of data and improved re-usability of software for array-oriented data management, analysis, and display." from http://www.unidata.ucar.edu/software/netcdf/docs/netcdf_introduction.html
Data Hierarchy
netCDF files are arranged in an organized 'Dataset' hierarchy consisting of groups, dimensions, variables and attributes. For today's example we will ignore groups, but think of groups as sub-containers which hold their own dimensions, variables and attributes. Dimensions, variables and attributes will be discussed below, but a general layout of a possible netCDF dataset may look like:
<img src="netCDF_Heirarchy.png">
netCDF Files Operations
The first thing we need to do is create a new netCDF 'Dataset' by opening a netCDF file for reading from or writing to.
We will use three parameters to create our Dataset file: file name, mode, and format (e.g. Dataset('filename', mode, format)); a short write example follows the lists below.
mode
'w' will write a new file. Note it will overwrite any file with the same file name (this can be prevented by setting the 'clobber' parameter to false)
'a' appends data to a file
'r' reads data from a file (default setting)
format
'NETCDF3_CLASSIC',
'NETCDF3_64BIT',
'NETCDF4_CLASSIC' uses HDF5 for the underlying storage layer but enforces the classic netCDF 3 data model.
'NETCDF4' also uses HDF5 but uses the new netCDF4 model (default setting)
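For reference, writing a file with mode 'w' looks roughly like this (a sketch; the file and variable names below are invented for illustration):
from netCDF4 import Dataset
import numpy as np

ds_out = Dataset("example_output.nc", mode="w", format="NETCDF4")
ds_out.createDimension("time", None)                       # unlimited dimension
temp_var = ds_out.createVariable("temperature", "f4", ("time",))
temp_var.units = "K"                                       # attributes are plain assignments
temp_var[:] = np.array([280.1, 281.3, 279.8], dtype="f4")  # write the data
ds_out.close()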
Goal
What we want to do is open one netCDF Dataset, grab the data of interest, and plot it with matplotlib.
End of explanation
print(ds)
Explanation: Getting to know your data
End of explanation
md = ds.__dict__
for k in md:
print("{0}: {1}".format(k, md[k]))
print(ds.dimensions)
print(ds.variables['tos'])
Explanation: Using the meta-data in a program
End of explanation
time = ds.variables['time']
print(time)
print(time[:])
lats = ds.variables['lat'][:]
lons = ds.variables['lon'][:]
tos = ds.variables["tos"][:,:,:]
print(tos[0,:,:])
Explanation: Reading in the data
time
latitude
longitude
Sea surface temperature
netCDF variables are used to read and write array data. They are similar to numpy arrays.
End of explanation
print ('time from {0} to {1}'.format(time[0], time[-1]))
tos_min = ma.min(tos)
tos_max = ma.max(tos)
tos_avg = ma.average(tos)
tos_med = ma.median(tos)[0]
print("Sea surface temperature: min={0}, max={1}".format(tos_min, tos_max))
print("Sea surface temperature: average={0}, median={1}".format(tos_avg, tos_med))
Explanation: Computing on the data
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
cp = plt.contour(tos[12,:,:], 100)
Explanation: Plotting a contour plot
End of explanation
cp = plt.contour(tos[12,:,:], 100)
cbar = plt.colorbar(cp)
cbar.set_label("Temperature [K]")
plt.title("Sea surface temperature")
from mpl_toolkits.basemap import Basemap
fig=plt.figure(figsize=(16,16))
# Create the map
m = Basemap(llcrnrlon=np.min(lons),llcrnrlat=np.min(lats),\
urcrnrlon=np.max(lons),urcrnrlat=np.max(lats),\
projection='merc',resolution='l')
m.drawcoastlines(linewidth=2)
m.fillcontinents(color='gray')
m.drawcountries(linewidth=1)
plons, plats = np.meshgrid(lons, lats)
x, y = m(plons, plats)
cp = m.pcolor(x,y,tos[12,:,:])
cbar = plt.colorbar(cp)
cbar.set_label("Temperature [K]")
plt.title("Sea surface temperature")
plt.show()
Explanation: Adding a color bar and label
End of explanation |
6,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Freedom exploration
Data exploration
Step1: <a id='data_exploration'></a>
Data exploration
Step2: We are dealing with many NaN values, but it's not clear how to treat them all.
I will take the NaNs into account afterwards.
<a id='deaths_per_area'></a>
Deaths per reporting area (all ages) | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as plt
import seaborn as sns
filename='TABLE_III._Deaths_in_122_U.S._cities.csv'
df = pd.read_csv(filename)
df = df[:1000]
Explanation: Freedom exploration
Data exploration
End of explanation
df.describe()
Explanation: <a id='data_exploration'></a>
Data exploration
End of explanation
# Get a subsample of the dataset with deaths for all ages
# Do the report only for 2016 for simplicity
df_2016 = df[df['MMWR YEAR']==2016]
death_per_area = df_2016[['Reporting Area','All causes, by age (years), All Ages**']]
death_per_area.head()
death_per_area.columns=['Reporting Area','Deaths']
# 2. Drop NaNs:
print(len(death_per_area))
death_per_area = death_per_area.dropna()
print(len(death_per_area))
# sort them first in ascending order, then keep the first 10 areas
death_per_area = death_per_area.sort_values('Deaths')
death_per_area = death_per_area[:10]
death_per_area.head(20)
#This plot is too time consuming
# Initialize the matplotlib figure
#f, ax = plt.pyplot.subplots(figsize=(15, 6))
#Set context, increase font size
sns.set_context("poster", font_scale=1.5)
#Create a figure
plt.pyplot.figure(figsize=(15, 4))
#Define the axis object
ax = sns.barplot(x='Reporting Area', y='Deaths', data=death_per_area, palette="Blues_d")
#set parameters
ax.set(xlabel='Reporting Area', ylabel='Number of deaths', title= "Deaths per area")
plt.pyplot.xticks(rotation=45)
#show the plot
sns.plt.show()
df.mean()
means=df.mean().values[3:8]
categories=['>=65','45-64','25-44','1-24','LT-1']
categories_ids=[1,2,3,4,5]
means
# Initialize the matplotlib figure
#f, ax = plt.pyplot.subplots(figsize=(15, 6))
#Set context, increase font size
sns.set_context("poster", font_scale=1.5)
#Create a figure
plt.pyplot.figure(figsize=(15, 4))
#Define the axis object
ax = sns.barplot(x=categories, y=means, palette="Blues_d")
#set parameters
ax.set(xlabel='Age category', ylabel='Deaths mean', title= "Deaths per age category")
#show the plot
sns.plt.show()
Explanation: We are dealing with many NaN values, but it's not clear how to treat them all.
I will take the NaNs into account afterwards.
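One quick way to quantify the missing values before deciding how to treat them is a per-column NaN count (a sketch using the same df loaded above):
# Count NaNs per column to see which fields are affected
df.isnull().sum()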
<a id='deaths_per_area'></a>
Deaths per reporting area (all ages)
End of explanation |
6,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Euler
Step1: Add the products of all 3-digit numbers to the list
Step2: Create empty list of palindromes
Step3: Check if number is a palindrome, return list of palindromes
Step4: Print maximum from list of palindromes
Step5: Success! | Python Code:
products = []
Explanation: Project Euler: Problem 4
https://projecteuler.net/problem=4
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Create empty list of products
End of explanation
for i in range(100, 1000):
for n in range(100, 1000):
products.append(i * n)
Explanation: Add the products of all 3-digit numbers to the list
End of explanation
pals = []
Explanation: Create empty list of palindromes
End of explanation
def check_for_palindromes(products):
for number in products:
if number == int(str(number)[::-1]):
pals.append(number)
return(pals)
Explanation: Check if number is a palindrome, return list of palindromes
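As an aside, the same maximum can be found in a single pass with a generator expression (an alternative sketch, not part of the original solution):
# Compare each product with its reversed string representation
max(p for p in products if str(p) == str(p)[::-1])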
End of explanation
print(max(check_for_palindromes(products)))
Explanation: Print maximum from list of palindromes
End of explanation
# This cell will be used for grading, leave it at the end of the notebook.
Explanation: Success!
End of explanation |
6,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Important prerequisite
The package doesn't work with the newest version of Jupyter Notebook, run the following commands in your terminal before initiating the Notebook
pip install notebook==5.7.5
pip install tornado==4.5.3
Install dependencies
Don't worry about the error messages during installation, you will be fine.
Step1: List all semantic types used in BioThings Explorer
Step2: List all identifier types used in BioThings Explorer
Step3: List all predicates used in BioThings Explorer
Step4: List all associations between semantic types in BioThings Explorer
Step5: Filter all edges with "Gene" as the subject
Step6: Filter all edges with "ChemicalSubstance" as the object
Step7: Filter all edges connecting from "Gene" to "ChemicalSubstance"
Step8: Filter all edges representing "Gene" -> "targetedBy" -> "ChemicalSubstance" | Python Code:
# uncomment the following line if you haven't installed bte_schema
# !pip install git+https://github.com/kevinxin90/bte_schema#egg=bte_schema
# uncomment the following line if you haven't installed biothings_schema
# !pip install git+https://github.com/biothings/biothings_schema.py#egg=biothings_schema.py
# import metadata module from biothings_explorer
from biothings_explorer.metadata import Metadata
# initialize Metadata module
metadata = Metadata()
Explanation: Important prerequisite
The package doesn't work with the newest version of Jupyter Notebook; run the following commands in your terminal before starting the Notebook:
pip install notebook==5.7.5
pip install tornado==4.5.3
Install dependencies
Don't worry about the error messages during installation, you will be fine.
End of explanation
metadata.list_all_semantic_types()
Explanation: List all semantic types used in BioThings Explorer
End of explanation
metadata.list_all_id_types()
Explanation: List all identifier types used in BioThings Explorer
End of explanation
metadata.list_all_predicates()
Explanation: List all predicates used in BioThings Explorer
End of explanation
metadata.list_all_associations()
Explanation: List all associations between semantic types in BioThings Explorer
End of explanation
metadata.registry.filter_edges(input_cls="Gene")
Explanation: Filter all edges with "Gene" as the subject
End of explanation
metadata.registry.filter_edges(output_cls="ChemicalSubstance")
Explanation: Filter all edges with "ChemicalSubstance" as the object
End of explanation
metadata.registry.filter_edges(input_cls="Gene", output_cls="ChemicalSubstance")
Explanation: Filter all edges connecting from "Gene" to "ChemicalSubstance"
End of explanation
metadata.registry.filter_edges(input_cls="Gene", output_cls="ChemicalSubstance", edge_label="bts:targetedBy")
Explanation: Filter all edges representing "Gene" -> "targetedBy" -> "ChemicalSubstance"
End of explanation |
6,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Resonant Sequences
(from http
Step3: Generating resonance sequences is fast
Try it!
Note
Step4: ..., but plotting can be slow for large N (N > 10)
Try it, but be patient ... (lots of lines to plot) | Python Code:
!date
%matplotlib inline
from __future__ import division
import math
def fareySequence(N, k=1):
"""Generate Farey sequence of order N, less than 1/k"""
# assert type(N) == int, "Order (N) must be an integer"
a, b = 0, 1
c, d = 1, N
seq = [(a,b)]
while c/d <= 1/k:
seq.append((c,d))
tmp = int(math.floor((N+b)/d))
a, b, c, d = c, d, tmp*c-a, tmp*d-b
return seq
def resonanceSequence(N, k):
"""Compute resonance sequence
Arguments:
- N (int): Order
- k (int): denominator of the farey frequency resonances are attached to
"""
a, b = 0, 1
c, d = k, N-k
seq = [(a,b)]
while d >= 0:
seq.append((c,d))
tmp = int(math.floor((N+b+a)/(d+c)))
a, b, c, d = c, d, tmp*c-a, tmp*d-b
return seq
def plotResonanceDiagram(N, figsize=(10,10)):
import matplotlib.pyplot as plt
ALPHA = 0.5/N
plt.figure(figsize=figsize)
ticks = set([])
for h, k in fareySequence(N, 1):
ticks.add((h,k))
for a, b in resonanceSequence(N, k):
if b == 0:
x = np.array([h/k, h/k])
y = np.array([0, 1])
elif a== 0:
x = np.array([0, 1])
y = np.array([h/k, h/k])
else:
m = a/b
cp, cm = m*h/k, -m*h/k
x = np.array([0, h/k, 1])
y = np.array([cp, 0, cm+m])
plt.plot( x, y, 'b', alpha=ALPHA) # seqs. attached to horizontal axis
plt.plot( y, x, 'b', alpha=ALPHA) # seqs. attached to vertical axis
# also draw symetrical lines, to be fair (otherwise lines in the
# lower left traingle will be duplicated, but no the others)
plt.plot( x, 1-y, 'b', alpha=ALPHA)
plt.plot(1-y, x, 'b', alpha=ALPHA)
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.xticks([h/k for h,k in ticks], [r"$\frac{{{:d}}}{{{:d}}}$".format(h,k) for h,k in ticks], fontsize=15)
plt.yticks([h/k for h,k in ticks], [r"${:d}/{:d}$".format(h,k) for h,k in ticks])
plt.title("N = {:d}".format(N))
Explanation: Resonant Sequences
(from http://journals.aps.org/prstab/abstract/10.1103/PhysRevSTAB.17.014001)
End of explanation
N = 5
for k in set([k for _,k in fareySequence(N, 1)]):
print "N={}, k={}:".format(N, k)
print "\t", resonanceSequence(N, k)
Explanation: Generating resonance sequences is fast
Try it!
Note: in the original paper there was a minor mistake. Eq. (8) read
$$
\Bigg( \Big\lfloor \frac{N+b+a}{d} \Big\rfloor c - a , \Big\lfloor \frac{N+b+a}{d} \Big\rfloor d - b \Bigg)
$$
but it should read
$$
\Bigg( \Big\lfloor \frac{N+b+a}{d+c} \Big\rfloor c - a , \Big\lfloor \frac{N+b+a}{d+c} \Big\rfloor d - b \Bigg)
$$
I've contacted Rogelio Tomás (the author) and he agreed with the correction (an erratum will be sent to the publication).
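As a quick sanity check (assuming the textbook definition of the Farey sequence), the companion fareySequence helper can be compared against the known F5:
# F5 = 0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1
assert fareySequence(5) == [(0, 1), (1, 5), (1, 4), (1, 3), (2, 5), (1, 2),
                            (3, 5), (2, 3), (3, 4), (4, 5), (1, 1)]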
End of explanation
# from matplotlib2tikz import save as save_tikz
import numpy as np
import matplotlib.pyplot as plt
plotResonanceDiagram(10, figsize=(12,12))
# save_tikz('resonanceDiagram_N7.tikz')
def plotSolution(a,b,c,color='r'):
x = [c/a, 0, (c-b)/a, 1]
y = [0, c/b, 1, (c-a)/b]
plt.plot(x, y, color=color, alpha=0.5, linewidth=4)
# plot some example solutions
if True:
# solutions for (x,y) = (0.5, 1)
plotSolution( 4, -1, 1)
plotSolution(-2, 2, 1)
plotSolution(-2, 3, 2)
plotSolution( 4, 1, 3)
plotSolution( 2, 1, 2)
# solutions for (x,y) = (0.5, 0.5)
plotSolution( 3, -1, 1, 'g')
plotSolution(-1, 3, 1, 'g')
plotSolution( 3, 1, 2, 'g')
plotSolution( 1, 3, 2, 'g')
plotSolution( 1, 1, 1, 'g')
plt.show()
Explanation: ..., but plotting can be slow for large N (N > 10)
Try it, but be patient ... (lots of lines to plot)
End of explanation |
6,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Element
Import modules/packages
Step1: Model Machine
Step2: Get Element by type
Step3: Example
Step4: Investigate the equad
Step5: Dynamic field
Step6: Get values
If only the readback value is of interest, Approach 1 is recommended and most natural.
Step7: Set values
Always Approach 1 is recommended. | Python Code:
import phantasy
Explanation: Element
Import modules/packages
End of explanation
mp = phantasy.MachinePortal(machine='FRIB_FE', segment='LEBT')
Explanation: Model Machine
End of explanation
mp.get_all_types()
Explanation: Get Element by type
End of explanation
equads = mp.get_elements(type='EQUAD')
equads
# first equad
equad0 = equads[0]
equad0
Explanation: Example: Electrostatic Quadrupole
Get the first equad
End of explanation
print("Index : %d" % equad0.index)
print("Name : %s" % equad0.name)
print("Family : %s" % equad0.family)
print("Location : (begin) %f (end) %f" % (equad0.sb, equad0.se))
print("Length : %f" % equad0.length)
print("Groups : %s" % equad0.group)
print("PVs : %s" % equad0.pv())
print("Tags : %s" % equad0.tags)
print("Fields : %s" % equad0.fields)
Explanation: Investigate the equad
End of explanation
equad0.V
Explanation: Dynamic field: V
All available dynamic fields could be retrieved by equad0.fields (for equad0 here, there is only one field, i.e. V).
End of explanation
# Approach 1: dynamic field feature (readback PV)
print("Readback: %f" % equad0.V)
# Approach 2: caget(pv_name)
pv_rdbk = equad0.pv(field='V', handle='readback')
print("Readback: %s" % phantasy.caget(pv_rdbk))
# Approach 3: CaField
v_field = equad0.get_field('V')
print("Readback: %f" % v_field.get(handle='readback'))
print("Setpoint: %f" % v_field.get(handle='setpoint'))
print("Readset : %f" % v_field.get(handle='readset'))
Explanation: Get values
If only the readback value is of interest, Approach 1 is recommended and most natural.
End of explanation
# Save original set value for 'V' field
v0 = equad0.get_field('V').get(handle='setpoint')
# Approach 1: dynamic field feature (setpoint PV)
equad0.V = 2000
# Approach 2: caput(pv_name)
pv_cset = equad0.pv(field='V', handle='setpoint')
phantasy.caput(pv_cset, 1000)
# Approach 3: CaField
v_field = equad0.get_field('V')
v_field.set(handle='setpoint', value=1500)
Explanation: Set values
Always Approach 1 is recommended.
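Since the original setpoint was stashed in v0 above, it can be restored the same way once you are done experimenting (a sketch using the same dynamic field interface):
# Restore the saved setpoint through the dynamic field interface
equad0.V = v0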
End of explanation |
6,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Feature Engineering
Learning Objectives
* Improve the accuracy of a model by using feature engineering
* Understand there are two places to do feature engineering in Tensorflow
1. Using the tf.feature_column module
2. In the input functions
Introduction
Up until now we've been focusing on Tensorflow mechanics to make sure our code works, but we have neglected model performance, which at this point is 9.26 RMSE.
In this notebook we'll attempt to improve on that using feature engineering.
Step1: Load raw data
These are the same files created in the create_datasets.ipynb notebook
Step2: Train and Evaluate input functions
These are the same as before with one additional line of code
Step3: Feature Engineering
Step4: Feature Engineering
Step5: Gather list of feature columns
Ultimately our estimator expects a list of feature columns, so let's gather all our engineered features into a single list.
We cannot pass categorical or crossed feature columns directly into a DNN, Tensorflow will give us an error. We must first wrap them using either indicator_column() or embedding_column(). The former will pass through the one-hot encoded representation as is, the latter will embed the feature into a dense representation of specified dimensionality (the 4th root of the number of categories is a good starting point for number of dimensions). Read more about indicator and embedding columns here.
Step6: Serving Input Receiver function
Same as before except the received tensors are wrapped with add_engineered_features().
Step7: Train and Evaluate (500 train steps)
The same as before, we'll train the model for 500 steps (sidenote
Step8: Results
Our RMSE is now 5.94, our first significant improvement! If we look at the RMSE trend in TensorBoard it appears the model is still learning, so training past 500 steps would likely lower the RMSE even more. Let's run again, this time for 10x as many steps.
Train and Evaluate (5,000 train steps)
Now, just as above, we'll execute a longer training job with 5,000 train steps using our engineered features and assess the performance. | Python Code:
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
Explanation: Introduction to Feature Engineering
Learning Objectives
* Improve the accuracy of a model by using feature engineering
* Understand there are two places to do feature engineering in Tensorflow
1. Using the tf.feature_column module
2. In the input functions
Introduction
Up until now we've been focusing on Tensorflow mechanics to make sure our code works, but we have neglected model performance, which at this point is 9.26 RMSE.
In this notebook we'll attempt to improve on that using feature engineering.
End of explanation
!gsutil cp gs://cloud-training-demos/taxifare/small/*.csv .
!ls -l *.csv
Explanation: Load raw data
These are the same files created in the create_datasets.ipynb notebook
End of explanation
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]]
def read_dataset(csv_path):
def _parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = _parse_row)
return dataset
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features,label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
Explanation: Train and Evaluate input functions
These are the same as before with one additional line of code: a call to add_engineered_features() from within the _parse_row() function.
End of explanation
# 1. One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# 2. Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
# 3. Cross features to get combination of day and hour
fc_crossed_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
Explanation: Feature Engineering: feature columns
There are two places in Tensorflow where we can do feature engineering. The first is using the tf.feature_column package. This allows us easily
bucketize continuous features
one hot encode categorical features
create feature crosses
For details on the possible tf.feature_column transformations and when to use each see the official guide.
Let's use tf.feature_column to create a feature that shows the combination of day of week and hour of day. This will allow our model to easily learn the difference between say Wednesday at 5pm (rush hour, expect higher fares) and Sunday at 5pm (light traffic, expect lower fares).
Let's also use it to bucketize our latitudes and longitudes because treating them as continuous numbers is misleading to the model.
Exercise 1
In the code cell below you are asked to create some additional crossed feature columns for the model. For fc_crossed_dloc create a crossed column using the bucketized dropoff latitude and dropoff longitude. For fc_crossed_ploc create a crossed column using the pickup latitude and pickup longitude. For fc_crossed_pd_pair create a crossed feature column using the pickup location and the dropoff location.
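One possible completion is sketched below (the hash bucket sizes are arbitrary choices, not prescribed by the exercise, and note that crossing the two location crosses for fc_crossed_pd_pair is just one way to express "pickup location x dropoff location"):
fc_crossed_dloc = tf.feature_column.crossed_column(keys = [fc_bucketized_dlat, fc_bucketized_dlon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_ploc = tf.feature_column.crossed_column(keys = [fc_bucketized_plat, fc_bucketized_plon], hash_bucket_size = NBUCKETS * NBUCKETS)
fc_crossed_pd_pair = tf.feature_column.crossed_column(keys = [fc_crossed_ploc, fc_crossed_dloc], hash_bucket_size = NBUCKETS ** 4)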
End of explanation
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)
return features
Explanation: Feature Engineering: input functions
While feature columns are very powerful, what happens when we want to do something that there isn't a feature column for?
Recall the input functions receive csv data, format it, then pass it batch by batch to the model. We can also use input functions to inject arbitrary tensorflow code to manipulate the data.
However, we need to be careful that any transformations we do in one input function, we do for all, otherwise we'll have training-serving skew.
To guard against this we encapsulate all input function feature engineering in a single function, add_engineered_features(), and call this function from every input function.
Let's calculate the euclidean distance between the pickup and dropoff points and feed that as a new feature to our model.
Also it may be useful to know which cardinal direction that distance is in. I suspect that distance is cheaper to travel North/South because in Manhattan streets that run North/South have fewer stops than streets that run East/West.
End of explanation
feature_cols = [
#1. Engineered using tf.feature_column module
tf.feature_column.indicator_column(categorical_column = fc_crossed_day_hr),
fc_bucketized_plat,
fc_bucketized_plon,
fc_bucketized_dlat,
fc_bucketized_dlon,
#2. Engineered in input functions
tf.feature_column.numeric_column(key = "latdiff"),
tf.feature_column.numeric_column(key = "londiff"),
tf.feature_column.numeric_column(key = "euclidean_dist")
]
Explanation: Gather list of feature columns
Ultimately our estimator expects a list of feature columns, so let's gather all our engineered features into a single list.
We cannot pass categorical or crossed feature columns directly into a DNN; Tensorflow will give us an error. We must first wrap them using either indicator_column() or embedding_column(). The former will pass through the one-hot encoded representation as is, the latter will embed the feature into a dense representation of specified dimensionality (the 4th root of the number of categories is a good starting point for number of dimensions). Read more about indicator and embedding columns here.
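If a dense representation were preferred over one-hot, the crossed column could instead be wrapped like this (a sketch; dimension = 4 roughly follows the fourth-root rule for the 24 * 7 = 168 categories):
fc_embedded_day_hr = tf.feature_column.embedding_column(categorical_column = fc_crossed_day_hr, dimension = 4)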
End of explanation
def serving_input_receiver_fn():
receiver_tensors = {
'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests
'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]),
'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]),
'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]),
'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = add_engineered_features(receiver_tensors) # 'features' is what is passed on to the model
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors)
Explanation: Serving Input Receiver function
Same as before except the received tensors are wrapped with add_engineered_features().
End of explanation
%%time
OUTDIR = "taxi_trained_dnn/500"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
model = tf.estimator.DNNRegressor(
hidden_units = [10,10], # specify neural architecture
feature_columns = feature_cols,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn("./taxi-train.csv"),
max_steps = 500)
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
Explanation: Train and Evaluate (500 train steps)
The same as before, we'll train the model for 500 steps (sidenote: how many epochs do 500 train steps represent?). Let's see how the engineered features we've added affect the performance.
End of explanation
%%time
OUTDIR = "taxi_trained_dnn/5000"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
model = tf.estimator.DNNRegressor(
hidden_units = [10,10], # specify neural architecture
feature_columns = feature_cols,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
# Add custom evaluation metric
def my_rmse(labels, predictions):
pred_values = tf.squeeze(input = predictions["predictions"], axis = -1)
return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)}
model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn("./taxi-train.csv"),
max_steps = 5000)
exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training
# Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn("./taxi-valid.csv"),
steps = None,
start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec)
Explanation: Results
Our RMSE is now 5.94, our first significant improvement! If we look at the RMSE trend in TensorBoard it appears the model is still learning, so training past 500 steps would likely lower the RMSE even more. Let's run again, this time for 10x as many steps.
Train and Evaluate (5,000 train steps)
Now, just as above, we'll execute a longer training job with 5,000 train steps using our engineered features and assess the performance.
End of explanation |
6,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time to add SciPy to our toolbox
In this session you will learn some of the functions available in the SciPy package.
What is SciPy?
A general-purpose scientific library (consisting of a bunch of sublibraries) that builds on NumPy arrays. It consists of several submodules
Step1: Read data from a binary file
Define a file name
Step2: Now use numpy.fromfile() function to read data.
Side note
Step3: Aside
Just as in Fortran, you can use direct access way of reading unformatted binary files with a given length of records. For example
Step4: You can now quickly plot the data, for example using matplotlib.pyplot.scatter function. Otherwise, skip the next cell.
Step5: Subset the data
The next step is to grid the observations onto a rectangular grid. Due to data sparsity over some regions, you can create a smaller sample first, e.g. just over Europe. You can loosely define the Europe region within 10W and 60E longitude and within 35N and 70N latitude. Then you can create a "condition" to subset the original data.
Step6: Check that subset works.
Step7: Define a rectangular grid with constant step
Step8: For the interpolation you can use scipy.interpolate.griddata function.
Read its docstring and complete the next code cell with the correct arguments.
Which interpolation method would you use?
Don't forget to use sub_idx! (Hint
Step9: How about other methods?
Try nearest or cubic
Step10: Plot results
Enough boring interpolation, time to plot what you got so far.
Create a grid of 4 subplots | Python Code:
# import os
# import numpy as np
# import matplotlib.pyplot as plt
# from scipy import interpolate # import submodule for interpolation and regridding
## make figures appear within the notebook
# %matplotlib inline
Explanation: Time to add SciPy to our toolbox
In this session you will learn some of the functions available in the SciPy package.
What is SciPy?
A general-purpose scientific library (consisting of a bunch of sublibraries) that builds on NumPy arrays. It consists of several submodules:
Special functions (scipy.special)
Integration (scipy.integrate)
Optimization (scipy.optimize)
Interpolation (scipy.interpolate)
Fourier Transforms (scipy.fftpack)
Signal Processing (scipy.signal)
Linear Algebra (scipy.linalg)
Sparse Eigenvalue Problems (scipy.sparse)
Statistics (scipy.stats)
Multi-dimensional image processing (scipy.ndimage)
File IO (scipy.io)
See full documentation.
Exercise 1
In the first exercise you are blending numpy, scipy and matplotlib together.
Task
Read synoptic observations from a binary file (numpy)
Interpolate them on a regular grid over Europe (scipy)
Plot the result (matplotlib)
Load all the necessary modules
End of explanation
## code is ready here, just uncomment
#
# file_name = os.path.join(os.path.pardir, 'data',
# 'surface_synoptic_20161109_0000_lon_lat_temperature_17097.bin')
Explanation: Read data from a binary file
Define a file name:
End of explanation
# nrec = 17097
# bin_data = np.fromfile( ..., dtype=...)
Explanation: Now use numpy.fromfile() function to read data.
Side note: If it was an unformatted sequential binary file from Fortran code, you could have used scipy.io.FortranFile.
Important:
* the binary file contains only the data itself
* the data array contains values of longitude, latitude and air temperature
* these data are from 17097 stations
* data are saved in single precision float format
End of explanation
# lons = bin_data[:nrec]
# lats =
# temp =
Explanation: Aside
Just as in Fortran, you can use direct access way of reading unformatted binary files with a given length of records. For example:
python
nx = 123 # points in longitude
ny = 456 # points in latitude
with open(my_file_name, 'rb') as f:
f.seek(4 * ny * nx) # Move nx*ny float32 records forward (skip the first slice)
arr = np.fromfile(f, dtype=np.float32, count=ny*nx) # Read the next slice of data
Check the type and length of bin_data. Is this what you expected?
How will you slice this array into coordinates and data values?
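One possible way to finish the slicing, following the same pattern as the lons line and the stated lon/lat/temperature ordering (a sketch):
lats = bin_data[nrec:2 * nrec]
temp = bin_data[2 * nrec:3 * nrec]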
End of explanation
# plt.scatter(lons, lats, c=temp)
Explanation: You can now quickly plot the data, for example using matplotlib.pyplot.scatter function. Otherwise, skip the next cell.
End of explanation
# sub_idx = (lons > ...) & (lons < 60) & (lats > ...) & (lats < ...) # indices over "Europe"
Explanation: Subset the data
The next step is to grid the observations onto a rectangular grid. Due to data sparsity over some regions, you can create a smaller sample first, e.g. just over Europe. You can loosely define the Europe region within 10W and 60E longitude and within 35N and 70N latitude. Then you can create a "condition" to subset the original data.
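With the bounds given above (10W = -10), the condition could look like this (a sketch):
sub_idx = (lons > -10) & (lons < 60) & (lats > 35) & (lats < 70)  # indices over "Europe"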
End of explanation
# print('Original shape: {}'.format(lons.shape))
# print('Subset shape: {}'.format(lons[sub_idx].shape))
Explanation: Check that subset works.
End of explanation
# lon_step =
# lat_step =
# lon1d = np.arange(-10, 60.1, lon_step)
# lat1d = np.arange(35, 70.1, lat_step)
# lon2d, lat2d = np.meshgrid(lon1d, lat1d)
Explanation: Define a rectangular grid with constant step
End of explanation
# temp_grd_lin = interpolate.griddata((xpoints, ypoints), values, xi, method='linear')
Explanation: For the interpolation you can use scipy.interpolate.griddata function.
Read its docstring and complete the next code cell with the correct arguments.
Which interpolation method would you use?
Don't forget to use sub_idx! (Hint: it should go to xpoints, ypoints and values)
Feel free to explore the scipy.interpolate submodule for other interpolation options.
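A possible call, using linear interpolation and the subset indices (a sketch under the assumption that sub_idx was defined as above):
temp_grd_lin = interpolate.griddata((lons[sub_idx], lats[sub_idx]),
                                    temp[sub_idx],
                                    (lon2d, lat2d),
                                    method='linear')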
End of explanation
# temp_grd_near = interpolate.griddata( ... )
# temp_grd_cub = interpolate.griddata( ... )
Explanation: How about other methods?
Try nearest or cubic
End of explanation
# fig, axs = plt.subplots(figsize=(15, 10), nrows=2, ncols=2) #
# print('2x2 subplot grid: {}'.format(axs.shape))
#
# h = axs[0, 0].scatter(lons[sub_idx], lats[sub_idx], c=temp[sub_idx],
# edgecolors='none', vmin=..., vmax=...)
# fig.colorbar(h, ax=axs[0, 0])
# axs[0, 0].set_title('Observations')
#
#
#
# h = axs[0, 1].pcolormesh(lon2d, lat2d, temp_grd_lin,
# vmin=..., vmax=...)
# fig.colorbar(h, ax=axs[0, 1])
# axs[0, 0].set_title('Linear')
#
#
#
# h = axs[1, 0] ... ( ... temp_grd_near)
#
#
# axs[1, 0].set_title('Nearest')
#
#
#
# axs[1, 1].set_title('Cubic')
Explanation: Plot results
Enough boring interpolation, time to plot what you got so far.
Create a grid of 4 subplots: one for the observations and 3 for the gridded data
Plot observations using scatter() function
Use contourf() or pcolormesh() to plot gridded data
For easier comparison, use the same colour limits in all 4 subplots
Attach a colorbar to each subplot, if you like
Set titles
Tweak/optimize the code as you want
Compare different interpolation methods
End of explanation |
6,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiments collected data
Data required to run this notebook are available for download at this link
Step1: Loading support data collected from the target
Step2: Trace analysis
We want to ensure that the task has the expected workload
Step3: Requires a lot of interactions and hand made measurements
We cannot easily annotate our findings to produre a sharable notebook
Using the TRAPpy Trace Plotter
An overall view on the trace is still useful to get a graps on what we are looking at.
Step4: Events Plotting
The sched_load_avg_task trace events reports this information
Using all the unix arsenal to parse and filter the trace
Step5: A graphical representation whould be really usefuly!
Using TRAPpy generated DataFrames
Generate DataFrames from Trace Events
Step6: Get the DataFrames for the events of interest
Step7: Plot the signals of interest
Step8: Use a set of standard plots
A graphical representation can always be on hand
Step9: Usually a common set of plots can be generated which capture the most useful information realted to a workload we are analysing
Example of task realted signals
Step10: Example of Clusters related singals
Step11: Take-away
In a single plot we can aggregate multiple informations which makes it easy to verify the expected behaviros.
With a set of properly defined plots we are able to condense mucy more sensible information which are easy to ready because they are "standard".<br>
We immediately capture what we are interested to evaluate!
Moreover, all he produced plots are available as high resolution images, ready to be shared and/or used in other reports.
Step12: Behavioral Analysis
Is the task starting on a big core?
We always expect a new task to be allocated on a big core.
To verify this condition we need to know what is the topology of the target.
This information is automatically collected by LISA when the workload is executed.<br>
Thus it can be used to write portable tests conditions.
Create a SchedAssert for the specific topology
Step13: Use the SchedAssert method to investigate properties of this task
Step14: Is the task generating the expected load?
We expect 35% load in the between 2 and 4 [s] of the execution
Identify the start of the first phase
Step15: Use the SchedAssert module to check the task load in that period
Step16: This test fails because we have not considered a scaling factor due running at a lower OPP.
To write a portable test we need to account for that condition!
Take OPP scaling into consideration
Step17: Write a more portable assertion
Step18: Is the task migrated once we exceed the LITTLE CPUs capacity?
Check that the task is switching the cluster once expected
Step19: Check that the task is running most of its time on the LITTLE cluster
Step20: Check that the util estimation is properly computed and CPU capacity matches
Step21: Check that the CPU capacity matches the task boosted value
Step22: Do it the TRAPpy way
Step23: Going further on events processing
What are the relative residency on different OPPs?
We are not limited to the usage of pre-defined functions. We can exploit the full power of PANDAS to process the DataFrames to extract all kind of information we want.
Use PANDAs APIs to filter and aggregate events
Step24: Use MathPlot Lib to generate all kind of plot from collected data
Step25: <br><br><br><br>
Advanced DataFrame usage
Step26: Setup the connection
Step27: Target control
Run custom commands
Step28: Example CPUFreq configuration
Step29: Example of CGruops configuration
Step30: Remote workloads execution
Generate RTApp configurations
Step31: Execution and tracing
Step32: Regression testing support
Writing and running regression tests using the LISA API
Defined configurations to test and workloads
Step33: Write Test Cases | Python Code:
res_dir = '../../results/SchedTuneAnalysis/'
!tree {res_dir}
noboost_trace = res_dir + 'trace_noboost.dat'
boost15_trace = res_dir + 'trace_boost15.dat'
boost25_trace = res_dir + 'trace_boost25.dat'
# trace_file = noboost_trace
trace_file = boost15_trace
# trace_file = boost25_trace
Explanation: Experiments collected data
Data required to run this notebook are available for download at this link:
https://www.dropbox.com/s/q9ulf3pusu0uzss/SchedTuneAnalysis.tar.xz?dl=0
This archive has to be extracted from within the LISA's results folder.
Initial set of data
End of explanation
import json
# Load the platform information
with open('../../results/SchedTuneAnalysis/platform.json', 'r') as fh:
platform = json.load(fh)
print "Platform descriptio collected from the target:"
print json.dumps(platform, indent=4)
from trappy.stats.Topology import Topology
# Create a topology descriptor
topology = Topology(platform['topology'])
Explanation: Loading support data collected from the target
End of explanation
# Let's look at the trace using kernelshark...
!kernelshark {trace_file} 2>/dev/null
Explanation: Trace analysis
We want to ensure that the task has the expected workload:<br>
- LITTLE CPU bandwidth of [10, 35 and 60]% every 2[ms]
- activations every 32ms
- always starts on a big core
Trace inspection
Using kernelshark
End of explanation
# Suport for FTrace events parsing and visualization
import trappy
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(trace_file)#, execnames="task_ramp")#, pids=[2221])
Explanation: Requires a lot of interactions and hand made measurements
We cannot easily annotate our findings to produce a sharable notebook
Using the TRAPpy Trace Plotter
An overall view of the trace is still useful to get a grasp on what we are looking at.
End of explanation
# Get a list of first 5 "sched_load_avg_events" events
sched_load_avg_events = !(\
grep sched_load_avg_task {trace_file.replace('.dat', '.txt')} | \
head -n5 \
)
print "First 5 sched_load_avg events:"
for line in sched_load_avg_events:
print line
Explanation: Events Plotting
The sched_load_avg_task trace events reports this information
Using all the unix arsenal to parse and filter the trace
End of explanation
# Load the LISA::Trace parsing module
from trace import Trace
# Define which event we are interested into
trace = Trace(platform, trace_file, [
"sched_switch",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_boost_cpu",
"sched_boost_task",
"cpu_frequency",
"cpu_capacity",
])
Explanation: A graphical representation whould be really usefuly!
Using TRAPpy generated DataFrames
Generate DataFrames from Trace Events
End of explanation
# Trace events are converted into tables, let's have a look at one
# of such tables
load_df = trace.data_frame.trace_event('sched_load_avg_task')
load_df.head()
df = load_df[load_df.comm.str.match('k.*')]
# df.head()
print df.comm.unique()
cap_df = trace.data_frame.trace_event('cpu_capacity')
cap_df.head()
Explanation: Get the DataFrames for the events of interest
End of explanation
# Signals can be easily plot using the ILinePlotter
trappy.ILinePlot(
# FTrace object
trace.ftrace,
# Signals to be plotted
signals=[
'cpu_capacity:capacity',
'sched_load_avg_task:util_avg'
],
# Generate one plot for each value of the specified column
pivot='cpu',
# Generate only plots which satisfy these filters
filters={
'comm': ['task_ramp'],
'cpu' : [2,5]
},
# Formatting style
per_line=2,
drawstyle='steps-post',
marker = '+',
sync_zoom=True,
group="GroupTag"
).view()
Explanation: Plot the signals of interest
End of explanation
trace = Trace(platform, boost15_trace,
["sched_switch",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_boost_cpu",
"sched_boost_task",
"cpu_frequency",
"cpu_capacity",
],
tasks=['task_ramp'],
plots_prefix='boost15_'
)
Explanation: Use a set of standard plots
A graphical representation can always be on hand
End of explanation
trace.analysis.tasks.plotTasks(
signals=['util_avg', 'boosted_util', 'sched_overutilized', 'residencies'],
)
Explanation: Usually a common set of plots can be generated which capture the most useful information realted to a workload we are analysing
Example of task realted signals
End of explanation
trace.analysis.frequency.plotClusterFrequencies()
Explanation: Example of Clusters related singals
End of explanation
!tree {res_dir}
Explanation: Take-away
In a single plot we can aggregate multiple pieces of information, which makes it easy to verify the expected behaviors.
With a set of properly defined plots we are able to condense much more useful information which is easy to read because it is "standard".<br>
We immediately capture what we are interested to evaluate!
Moreover, all the produced plots are available as high resolution images, ready to be shared and/or used in other reports.
End of explanation
from bart.sched.SchedMultiAssert import SchedAssert
# Create an object to get/assert scheduling pbehaviors
sa = SchedAssert(trace_file, topology, execname='task_ramp')
Explanation: Behavioral Analysis
Is the task starting on a big core?
We always expect a new task to be allocated on a big core.
To verify this condition we need to know what is the topology of the target.
This information is automatically collected by LISA when the workload is executed.<br>
Thus it can be used to write portable test conditions.
Create a SchedAssert for the specific topology
End of explanation
# Check on which CPU the task start its execution
if sa.assertFirstCpu(platform['clusters']['big']):#, window=(4,6)):
print "PASS: Task starts on big CPU: ", sa.getFirstCpu()
else:
print "FAIL: Task does NOT start on a big CPU!!!"
Explanation: Use the SchedAssert method to investigate properties of this task
End of explanation
# Let's find when the task starts
start = sa.getStartTime()
first_phase = (start, start+2)
print "The task starts execution at [s]: ", start
print "Window of interest: ", first_phase
Explanation: Is the task generating the expected load?
We expect 35% load between 2 and 4 [s] of the execution
Identify the start of the first phase
End of explanation
import operator
# Check the task duty cycle in the second step window
if sa.assertDutyCycle(10, operator.lt, window=first_phase):
print "PASS: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase))
else:
print "FAIL: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase))
Explanation: Use the SchedAssert module to check the task load in that period
End of explanation
# Get LITTLEs capacities ranges:
littles = platform['clusters']['little']
little_capacities = cap_df[cap_df.cpu.isin(littles)].capacity
min_cap = little_capacities.min()
max_cap = little_capacities.max()
print "LITTLEs capacities range: ", (min_cap, max_cap)
# Get min OPP correction factor
min_little_scale = 1.0 * min_cap / max_cap
print "LITTLE's min capacity scale: ", min_little_scale
# Scale the target duty-cycle according to the min OPP
target_dutycycle = 10 / min_little_scale
print "Scaled target duty-cycle: ", target_dutycycle
target_dutycycle = 1.01 * target_dutycycle
print "1% tolerance scaled duty-cycle: ", target_dutycycle
Explanation: This test fails because we have not considered a scaling factor due running at a lower OPP.
To write a portable test we need to account for that condition!
Take OPP scaling into consideration
End of explanation
# Add a 1% tolerance to our scaled target dutycycle
if sa.assertDutyCycle(1.01 * target_dutycycle, operator.lt, window=first_phase):
print "PASS: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase) * min_little_scale)
else:
print "FAIL: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase) * min_little_scale)
Explanation: Write a more portable assertion
End of explanation
# Consider a 100 [ms] window for the task to migrate
delta = 0.1
# Defined the window of interest
switch_window=(start+4-delta, start+4+delta)
if sa.assertSwitch("cluster",
platform['clusters']['little'],
platform['clusters']['big'],
window=switch_window):
print "PASS: Task switches to big within: ", switch_window
else:
print "PASS: Task DOES NO switches to big within: ", switch_window
Explanation: Is the task migrated once we exceed the LITTLE CPUs capacity?
Check that the task switches cluster when expected
End of explanation
import operator
if sa.assertResidency("cluster", platform['clusters']['little'], 66, operator.le, percent=True):
print "PASS: Task exectuion on LITTLEs is {:.1f}% (less than 66% of its execution time)".\
format(sa.getResidency("cluster", platform['clusters']['little'], percent=True))
else:
print "FAIL: Task run on LITTLE for MORE than 66% of its execution time"
Explanation: Check that the task is running most of its time on the LITTLE cluster
End of explanation
start = 2
last_phase = (start+4, start+6)
analyzer_config = {
"SCALE" : 1024,
"BOOST" : 15,
}
# Verify that the margin is properly computed for each event:
# margin := (scale - util) * boost
margin_check_statement = "(((SCALE - sched_boost_task:util) * BOOST) // 100) == sched_boost_task:margin"
from bart.common.Analyzer import Analyzer
# Create an Assertion Object
a = Analyzer(trace.ftrace,
analyzer_config,
window=last_phase,
filters={"comm": "task_ramp"})
if a.assertStatement(margin_check_statement):
print "PASS: Margin properly computed in : ", last_phase
else:
print "FAIL: Margin NOT properly computed in : ", last_phase
Explanation: Check that the util estimation is properly computed and CPU capacity matches
End of explanation
# Get the two dataset of interest
df1 = trace.data_frame.trace_event('cpu_capacity')[['cpu', 'capacity']]
df2 = trace.data_frame.trace_event('boost_task_rtapp')[['__cpu', 'boosted_util']]
# Join the information from these two
df3 = df2.join(df1, how='outer')
df3 = df3.fillna(method='ffill')
df3 = df3[df3.__cpu == df3.cpu]
#df3.ix[start+4:start+6,].head()
len(df3[df3.boosted_util >= df3.capacity])
Explanation: Check that the CPU capacity matches the task boosted value
End of explanation
# Create the TRAPpy class
trace.ftrace.add_parsed_event('rtapp_capacity_check', df3)
# Define pivoting value
trace.ftrace.rtapp_capacity_check.pivot = 'cpu'
# Create an Assertion
a = Analyzer(trace.ftrace,
{"CAP" : trace.ftrace.rtapp_capacity_check},
window=(start+4.1, start+6))
a.assertStatement("CAP:capacity >= CAP:boosted_util")
Explanation: Do it the TRAPpy way
End of explanation
import pandas as pd
# Focus on cpu_frequency events for CPU0
df = trace.data_frame.trace_event('cpu_frequency')
df = df[df.cpu == 0]
# Compute the residency on each OPP before switching to the next one
df.loc[:,'start'] = df.index
df.loc[:,'delta'] = (df['start'] - df['start'].shift()).fillna(0).shift(-1)
# Group by frequency and sum-up the deltas
freq_residencies = df.groupby('frequency')['delta'].sum()
print "Residency time per OPP:"
df = pd.DataFrame(freq_residencies)
df.head()
# Compute the relative residency time
tot = sum(freq_residencies)
#df = df.apply(lambda delta : 100*delta/tot)
for f in freq_residencies.index:
print "Freq {:10d}Hz : {:5.1f}%".format(f, 100*freq_residencies[f]/tot)
Explanation: Going further on events processing
What is the relative residency on the different OPPs?
We are not limited to the pre-defined functions. We can exploit the full power of pandas to process the DataFrames and extract any kind of information we want.
Use pandas APIs to filter and aggregate events
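For example, a one-liner can count how many frequency-change events were traced per CPU (a sketch reusing the cpu_frequency events parsed above):
# Count traced cpu_frequency events per CPU with a simple groupby
df = trace.data_frame.trace_event('cpu_frequency')
print df.groupby('cpu')['frequency'].count()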
End of explanation
# Plot residency time
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 1, figsize=(16, 5));
df.plot(kind='barh', ax=axes, title="Frequency residency", rot=45);
Explanation: Use matplotlib to generate all kinds of plots from the collected data
End of explanation
# Setup a target configuration
conf = {
# Target is localhost
"platform" : 'linux',
"board" : "juno",
# Login credentials
"host" : "192.168.0.1",
"username" : "root",
"password" : "",
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
# Comment the following line to force rt-app calibration on your target
"rtapp-calib" : {
"0": 355, "1": 138, "2": 138, "3": 355, "4": 354, "5": 354
},
# FTrace events and buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_overutilized",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_frequency",
"cpu_capacity",
],
"buffsize" : 10240
},
# Where results are collected
"results_dir" : "SchedTuneAnalysis",
# Devlib module required (or not required)
'modules' : [ "cpufreq", "cgroups" ],
#"exclude_modules" : [ "hwmon" ],
}
Explanation: <br><br><br><br>
Advanced DataFrame usage: filtering by columns/rows, merging tables, plotting data<br>
notebooks/tutorial/05_TrappyUsage.ipynb
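As a minimal taste of what that tutorial covers, here is a sketch of the same operations on the events parsed above (column filtering, row filtering, merging and plotting):
# Column and row filtering on one event table
freq = trace.data_frame.trace_event('cpu_frequency')[['cpu', 'frequency']]
freq_cpu0 = freq[freq.cpu == 0]
# Merge with a second event table and plot the result
cap = trace.data_frame.trace_event('cpu_capacity')[['cpu', 'capacity']]
merged = freq_cpu0.join(cap, how='outer', rsuffix='_cap').fillna(method='ffill')
merged.plot(y=['frequency', 'capacity'], figsize=(16, 4));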
<br><br><br><br>
Remote target connection and control
Using LISA APIs to control a remote device and run custom workloads
Configure the connection
End of explanation
# Support to access the remote target
from env import TestEnv
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
# the provided test configuration (my_test_conf)
te = TestEnv(conf)
target = te.target
print "DONE"
Explanation: Setup the connection
End of explanation
# Enable Energy-Aware scheduler
target.execute("echo ENERGY_AWARE > /sys/kernel/debug/sched_features");
target.execute("echo UTIL_EST > /sys/kernel/debug/sched_features");
# Check which sched_feature are enabled
sched_features = target.read_value("/sys/kernel/debug/sched_features");
print "sched_features:"
print sched_features
Explanation: Target control
Run custom commands
End of explanation
target.cpufreq.set_all_governors('sched');
# Check which governor is enabled on each CPU
enabled_governors = target.cpufreq.get_all_governors()
print enabled_governors
Explanation: Example CPUFreq configuration
End of explanation
schedtune = target.cgroups.controller('schedtune')
# Configure a 50% boostgroup
boostgroup = schedtune.cgroup('/boosted')
boostgroup.set(boost=25)
# Dump the configuraiton of each groups
cgroups = schedtune.list_all()
for cgname in cgroups:
cgroup = schedtune.cgroup(cgname)
attrs = cgroup.get()
boost = attrs['boost']
print '{}:{:<15} boost: {}'.format(schedtune.kind, cgroup.name, boost)
Explanation: Example of CGroups configuration
End of explanation
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Periodic, Ramp
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp = RTA(target, 'test', calibration=te.calibration())
# Ramp workload
ramp = Ramp(
start_pct=10,
end_pct=60,
delta_pct=25,
time_s=2,
period_ms=32
)
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind = 'profile',
# 2. define the "profile" of each task
params = {
# 3. Composed task
'task_ramp': ramp.get(),
},
#loadref='big',
loadref='LITTLE',
run_dir=target.working_directory
);
Explanation: Remote workloads execution
Generate RTApp configurations
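The same wlgen API covers other profiles too; for example a single-phase periodic task (a sketch - the duty_cycle_pct/duration_s/period_ms parameter names are my assumption here, check the wlgen documentation):
# A hypothetical 20% duty-cycle task, reusing the Periodic class imported above
light = Periodic(
    duty_cycle_pct=20,   # assumed parameter name
    duration_s=5,        # assumed parameter name
    period_ms=16         # assumed parameter name
)
# It would then be passed via params={'task_light': light.get()} exactly like the Ramp above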
End of explanation
def execute(te, wload, res_dir, cg='/'):
logging.info('# Setup FTrace')
te.ftrace.start()
if te.emeter:
logging.info('## Start energy sampling')
te.emeter.reset()
logging.info('### Start RTApp execution')
wload.run(out_dir=res_dir, cgroup=cg)
if te.emeter:
logging.info('## Read energy consumption: %s/energy.json', res_dir)
nrg_report = te.emeter.report(out_dir=res_dir)
else:
nrg_report = None
logging.info('# Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(res_dir, 'trace.dat')
logging.info('# Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('# Save platform description: %s/platform.json', res_dir)
plt, plt_file = te.platform_dump(res_dir)
logging.info('# Report collected data:')
logging.info(' %s', res_dir)
!tree {res_dir}
return nrg_report, plt, plt_file, trace_file
nrg_report, plt, plt_file, trace_file = execute(te, rtapp, te.res_dir, cg=boostgroup.name)
Explanation: Execution and tracing
End of explanation
stune_smoke_test = '../../tests/stune/smoke_test_ramp.config'
!cat {stune_smoke_test}
Explanation: Regression testing support
Writing and running regression tests using the LISA API
Define the configurations and workloads to test
End of explanation
stune_smoke_test = '../../tests/stune/smoke_test_ramp.py'
!cat {stune_smoke_test}
Explanation: Write Test Cases
End of explanation |
6,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Aggregation
Thanks, Monte!
Step1: Pandas
Step2: What is the row sum?
Step3: Column sum?
Step4: Spark
Step5: How do we skip the header? How about using find()? What is Boolean value for true with find()?
Step6: Row Sum
Cast to integer and sum!
Step7: Column Sum
This one's a bit trickier, and portends ill for large, complex data sets (like example 4)...
Let's enumerate the list comprising each RDD "line" such that each value is indexed by the corresponding column number.
Step8: Column sum with Spark.sql.dataframe | Python Code:
import numpy as np
data = np.arange(1000).reshape(100,10)
print data.shape
Explanation: Simple Aggregation
Thanks, Monte!
End of explanation
import pandas as pd
pand_tmp = pd.DataFrame(data,
columns=['x{0}'.format(i) for i in range(data.shape[1])])
pand_tmp.head()
Explanation: Pandas
End of explanation
pand_tmp.sum(axis=1)
Explanation: What is the row sum?
End of explanation
pand_tmp.sum(axis=0)
pand_tmp.to_csv('numbers.csv', index=False)
Explanation: Column sum?
End of explanation
lines = sc.textFile('numbers.csv', 18)
for l in lines.take(3):
print l
type(lines.take(1))
Explanation: Spark
End of explanation
lines = lines.filter(lambda x: x.find('x') != 0)
for l in lines.take(2):
print l
data = lines.map(lambda x: x.split(','))
data.take(3)
Explanation: How do we skip the header? How about using find()? What is the Boolean value for true with find()?
End of explanation
def row_sum(x):
int_x = map(lambda x: int(x), x)
return sum(int_x)
data_row_sum = data.map(row_sum)
print data_row_sum.collect()
print data_row_sum.count()
Explanation: Row Sum
Cast to integer and sum!
End of explanation
def col_key(x):
for i, value in enumerate(x):
yield (i, int(value))
tmp = data.flatMap(col_key)
tmp.take(12)
tmp.take(3)
tmp = tmp.groupByKey()
for i in tmp.take(2):
print i, type(i)
data_col_sum = tmp.map(lambda x: sum(x[1]))
for i in data_col_sum.take(2):
print i
print data_col_sum.collect()
print data_col_sum.count()
Explanation: Column Sum
This one's a bit trickier, and portends ill for large, complex data sets (like example 4)...
Let's enumerate the list comprising each RDD "line" such that each value is indexed by the corresponding column number.
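For larger data sets, one way to avoid materializing every group (a sketch reusing col_key and data from above) is to sum per key with reduceByKey instead of groupByKey:
# Sum each column's values as they are combined, instead of collecting full groups
data_col_sum_rbk = data.flatMap(col_key).reduceByKey(lambda a, b: a + b)
print data_col_sum_rbk.sortByKey().map(lambda kv: kv[1]).collect()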
End of explanation
from pyspark.sql.types import *
pyspark_df = sqlCtx.createDataFrame(pand_tmp)
pyspark_df.take(2)
for i in pyspark_df.columns:
print pyspark_df.groupBy().sum(i).collect()
Explanation: Column sum with Spark.sql.dataframe
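A single aggregation can also cover every numeric column at once, instead of looping (a sketch; the exact output formatting depends on the Spark version):
# Sum all numeric columns in one pass
print pyspark_df.groupBy().sum().collect()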
End of explanation |
6,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
14.1
Задані виміри економіки країни
Step1: Модель міжгалузевої залежності цін
Нехай $X$ - це матриця в якій $x_{i j}$ елемент позначає витрати продукції $i$-ї галузі на потреби $j$-ї.
y - вектор $y_i$ елемент якого це обсяг $i$-ї продукції, що витрачається у невиробничій сфері.
Тоді загальний випуск $i$-ї галузі (позначимо $x_i$), при загальній кількості галузей $n$, можна записати як
Step2: 14.2
Step3: Аналіз матриці прямих виробничих витрат $A$
Step4: Продуктивність
Теорема
Для продуктивності моделі Леонтьєва необхідно й достататньо, щоб фробеніусове число $\lambda_A$ матриці $A$ задовольняло нерівності $\lambda_A < 1$
Step5: Матриця повних витрат
Матриця повних витрат $B$ обчислюється як сума ряду
Step6: Вектор кінцевого випуску
З моделі Леонтьєва | Python Code:
X = np.array([
[1320, 1170],
[1060, 965]
])
y = np.array([
[1075],
[1185]
])
s = np.array([0.45, 0.2])
Explanation: 14.1
The measurements of a country's economy are given
End of explanation
x = (np.sum(X, axis=1).reshape(-1, 1) + y)
print(x)
A = X / x.T
print(A)
M = np.eye(A.shape[0]) - A.T
p = np.linalg.solve(M, s)
print("Ціни на промислову та сільськогосподарську продукцію: {}".format(p))
Explanation: The inter-industry price dependence model
Let $X$ be the matrix whose element $x_{i j}$ denotes the amount of industry $i$'s output spent on the needs of industry $j$.
y is the vector whose element $y_i$ is the volume of product $i$ that is consumed in the non-production sphere.
Then the total output of industry $i$ (denoted $x_i$), with $n$ industries in total, can be written as:
\begin{equation}
x_i = \sum_{j=1}^n x_{i j} + y_i
\label{eq:sum_prod}
\tag{1}
\end{equation}
The Leontief model assumes that costs are proportional to output volumes, i.e. it introduces linearly homogeneous production cost functions:
\begin{equation}
x_{i j} = a_{i j} x_j \Rightarrow a_{i j} = \frac{x_{i j}}{x_j}
\label{eq:leon_assum}
\tag{2}
\end{equation}
The inter-industry price dependence model is described by the equation:
\begin{equation}
p = A^T p + s \Rightarrow (E - A^T) p = s
\label{eq:pr_dep}
\tag{3}
\end{equation}
Here $p$ is the vector of product prices and $s$ is the vector of value added in each price.
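As a quick sanity check (a sketch reusing the arrays computed above), the solved prices should satisfy equation (3):
# Verify p = A^T p + s for the computed price vector
print("p = A^T p + s holds: {}".format(np.allclose(A.T.dot(p) + s, p)))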
End of explanation
A = np.array([
[0.7, 0.05, 0.05],
[0.05, 0.75, 0.05],
[0.6, 0.15, 0.05]
])
y = np.array([
[1900],
[1500],
[1100]
])
Explanation: 14.2
End of explanation
w = linalg.eig(A, left=True)
eigvalues = w[0]
pol_coef = np.poly(w[0])
max_eig_ind = np.argmax(w[0])
fro_number = w[0][max_eig_ind]
fro_l = w[1][:, max_eig_ind] * (-1 if w[1][0, max_eig_ind] < 0 else 1)
fro_r = w[2][:, max_eig_ind] * (-1 if w[2][0, max_eig_ind] < 0 else 1)
print("Характеристичний поліном: {}".format(pol_coef))
print("Власні числа: {}".format(", ".join(map(lambda x: str(x.real), eigvalues))))
print("Число Фробеніуса: {}".format(fro_number.real))
print("Лівий вектор Фробеніуса: ({})".format(", ".join(map(lambda x: str(x.real), fro_l))))
print("Правий вектор Фробеніуса: ({})".format(", ".join(map(lambda x: str(x.real), fro_r))))
Explanation: Analysis of the direct production cost matrix $A$
End of explanation
print("Матриця A - {}продуктивна".format("" if fro_number.real < 1 else "не "))
Explanation: Productivity
Theorem
For the Leontief model to be productive it is necessary and sufficient that the Frobenius number $\lambda_A$ of the matrix $A$ satisfies the inequality $\lambda_A < 1$
End of explanation
B = np.eye(A.shape[0])
B_pr = B.copy()
eps = 0.01
for i in range(100):
B_pr, B = B, B_pr
B = np.dot(B_pr, A) + np.eye(A.shape[0])
if np.max(np.fabs(B - B_pr)) < eps:
break
print("Матриця повних витрат")
print(B)
Explanation: The total requirements matrix
The total requirements matrix $B$ is computed as the sum of the series:
\begin{align}
B &= E + A + A^2 + ... + A^N \\
N &\rightarrow \infty
\end{align}
To study the convergence of this series and compute the matrix B, we build a sequence $B_i$ that converges to B:
\begin{align}
&B_0 = E \\
&B_{i + 1} = B_i A + E
\end{align}
The convergence criterion then takes the form:
$$\max( \bigl| B_i - B_{i + 1} \bigr| ) < \epsilon$$
End of explanation
x = np.linalg.solve(np.eye(A.shape[0]) - A, y)
x
Explanation: The final output vector
From the Leontief model:
$$x = A x + y \Rightarrow (E-A) x = y$$
Solving this system of linear equations gives the final output vector $x$
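A quick sanity check (a sketch using the arrays defined above) confirms that the computed output satisfies the Leontief balance:
# Verify x = A x + y for the computed output vector
print("x = A x + y holds: {}".format(np.allclose(A.dot(x) + y, x)))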
End of explanation |
6,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started with Accelerated Computing
In this self-paced, hands-on lab, we will briefly explore some methods for accelerating applications on a GPU.
Lab created by Mark Ebersole (Follow @CUDAHamster on Twitter)
The following timer counts down to a five minute warning before the lab instance shuts down. You should get a pop up at the five minute warning reminding you to save your work!
<iframe id="timer" src="timer/timer.html" width="100%" height="120px"></iframe>
Before we begin, let's verify WebSockets are working on your system. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see get some output returned below the grey cell. If not, please consult the Self-paced Lab Troubleshooting FAQ to debug the issue.
Step1: Let's execute the cell below to display information about the GPUs running on the server.
Step2: If you have never before taken an IPython Notebook based self-paced lab from NVIDIA, please watch this video. It will explain the infrastructure we are using for this lab, as well as give some tips on it's usage. If you've never taken a lab on this sytem before, its highly recommended you watch this short video first.<br><br>
<div align="center"><iframe width="640" height="390" src="http
Step3: Next we'll execute the compiled program and time how long it takes. Execute the below cell to do this.
NOTE
Step4: The real value should indicate the program took around 2.9 seconds to run. At this point, an output image has been generated and written to the file out_cpu.png. Let's use some Python code to display this image in your browser. After executing the cell below, you should see a line drawing of the original image shown above.
Step5: Now let's compile and run the GPU version of this program. The compile & run steps have been combined into a single cell which you can execute below.
Step6: By moving the computationally intensive portions of this program to the GPU, we were able to achieve a 5.3x speed-up (from 2.9s to 0.5s), even for this very simple application. This includes the time required to move the image data to GPU memory, process it, and then copy the result back to the CPU's memory in order to write the image to a file.
Use the below cell to confirm the same image was created by the GPU version of the functions.
Step7: You can compare the CPU and GPU versions of the application by executing the line below to show the differences. The GPU code will be on the right, and the CPU on the left. Changed lines are marked by a | and new lines are marked by a > on the GPU side.
Step8: You can see by the above sdiff only a few lines of code were added or modified in order to accelerage the functions on the GPU. This really shows the power of using libraries in your code - no need to reinvent the wheel!
Finally, if you wish to modify the code, simply click on the task1 folder on the left, and select either the lines_cpu.cpp or lines_gpu.cpp file. If you modify and save either file, you can reuse the corresponding cells above to compile & run the new code.
Note You are encouraged to finish the other tasks before coming back and experimenting with this code. This way you are less likely to run out of time before the lab ends.
<iframe id="task1" src="task1" width="100%" height="400px">
<p>Your browser does not support iframes.</p>
</iframe>
Compiler Directives
Now that we've seen how libraries can be used to help accelerate your code, let's move on to a more flexible approach; using compiler directives. Here we will provide hints to the compiler and let it accelerate the code for us. So while this is not quite as easy as using a library, it is more flexible and yet does not require you to modify the underlying source code.
Task #2
Open-specification OpenACC directives are a straightforward way to accelerate existing Fortran, C and C++ applications. With OpenACC directives, you provide hints via compiler directives (or 'pragmas') to tell the compiler where - and how - it should parallelize compute-intensive code for execution on an accelerator.
If you've done parallel programming using OpenMP, OpenACC is very similar
Step9: To run the task after you have successfully compiled, execute the next cell. You should see the GPU is about 3.7x faster than the 8-thread OpenMP verison of the code. Not bad for only adding three #pragma's.
Step10: The high-level flow recommended to take with OpenACC is as follows
Step11: Execute the following cell to run the CPU version of the step function. This should generate text output and a graph in about 15 seconds.
Step12: Now, let's accelerate the step function on the GPU. To do this, we're going to use a Python decorator. Using the @vectorize decorator, numba can compile the step function into a ufunc (universal function) that operates over NumPy arrays as fast as traditional ufuncs written in C!
@vectorize in numba works by running through all of the elements of the input arrays executing the scalar function on each set. This means that our step_gpu function needs to be a scalar function - taking scalar inputs and returning a scalar output. To accomplish this, the only thing we have to modify is to use math.exp which operates on scalars instead of np.exp which expects a NumPy array.
Since a compiler is trying to turn the step_gpu function into machine code, we need to provide it with some information about the data types being passed in. That's the first parameter you see being passed to @vectorize.
Finally, we are targeting the GPU with this decorator (the second parameter). And that's it! The compiler handles the work of generating the GPU code, performing any data movement required, and launching the work. Go ahead and execute the below cell to see what kind of speed up we get.
Step13: You should see about a 27% increase in speed.
In the interest of transperency, if you change the target to parallel instead of cuda, the compiler will target the multi-core CPU availalbe on this instance and you will get similar performance to what you just got on the GPU. The reason for this is we're only porting a very small amount of computation the GPU, and therefore not hiding the latency of transferring data around. If you decide to take the Python labs on nvidia.qwiklab.com, you will see how we can achieve much greater increases in performance of this algorithm by moving more computation to the GPU with both library calls and some CUDA code, and hiding the cost of trasnferring data.
CUDA
Programming for the GPU in a CUDA-enabled language is the most flexible of the three approaches. While CUDA was initially just a C compiler when it was first released, it has grown into the parallel computing platform for accessing the general purpose, massively parallel compute power of an NVIDIA GPU.
There is a growing list of languages that understand how to speak CUDA and target the GPU including but not limited to C, C++, Fortran, R, and Python. In this lab, you will write some CUDA code directly in Python. This code will be compiled using Continuum Analytics Numba compiler which contains CUDA Python support.
Task #4
This task does not require any modifications to get working and will be generating the Mandelbrot Set. It is designed to show you the speed-up you can get using CUDA Python to move computational intensive portions of code to the GPU.
Executing the below cell will run the same algorithm on first the GPU and then again on the CPU. Both of these examples are using code compiled from Python using the Numba compiler. The timing of the GPU includes all data transfers between the CPU memory and GPU memory in order to make a fair comparison. While it's not explicitly coded, the Numba compiler is able to recognize and handle the need for the gimage data to be tranferred to the GPU before create_fractal_gpu is called, and back when it's complete. The cuda.synchronize is there to ensure the timing information is accurate.
Feel free to change the numIters variable to decrease or increase the number of iterations performed. In addition you can modify the fractal grid points (starting values of -2.0, 1.0, -1.0, 1.0) to change the area of the fractal processed. As you increase the number of iterations, you should see the gap in performance between the GPU and CPU increasing as the amount of computation hides the data transfer latency.
You will notice that the GPU version adds [griddim, blockdim] before the parameter list. These values control how the parallel work is spread across the GPU and will be described in more detail in the next task. You should run the next cell twice, the first time may be slower due to the one-time compilation of the create_fractal_* functions
Step15: You should see around a 9x speed increase when moving from the GPU to the CPU when using the original parameters.
If you are interested in seeing the rest of the code used in the above example, please execute the next cell. This is not a requirement for the lab, but you may find it insightful after you perform the next task. In addition, at the end of this lab, you are presented with a section on downloading the code for offline viewing - but be careful you don't run out of time!
Step16: Task 5 - Hello World
For this task, you get to try your hand and writing some CUDA Python code. We are going to be using the following concepts
Step17: See the solution below
Once you have a solution generating the correct output, try increasing the number of blocks by a few and see if you understand the output you get. Be careful about making the number of blocks too big, as it may take a while to print out all those values! In addition, there is a limit on the number of overall threads, the number of blocks, and the number of threads per block you can request.
Learn More
If you are interested in learning more, you can use the following resources | Python Code:
print "The answer should be three: " + str(1+2)
Explanation: Getting Started with Accelerated Computing
In this self-paced, hands-on lab, we will briefly explore some methods for accelerating applications on a GPU.
Lab created by Mark Ebersole (Follow @CUDAHamster on Twitter)
The following timer counts down to a five minute warning before the lab instance shuts down. You should get a pop up at the five minute warning reminding you to save your work!
<iframe id="timer" src="timer/timer.html" width="100%" height="120px"></iframe>
Before we begin, let's verify WebSockets are working on your system. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see get some output returned below the grey cell. If not, please consult the Self-paced Lab Troubleshooting FAQ to debug the issue.
End of explanation
!nvidia-smi
Explanation: Let's execute the cell below to display information about the GPUs running on the server.
End of explanation
!g++ task1/lines_cpu.cpp -lopencv_core -lopencv_highgui -lopencv_imgproc -o lines_cpu && echo "Compiled Successfully"
Explanation: If you have never before taken an IPython Notebook based self-paced lab from NVIDIA, please watch this video. It will explain the infrastructure we are using for this lab, as well as give some tips on it's usage. If you've never taken a lab on this sytem before, its highly recommended you watch this short video first.<br><br>
<div align="center"><iframe width="640" height="390" src="http://www.youtube.com/embed/ZMrDaLSFqpY" frameborder="0" allowfullscreen></iframe></div>
Introduction to GPU Computing
You may not realize it, but GPUs (GPU is short for Graphics Processing Unit) are good for much more than displaying great graphics in video games. In fact, there is a good chance that your daily life is being affected by GPU-accelerated computing.
GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, mobile and enterprise applications. Pioneered by NVIDIA, GPUs now power energy-efficient datacenters in government labs, universities, enterprises, and small-and-medium businesses around the world.
How Applications Accelerate With GPUs
GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user's perspective, applications simply run significantly faster.
CPU Versus GPU
A simple way to understand the difference between a CPU and GPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing while a GPU consists of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.
GPUs have thousands of cores to process parallel workloads efficiently
There are hundreds of industry-leading applications already GPU-accelerated. Find out if the applications you use are GPU-accelerated by looking in NVIDIA's application catalog.
How to Accelerate Applications
If GPU-acceleration is not already available for your application, you may be interested in developing GPU-accelerated code yourself. There are three main methods methods to achieve GPU-acceleration in your code, and that is what the rest of this lab attempts to demonstrate. The methods are summarized below.
<img src="files/images/three_methods.png" />
Enough introduction, let's start the hands-on work!
Libraries
As with any type of computer programming, libraries give access to many different types of functions and algorithms that you do not have to directly implement in your software. Libraries are typically highly-optimized and are accessed through a set of Application Programming Interfaces (APIs). Making use of GPU-accelerated libraries is typically the quickest way to add acceleration to your application. In fact, there are a number of GPU-accelerated libraries that are API compatible with the CPU version. This means you simply change the library you are compiling against - no code changes neccesary!
There is an ever growing number of libraries available for GPU-acclerated computing, both from NVIDIA and 3rd party developers. They range from basic building block libraries, to incredibly complex and dense. You can take full advantage of their capabilities without having to write any of that GPU-accelerated code yourself.
One example of a library that contains GPU-accelerated functions is the open-source computer vision package called OpenCV. To quote the OpenCV site, "OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products."
Task #1
Your first task in this lab is to compile and run some simple OpenCV code to generate a line drawing of a given image. You'll then see how calling the GPU-accelerated versions of the OpenCV functions results in the same image, but generated in less time.
You are not required to modify any code in this task, but a text editor is present below if you wish to experiment with different values in the code. The source image we are going to work with looks like this:
<img src="files/task1/images/shield.jpg" width=500 />
Let's first run the CPU-only version of this program to see what the output should look like. To do this, execute the following cell block to compile the CPU version. You execute a cell in this lab by first selecting it with your mouse and then pressing either Ctrl+Enter (keeps focus on the cell), or Shift+Enter or clicking the play button in the toolbar (moves focus to next cell after execution). Try that now. You should see Compiled Successfully printed out if everything works.
End of explanation
%%bash
./lines_cpu
time ./lines_cpu
Explanation: Next we'll execute the compiled program and time how long it takes. Execute the below cell to do this.
NOTE: You may notice that the lines_cpu program is being executed twice below, but only timed once. This is because the first time this program is run on the system some time is spent loading the OpenCV libraries. By only timing the second run, we remove this load time.
End of explanation
from IPython.core.display import Image, display
cpu = Image('out_cpu.png', width=700)
display(cpu)
Explanation: The real value should indicate the program took around 2.9 seconds to run. At this point, an output image has been generated and written to the file out_cpu.png. Let's use some Python code to display this image in your browser. After executing the cell below, you should see a line drawing of the original image shown above.
End of explanation
%%bash
g++ task1/lines_gpu.cpp -lopencv_core -lopencv_highgui -lopencv_imgproc -lopencv_gpu -o lines_gpu
./lines_gpu
time ./lines_gpu
Explanation: Now let's compile and run the GPU version of this program. The compile & run steps have been combined into a single cell which you can execute below.
End of explanation
from IPython.core.display import Image, display
gpu = Image('out_gpu.png', width=800)
display(gpu)
Explanation: By moving the computationally intensive portions of this program to the GPU, we were able to achieve a 5.3x speed-up (from 2.9s to 0.5s), even for this very simple application. This includes the time required to move the image data to GPU memory, process it, and then copy the result back to the CPU's memory in order to write the image to a file.
Use the below cell to confirm the same image was created by the GPU version of the functions.
End of explanation
!sdiff task1/lines_cpu.cpp task1/lines_gpu.cpp
Explanation: You can compare the CPU and GPU versions of the application by executing the line below to show the differences. The GPU code will be on the right, and the CPU on the left. Changed lines are marked by a | and new lines are marked by a > on the GPU side.
End of explanation
!pgcc -o task2_out -acc -Minfo=accel task2/task2.c && echo "Compiled Successfully"
Explanation: You can see by the above sdiff only a few lines of code were added or modified in order to accelerage the functions on the GPU. This really shows the power of using libraries in your code - no need to reinvent the wheel!
Finally, if you wish to modify the code, simply click on the task1 folder on the left, and select either the lines_cpu.cpp or lines_gpu.cpp file. If you modify and save either file, you can reuse the corresponding cells above to compile & run the new code.
Note You are encouraged to finish the other tasks before coming back and experimenting with this code. This way you are less likely to run out of time before the lab ends.
<iframe id="task1" src="task1" width="100%" height="400px">
<p>Your browser does not support iframes.</p>
</iframe>
Compiler Directives
Now that we've seen how libraries can be used to help accelerate your code, let's move on to a more flexible approach; using compiler directives. Here we will provide hints to the compiler and let it accelerate the code for us. So while this is not quite as easy as using a library, it is more flexible and yet does not require you to modify the underlying source code.
Task #2
Open-specification OpenACC directives are a straightforward way to accelerate existing Fortran, C and C++ applications. With OpenACC directives, you provide hints via compiler directives (or 'pragmas') to tell the compiler where - and how - it should parallelize compute-intensive code for execution on an accelerator.
If you've done parallel programming using OpenMP, OpenACC is very similar: using directives, applications can be parallelized incrementally, with little or no change to the Fortran, C or C++ source. Debugging and code maintenance are easier. OpenACC directives are designed for portability across operating systems, host CPUs, and accelerators. You can use OpenACC directives with GPU accelerated libraries, explicit parallel programming languages (e.g., CUDA), MPI, and OpenMP, all in the same program.
Watch the following short video introduction to OpenACC:
<div align="center"><iframe width="640" height="390" style="margin: 0 auto;" src="http://www.youtube.com/embed/c9WYCFEt_Uo" frameborder="0" allowfullscreen></iframe></div>
To demonstrate the power of OpenACC, we're going to look at a very simple Matrix Transpose code. This task just involves compiling and running the source to show the differences in performance between an 8-thread OpenMP version of the code running on the CPU, and the OpenACC version running on the GPU.
The source code found below is broken up into these functions:
referenceTranspose - the naive transpose function executed in parallel on the CPU using OpenMP
openACCTranspose - the naive transpose function executed on the massively parallel GPU using OpenACC
time_kernel - a helper function used to measure the bandwidth of running the referenceTranpose function
time_kernel_acc - a helper function used to measure the bandwidth of running the openACCTranpose function
While it's not important to understand all this code, there are a few important take aways.
The OpenACC version of the transpose is compared against the OpenMP version to check for correctness
In order to get an accurate bandwidth measurement, each version of the transpose is run 500 times and the average is taken from those runs.
There is no GPU-specific code being used in this example. All acceleration is implemented by the OpenACC PGI compiler for you.
Before executing the code, you should look for the following OpenACC directives and see if you can understand their purpose in the code:
#pragma acc parallel loop collapse(2) present(in,out) (line 28) - The parallel OpenACC directive tells the compiler that it should offload the code in the structured code block following the #pragma (in our case the nested for-loops) following our further instructions and execute it on the GPU. The loop tells the compiler to parallelize the next loop. collapse(2) says to apply this directive to the next two loops. And finally the present(in,out) tells the compiler we've already copied the in and out data to the device.
#pragma acc data copyin(in[0:rows*cols]) copyout(out[0:rows*cols]) (line 94) - The data directive is used to tell the compiler how and when to move data between the CPU (host) memory and the GPU memory. Since we are executing each transpose function 500 times, it doesn't make sense to copy the input and output data across the PCI-Express bus for each iteration as this would severely skew the timing results. This directive says "At the beginning of this pragma, copy the input data to the device. At the end of the structured code block, copy the output data from the device to the host memory."
#pragma acc wait (line 102) - The wait directive tells the compiler that it should wait at this point for all the work on the device to complete. Since the CPU and GPU are two separate processors, they are able to execute code independently. If this wait was not there, the timing code would be incorrect as the CPU would not wait for the GPU to finish its work before executing the next line of code.
To look at the code, click on the task2.c filename below. If you decide to make changes to the code, make sure to click the Save button in the text editor box (not the tool bar at the top of the browser tab).
<iframe id="task2" src="task2" width="100%" height="550px">
<p>Your browser does not support iframes.</p>
</iframe>
To compile the task2.c file, simply execute the below cell. Information about the accelerated portions of the code will be printed out, and you can learn more about what these mean by taking the other OpenACC labs available on nvidia.qwiklab.com or the more immersive OpenACC course.
End of explanation
%%bash
export OMP_NUM_THREADS=8
./task2_out
Explanation: To run the task after you have successfully compiled, execute the next cell. You should see the GPU is about 3.7x faster than the 8-thread OpenMP verison of the code. Not bad for only adding three #pragma's.
End of explanation
%run -i monte.py
Explanation: The high-level flow recommended to take with OpenACC is as follows:
Identify the computationally intensive portions of your code - these are usually good targets for OpenACC. Use any popular CPU profiling tool, the nvprof tool provided in the CUDA toolkit from NVIDIA, and see which functions take up the most amount of time.
Accelerate the code on the GPU using kernels or the parallel OpenACC directives. It's very important to verify accuracy of the results at this stage. Don't focus on performance yet.
Once the code is correctly accelerated, optimize data movement with the various data directives. This is where you will usually begin to see increases in performance. Often people get discouraged when they don't see any speedups, or even slowdowns, after step #2. It's important to continue to step #3.
Perform any additional optimizations if needed and repeat the steps.
Task #3
While compiler directives generally are associated with C, C++ and Fortran, let's see how we can use a similar approach in Python with the @vectorize decorator and the Continuum Analytics Numba compiler.
First let's execute a CPU-only version of a Monte Carlo Options Pricer simulation code. It's not important to understand exactly what this code is doing, only that we get a similar stock price between the two versions. We also want to look at the time elapsed value in the text output.
Execute the next cell to load the common code into our namespace. You can download this code at the end of the lab if you wish to look at it in more detail.
End of explanation
%matplotlib inline
def step_cpu(prices, dt, c0, c1, noises):
return prices * np.exp(c0 * dt + c1 * noises)
driver(step_cpu, do_plot=True)
Explanation: Execute the following cell to run the CPU version of the step function. This should generate text output and a graph in about 15 seconds.
End of explanation
from numba import vectorize
import math # needed for the math.exp function
@vectorize(['float64(float64, float64, float64, float64, float64)'], target='cuda')
def step_gpu(prices, dt, c0, c1, noises):
return prices * math.exp(c0 * dt + c1 * noises)
driver(step_gpu, do_plot=True)
Explanation: Now, let's accelerate the step function on the GPU. To do this, we're going to use a Python decorator. Using the @vectorize decorator, numba can compile the step function into a ufunc (universal function) that operates over NumPy arrays as fast as traditional ufuncs written in C!
@vectorize in numba works by running through all of the elements of the input arrays executing the scalar function on each set. This means that our step_gpu function needs to be a scalar function - taking scalar inputs and returning a scalar output. To accomplish this, the only thing we have to modify is to use math.exp which operates on scalars instead of np.exp which expects a NumPy array.
Since a compiler is trying to turn the step_gpu function into machine code, we need to provide it with some information about the data types being passed in. That's the first parameter you see being passed to @vectorize.
Finally, we are targeting the GPU with this decorator (the second parameter). And that's it! The compiler handles the work of generating the GPU code, performing any data movement required, and launching the work. Go ahead and execute the below cell to see what kind of speed up we get.
End of explanation
from mymandel import *
numIters = 1000
# Run the GPU Version first
gimage = np.zeros((1024, 1536), dtype = np.uint8)
blockdim = (32, 8)
griddim = (32, 16)
with mytimer('Mandelbrot created on GPU'):
create_fractal_gpu[griddim, blockdim](-2.0, 1.0, -1.0, 1.0, gimage, numIters)
cuda.synchronize
imshow(gimage)
show()
# Run the CPU Version last
image = np.zeros_like(gimage)
with mytimer('Mandelbrot created on CPU'):
create_fractal_cpu(-2.0, 1.0, -1.0, 1.0, image, numIters)
imshow(image)
show()
Explanation: You should see about a 27% increase in speed.
In the interest of transparency, if you change the target to parallel instead of cuda, the compiler will target the multi-core CPU available on this instance and you will get similar performance to what you just got on the GPU. The reason for this is we're only porting a very small amount of computation to the GPU, and therefore not hiding the latency of transferring data around. If you decide to take the Python labs on nvidia.qwiklab.com, you will see how we can achieve much greater increases in performance of this algorithm by moving more computation to the GPU with both library calls and some CUDA code, and hiding the cost of transferring data.
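A minimal sketch of that swap (step_parallel is a name introduced here; only the target changes):
# Same ufunc compiled for the multi-core CPU instead of the GPU
@vectorize(['float64(float64, float64, float64, float64, float64)'], target='parallel')
def step_parallel(prices, dt, c0, c1, noises):
    return prices * math.exp(c0 * dt + c1 * noises)

driver(step_parallel, do_plot=False)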
CUDA
Programming for the GPU in a CUDA-enabled language is the most flexible of the three approaches. While CUDA was initially just a C compiler when it was first released, it has grown into the parallel computing platform for accessing the general purpose, massively parallel compute power of an NVIDIA GPU.
There is a growing list of languages that understand how to speak CUDA and target the GPU including but not limited to C, C++, Fortran, R, and Python. In this lab, you will write some CUDA code directly in Python. This code will be compiled using Continuum Analytics Numba compiler which contains CUDA Python support.
Task #4
This task does not require any modifications to get working and will be generating the Mandelbrot Set. It is designed to show you the speed-up you can get using CUDA Python to move computational intensive portions of code to the GPU.
Executing the below cell will run the same algorithm on first the GPU and then again on the CPU. Both of these examples are using code compiled from Python using the Numba compiler. The timing of the GPU includes all data transfers between the CPU memory and GPU memory in order to make a fair comparison. While it's not explicitly coded, the Numba compiler is able to recognize and handle the need for the gimage data to be transferred to the GPU before create_fractal_gpu is called, and back when it's complete. The cuda.synchronize is there to ensure the timing information is accurate.
Feel free to change the numIters variable to decrease or increase the number of iterations performed. In addition you can modify the fractal grid points (starting values of -2.0, 1.0, -1.0, 1.0) to change the area of the fractal processed. As you increase the number of iterations, you should see the gap in performance between the GPU and CPU increasing as the amount of computation hides the data transfer latency.
You will notice that the GPU version adds [griddim, blockdim] before the parameter list. These values control how the parallel work is spread across the GPU and will be described in more detail in the next task. You should run the next cell twice, the first time may be slower due to the one-time compilation of the create_fractal_* functions
End of explanation
# %load mymandel.py
from contextlib import contextmanager
import time
import numpy as np
from pylab import imshow, show
from numba import *
@autojit
def mandel(x, y, max_iters):
Given the real and imaginary parts of a complex number,
determine if it is a candidate for membership in the Mandelbrot
set given a fixed number of iterations.
c = complex(x, y)
z = 0.0j
for i in xrange(max_iters):
z = z*z + c
if (z.real*z.real + z.imag*z.imag) >= 4:
return i
return max_iters
# The compiled CPU version of the fractal code
@autojit
def create_fractal_cpu(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in xrange(width):
real = min_x + x * pixel_size_x
for y in xrange(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
# create a GPU accelerated version of the mandel function
# so it can be called from other device functions like mandel_kernel
mandel_gpu = cuda.jit(restype=uint32, argtypes=[f8, f8, uint32], device=True)(mandel)
# The compiled GPU version of the fractal code
@cuda.jit(argtypes=[f8, f8, f8, f8, uint8[:,:], uint32])
def create_fractal_gpu(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
startX, startY = cuda.grid(2) # startX = cuda.threadIdx.x + cuda.blockDim.x * cuda.blockIdx.x
# startY = cuda.threadIdx.y + cuda.blockDim.y * cuda.blockIdx.y
gridX = cuda.gridDim.x * cuda.blockDim.x; # Number of threads, or size of image in X direction
gridY = cuda.gridDim.y * cuda.blockDim.y; # Number of threads, or size of image in Y direction
for x in xrange(startX, width, gridX):
real = min_x + x * pixel_size_x
for y in xrange(startY, height, gridY):
imag = min_y + y * pixel_size_y
image[y, x] = mandel_gpu(real, imag, iters)
# Used for timing sections of code
@contextmanager
def mytimer(name):
startTime = time.time()
yield
elapsedTime = time.time() - startTime
print('{} in {} ms'.format(name, int(elapsedTime * 1000)))
Explanation: You should see around a 9x speed increase when moving from the CPU to the GPU when using the original parameters.
If you are interested in seeing the rest of the code used in the above example, please execute the next cell. This is not a requirement for the lab, but you may find it insightful after you perform the next task. In addition, at the end of this lab, you are presented with a section on downloading the code for offline viewing - but be careful you don't run out of time!
End of explanation
from numba import *
import numpy as np
@cuda.jit
def hello(data):
data[cuda.blockIdx.x , cuda.threadIdx.x ] = cuda.threadIdx.x
numBlocks = 1
threadsPerBlock = 5
data = np.ones((numBlocks, threadsPerBlock), dtype=np.uint8)
hello[numBlocks,threadsPerBlock](data)
print data
Explanation: Task 5 - Hello World
For this task, you get to try your hand at writing some CUDA Python code. We are going to be using the following concepts:
<code style="color:green">@cuda.autojit</code> - this decorator is used to tell the CUDA compiler that the function is to be compiled for the GPU. With autojit, the compiler will try and determine the type information of the variables being passed in. You can create your own signatures manually by using the jit decorator.
<code style="color:green">cuda.blockIdx.x</code> - this is a read-only variable that is defined for you. It is used within a GPU kernel to determine the ID of the block which is currently executing code. Since there will be many blocks running in parallel, we need this ID to help determine which chunk of data that particular block will work on.
<code style="color:green">cuda.threadIdx.x</code> - this is a read-only variable that is defined for you. It is used within a GPU kernel to determine the ID of the thread which is currently executing code in the active block.
<code style="color:green">myKernel[number_of_blocks, threads_per_block](...)</code> - this is the syntax used to launch a kernel on the GPU. Inside the list (the square brackets [...]), the first number is the total number of blocks we want to run on the GPU, and the second is the number of threads there are per block. It's possible, and in fact recommended, for one to schedule more blocks than the GPU can actively run in parallel. The system will just continue executing blocks until they have all completed. The following video addresses grids, blocks, and threads in more detail.
<div align="center"><iframe width="640" height="390" src="http://www.youtube.com/embed/KM-zbhyz9f4" frameborder="0" allowfullscreen></iframe></div>
Let's explore the above concepts by doing a simple "Hello World" example.
Most of the code in this example has already been written for you. Your task is to modify the single line in the hello function such that the data printed out at the end looks like:
[[0 0 0 0 0]]
What's happening is that all the threads in block 0 are writing the block ID into their respective place in the array. Remember that this function is being run in parallel by the threads in block 0, each with their own unique thread ID. Since we're launching a single block with 5 threads, the following happens in parallel:
data[0,0] = 0
data[0,1] = 0
data[0,2] = 0
data[0,3] = 0
data[0,4] = 0
If you get stuck, click on the link below the code to see the solution.
End of explanation
%%bash
rm -f gpu_computing.zip
zip -r gpu_computing task*/* *.py
Explanation: See the solution below
Once you have a solution generating the correct output, try increasing the number of blocks by a few and see if you understand the output you get. Be careful about making the number of blocks too big, as it may take a while to print out all those values! In addition, there is a limit on the number of overall threads, the number of blocks, and the number of threads per block you can request.
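One possible solution is sketched below (hello_solution is a name introduced here; the only change is writing the block ID instead of the thread ID):
@cuda.jit
def hello_solution(data):
    data[cuda.blockIdx.x, cuda.threadIdx.x] = cuda.blockIdx.x

# Block 0's five threads all write 0, so this prints [[0 0 0 0 0]]
hello_solution[numBlocks, threadsPerBlock](data)
print data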
Learn More
If you are interested in learning more, you can use the following resources:
More labs are available at nvidia.qwiklab.com
CUDA/GPU Registered Developers with NVIDIA will periodically receive free Credits for use on nvidia.qwiklab.com. Sign up today!
Learn more at the CUDA Developer Zone.
If you have an NVIDIA GPU in your system, you can download and install the CUDA toolkit which comes packed with lots of sample code to look at. Otherwise you can go to docs.nvidia.com/cuda and explore the samples there.
Take the fantastic online and free Udacity Intro to Parallel Programming course which uses CUDA C.
Search or ask questions on Stackoverflow using the cuda tag
NVIDIA provided hands-on training at major conferences such as SuperComputer and its own GPU Technology Conference.
<a id="post-lab"></a>
Post-Lab
Finally, don't forget to save your work from this lab before time runs out and the instance shuts down!!
Save this IPython Notebook by going to File -> Download as -> IPython (.ipynb) at the top of this window. Please note that the in-browser text editors, and likely the executable cells, will not work if you run this IPython Notebook locally.
You can execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
End of explanation |
6,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fractional Anisotropy Maps - Steps and Results
On Thursday, we showed Greg the output of the first step of the CAPTURE pipeline - namely, after modifying the CAPTURE MATLAB pipeline to accept TIFF files (originally it only took TIFs), we were able to generate two structure tensors from a TIFF stack of Aut1367 originally for use in Ilastik analysis. The main steps for the generation of the structure tensors are explained in a separate viewer (we showed Greg this) on Thursday
Step2: Subsampling
Step3: Results | Python Code:
from dipy.reconst.dti import fractional_anisotropy, color_fa
from argparse import ArgumentParser
from scipy import ndimage
import os
import re
import numpy as np
import nibabel as nb
import sys
import matplotlib
matplotlib.use('Agg') # very important above pyplot import
import matplotlib.pyplot as plt
import vtk
from dipy.reconst.dti import from_lower_triangular
img = nb.load('../../../../../Desktop/result/dogsig1_gausig2.3/v100_ch0_tensorfsl_dogsig1_gausig2.3.nii')
data = img.get_data()
# Output is the structure tensor generated from a lower triangular structure tensor (which data is)
output = from_lower_triangular(data)
Explanation: Fractional Anisotropy Maps - Steps and Results
On Thursday, we showed Greg the output of the first step of the CAPTURE pipeline - namely, after modifying the CAPTURE MATLAB pipeline to accept TIFF files (originally it only took TIFs), we were able to generate two structure tensors from a TIFF stack of Aut1367 originally for use in Ilastik analysis. The main steps for the generation of the structure tensors are explained in a separate viewer (we showed Greg this) on Thursday: http://nbviewer.jupyter.org/github/NeuroDataDesign/seelviz/blob/gh-pages/Tony/ipynb/Generating%20Structure%20Tensors.ipynb
There were two separate structure tensors generated by the CAPTURE pipeline - one was "DTK" (which could be used later in the Diffusion ToolKit process) and the other was "FSL" (an alternate file format). We realized at office hours that the structure tensors (which were 5000 x 5000 x 5 x 6) each were the "lower triangular" values from the structures.
From there, we first tried to use the DTK file directly inside Diffusion ToolKit, but were informed that the "file appeared to be corrupted/missing data". Only the FSL format seemed to have properly saved all the image data (likely because it was run first during the MATLAB script, and because generating the structure tensors froze Tony's computer, so the DTK file format was corrupted. Thus, all analysis was done on the FSL file.
From there, we followed the DiPy tutorial/ndmg code that was suitable for generating FA maps (as recommended by Greg).
End of explanation
output_ds = output[4250:4300, 250:300, :, :, :]
print output.shape
print output_ds.shape
FA = fractional_anisotropy(output_ds)
FA = np.clip(FA, 0, 1)
FA[np.isnan(FA)] = 0
print FA.shape
from dipy.reconst.dti import decompose_tensor
evalues, evectors = decompose_tensor(output_ds)
print evectors[..., 0, 0].shape
print evectors.shape[-2:]
print FA[:, :, :, 0].shape
## To satisfy requirements for RGB
RGB = color_fa(FA[:, :, :, 0], evectors)
nb.save(nb.Nifti1Image(np.array(255 * RGB, 'uint8'), img.get_affine()), 'tensor_rgb_upper.nii.gz')
print('Computing tensor ellipsoids in a random part')
from dipy.data import get_sphere
sphere = get_sphere('symmetric724')
from dipy.viz import fvtk
ren = fvtk.ren()
evals = evalues[:, :, :]
evecs = evectors[:, :, :]
print "printing evals:"
print evals
print "printing evecs"
print evecs
cfa = RGB[:, :, :]
cfa = cfa / cfa.max()
print "printing cfa"
print cfa
fvtk.add(ren, fvtk.tensor(evals, evecs, cfa, sphere))
from IPython.display import Image
def vtk_show(renderer, width=400, height=300):
"""Takes vtkRenderer instance and returns an IPython Image with the rendering."""
renderWindow = vtk.vtkRenderWindow()
renderWindow.SetOffScreenRendering(1)
renderWindow.AddRenderer(renderer)
renderWindow.SetSize(width, height)
renderWindow.Render()
windowToImageFilter = vtk.vtkWindowToImageFilter()
windowToImageFilter.SetInput(renderWindow)
windowToImageFilter.Update()
writer = vtk.vtkPNGWriter()
writer.SetWriteToMemory(1)
writer.SetInputConnection(windowToImageFilter.GetOutputPort())
writer.Write()
data = str(buffer(writer.GetResult()))
return Image(data)
Explanation: Subsampling:
We added this step because the calculation of RGB/eigenvalues/eigenvectors took much too long on the full file. Even still, with small sizes like 25x25, the last VTK rendering step took significant amounts of time. In the pipeline we'll have to think of a more optimal way to compute these, and we're guessing we're missing something (since why is this taking so long)?
End of explanation
# x = 4250:4300, y = 250:300, z = : on Tony's computer (doesn't show anything)
# Thus, all results were displayed after running on Albert's computer
vtk_show(ren)
Explanation: Results:
End of explanation |
6,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional Neural Network in TensorFlow
In this notebook, we convert our LeNet-5-inspired, MNIST-classifying, deep convolutional network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's style.
Load dependencies
Step1: Load data
Step2: Set neural network hyperparameters
Step3: Set parameters for each layer
Step4: Define placeholder Tensors for inputs and labels
Step5: Define types of layers
Step6: Define dictionaries for storing weights and biases for each layer -- and initialize
Step7: Design neural network architecture
Step8: Build model
Step9: Define model's loss and its optimizer
Step10: Define evaluation metrics
Step11: Create op for variable initialization
Step12: Train the network in a session (identical to intermediate_net_in_tensorflow.ipynb except addition of display_progress) | Python Code:
import numpy as np
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
Explanation: Deep Convolutional Neural Network in TensorFlow
In this notebook, we convert our LeNet-5-inspired, MNIST-classifying, deep convolutional network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's style.
Load dependencies
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Load data
End of explanation
epochs = 1
batch_size = 128
display_progress = 10 # after this many batches, output progress to screen
wt_init = tf.contrib.layers.xavier_initializer() # weight initializer
Explanation: Set neural network hyperparameters
End of explanation
# input layer:
n_input = 784
# first convolutional layer:
n_conv_1 = 32
k_conv_1 = 3
# second convolutional layer:
n_conv_2 = 64
k_conv_2 = 3
# max pooling layer:
pool_size = 2
mp_layer_dropout = 0.25
# dense layer:
n_dense = 128
dense_layer_dropout = 0.5
# output layer:
n_classes = 10
Explanation: Set parameters for each layer
End of explanation
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
Explanation: Define placeholder Tensors for inputs and labels
End of explanation
# dense layer with ReLU activation:
def dense(x, W, b):
z = tf.add(tf.matmul(x, W), b)
a = tf.nn.relu(z)
return a
# convolutional layer with ReLU activation:
def conv2d(x, W, b, stride_length=1):
xW = tf.nn.conv2d(x, W, strides=[1, stride_length, stride_length, 1], padding='SAME')
z = tf.nn.bias_add(xW, b)
a = tf.nn.relu(z)
return a
# max-pooling layer:
def maxpooling2d(x, p_size):
return tf.nn.max_pool(x,
ksize=[1, p_size, p_size, 1],
strides=[1, p_size, p_size, 1],
padding='SAME'
)
Explanation: Define types of layers
End of explanation
bias_dict = {
'b_c1': tf.Variable(tf.zeros([n_conv_1])),
'b_c2': tf.Variable(tf.zeros([n_conv_2])),
'b_d1': tf.Variable(tf.zeros([n_dense])),
'b_out': tf.Variable(tf.zeros([n_classes]))
}
# calculate number of inputs to dense layer:
full_square_length = np.sqrt(n_input)
pooled_square_length = int(full_square_length / pool_size)
dense_inputs = pooled_square_length**2 * n_conv_2
weight_dict = {
'W_c1': tf.get_variable('W_c1',
[k_conv_1, k_conv_1, 1, n_conv_1], initializer=wt_init),
'W_c2': tf.get_variable('W_c2',
[k_conv_2, k_conv_2, n_conv_1, n_conv_2], initializer=wt_init),
'W_d1': tf.get_variable('W_d1',
[dense_inputs, n_dense], initializer=wt_init),
'W_out': tf.get_variable('W_out',
[n_dense, n_classes], initializer=wt_init)
}
Explanation: Define dictionaries for storing weights and biases for each layer -- and initialize
End of explanation
def network(x, weights, biases, n_in, mp_psize, mp_dropout, dense_dropout):
# reshape linear MNIST pixel input into square image:
square_dimensions = int(np.sqrt(n_in))
square_x = tf.reshape(x, shape=[-1, square_dimensions, square_dimensions, 1])
# convolutional and max-pooling layers:
conv_1 = conv2d(square_x, weights['W_c1'], biases['b_c1'])
conv_2 = conv2d(conv_1, weights['W_c2'], biases['b_c2'])
pool_1 = maxpooling2d(conv_2, mp_psize)
pool_1 = tf.nn.dropout(pool_1, 1-mp_dropout)
# dense layer:
flat = tf.reshape(pool_1, [-1, weight_dict['W_d1'].get_shape().as_list()[0]])
dense_1 = dense(flat, weights['W_d1'], biases['b_d1'])
dense_1 = tf.nn.dropout(dense_1, 1-dense_dropout)
# output layer:
out_layer_z = tf.add(tf.matmul(dense_1, weights['W_out']), biases['b_out'])
return out_layer_z
Explanation: Design neural network architecture
End of explanation
predictions = network(x, weight_dict, bias_dict, n_input,
pool_size, mp_layer_dropout, dense_layer_dropout)
Explanation: Build model
End of explanation
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Define model's loss and its optimizer
End of explanation
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy_pct = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) * 100
Explanation: Define evaluation metrics
End of explanation
initializer_op = tf.global_variables_initializer()
Explanation: Create op for variable initialization
End of explanation
with tf.Session() as session:
session.run(initializer_op)
print("Training for", epochs, "epochs.")
# loop over epochs:
for epoch in range(epochs):
avg_cost = 0.0 # track cost to monitor performance during training
avg_accuracy_pct = 0.0
# loop over all batches of the epoch:
n_batches = int(mnist.train.num_examples / batch_size)
for i in range(n_batches):
# to reassure you something's happening!
if i % display_progress == 0:
print("Step ", i+1, " of ", n_batches, " in epoch ", epoch+1, ".", sep='')
batch_x, batch_y = mnist.train.next_batch(batch_size)
# feed batch data to run optimization and fetching cost and accuracy:
_, batch_cost, batch_acc = session.run([optimizer, cost, accuracy_pct],
feed_dict={x: batch_x, y: batch_y})
# accumulate mean loss and accuracy over epoch:
avg_cost += batch_cost / n_batches
avg_accuracy_pct += batch_acc / n_batches
# output logs at end of each epoch of training:
print("Epoch ", '%03d' % (epoch+1),
": cost = ", '{:.3f}'.format(avg_cost),
", accuracy = ", '{:.2f}'.format(avg_accuracy_pct), "%",
sep='')
print("Training Complete. Testing Model.\n")
test_cost = cost.eval({x: mnist.test.images, y: mnist.test.labels})
test_accuracy_pct = accuracy_pct.eval({x: mnist.test.images, y: mnist.test.labels})
print("Test Cost:", '{:.3f}'.format(test_cost))
print("Test Accuracy: ", '{:.2f}'.format(test_accuracy_pct), "%", sep='')
Explanation: Train the network in a session (identical to intermediate_net_in_tensorflow.ipynb except addition of display_progress)
End of explanation |
6,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing IMDB Data in Keras
Step1: 1. Loading the data
This dataset comes preloaded with Keras, so one simple command will get us training and testing data. There is a parameter for how many words we want to look at. We've set it at 1000, but feel free to experiment.
Step2: 2. Examining the data
Notice that the data has been already pre-processed, where all the words have numbers, and the reviews come in as a vector with the words that the review contains. For example, if the word 'the' is the first one in our dictionary, and a review contains the word 'the', then there is a 1 in the corresponding vector.
The output comes as a vector of 1's and 0's, where 1 is a positive sentiment for the review, and 0 is negative.
Step3: 3. One-hot encoding the output
Here, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.
Step4: And we'll also one-hot encode the output.
Step5: 4. Building the model architecture
Build a model here using sequential. Feel free to experiment with different layers and sizes! Also, experiment adding dropout to reduce overfitting.
Step6: 5. Training the model
Run the model here. Experiment with different batch_size, and number of epochs!
Step7: 6. Evaluating the model
This will give you the accuracy of the model, as evaluated on the testing set. Can you get something over 85%? | Python Code:
# Imports
import numpy as np
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
Explanation: Analyzing IMDB Data in Keras
End of explanation
# Loading the data (it's preloaded in Keras)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)
print(x_train.shape)
print(x_test.shape)
Explanation: 1. Loading the data
This dataset comes preloaded with Keras, so one simple command will get us training and testing data. There is a parameter for how many words we want to look at. We've set it at 1000, but feel free to experiment.
End of explanation
print(x_train[0])
print(y_train[0])
Explanation: 2. Examining the data
Notice that the data has been already pre-processed, where all the words have numbers, and the reviews come in as a vector with the words that the review contains. For example, if the word 'the' is the first one in our dictionary, and a review contains the word 'the', then there is a 1 in the corresponding vector.
The output comes as a vector of 1's and 0's, where 1 is a positive sentiment for the review, and 0 is negative.
End of explanation
# One-hot encoding the output into vector mode, each of length 1000
tokenizer = Tokenizer(num_words=1000)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print(x_train[0])
Explanation: 3. One-hot encoding the output
Here, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.
End of explanation
# One-hot encoding the output
num_classes = 2
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
print(y_test.shape)
Explanation: And we'll also one-hot encode the output.
End of explanation
# TODO: Build the model architecture
# TODO: Compile the model using a loss function and an optimizer.
Explanation: 4. Building the model architecture
Build a model here using sequential. Feel free to experiment with different layers and sizes! Also, experiment adding dropout to reduce overfitting.
End of explanation
# TODO: Run the model. Feel free to experiment with different batch sizes and number of epochs.
Explanation: 5. Training the model
Run the model here. Experiment with different batch_size, and number of epochs!
End of explanation
score = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: ", score[1])
Explanation: 6. Evaluating the model
This will give you the accuracy of the model, as evaluated on the testing set. Can you get something over 85%?
End of explanation |
6,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantum SVM kernel algorithm
Step1: Here we choose the Wine dataset which has 3 classes.
Step2: Now we setup an Aqua configuration dictionary to use the quantum QSVM.Kernel algorithm and add a multiclass extension to classify the Wine data set, since it has 3 classes.
Although the AllPairs extension is used here in the example the following multiclass extensions would also work | Python Code:
from datasets import *
from qiskit_aqua.utils import split_dataset_to_data_and_labels
from qiskit_aqua.input import get_input_instance
from qiskit_aqua import run_algorithm
import numpy as np
Explanation: Quantum SVM kernel algorithm: multiclass classifier extension
A multiclass extension works in conjunction with an underlying binary (two class) classifier to provide multiclass classification.
Currently three different multiclass extensions are supported:
OneAgainstRest
AllPairs
ErrorCorrectingCode
These use different techniques to group the data with binary classification to achieve the final multiclass classification.
End of explanation
n = 2 # dimension of each data point
sample_Total, training_input, test_input, class_labels = Wine(training_size=40,
test_size=10, n=n, PLOT_DATA=True)
temp = [test_input[k] for k in test_input]
total_array = np.concatenate(temp)
Explanation: Here we choose the Wine dataset which has 3 classes.
End of explanation
aqua_dict = {
'problem': {'name': 'svm_classification', 'random_seed': 10598},
'algorithm': {
'name': 'QSVM.Kernel'
},
'feature_map': {'name': 'SecondOrderExpansion', 'depth': 2, 'entangler_map': {0: [1]}},
'multiclass_extension': {'name': 'AllPairs'},
'backend': {'name': 'qasm_simulator', 'shots': 1024}
}
algo_input = get_input_instance('SVMInput')
algo_input.training_dataset = training_input
algo_input.test_dataset = test_input
algo_input.datapoints = total_array
result = run_algorithm(aqua_dict, algo_input)
for k,v in result.items():
print("'{}' : {}".format(k, v))
Explanation: Now we setup an Aqua configuration dictionary to use the quantum QSVM.Kernel algorithm and add a multiclass extension to classify the Wine data set, since it has 3 classes.
Although the AllPairs extension is used here in the example the following multiclass extensions would also work:
'multiclass_extension': {'name': 'OneAgainstRest'}
'multiclass_extension': {'name': 'ErrorCorrectingCode', 'code_size': 5}
End of explanation |
6,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TF Custom Estimator to Build a NN Autoencoder for Feature Extraction
Step1: 1. Define Dataset Metadata
Step2: 2. Define CSV Data Input Function
Step3: 3. Define Feature Columns
a. Load normalizarion params
Step4: b. Create normalized feature columns
Step5: 4. Define Autoencoder Model Function
Step6: 5. Run Experiment using Estimator Train_And_Evaluate
a. Set the parameters
Step7: b. Define TrainSpec and EvaluSpec
Step8: d. Run Experiment via train_and_evaluate
Step9: 6. Use the trained model to encode data (prediction)
Step10: Visualise Encoded Data | Python Code:
MODEL_NAME = 'auto-encoder-01'
TRAIN_DATA_FILES_PATTERN = 'data/data-*.csv'
RESUME_TRAINING = False
MULTI_THREADING = True
Explanation: TF Custom Estimator to Build a NN Autoencoder for Feature Extraction
End of explanation
FEATURE_COUNT = 64
HEADER = ['key']
HEADER_DEFAULTS = [[0]]
UNUSED_FEATURE_NAMES = ['key']
CLASS_FEATURE_NAME = 'CLASS'
FEATURE_NAMES = []
for i in range(FEATURE_COUNT):
HEADER += ['x_{}'.format(str(i+1))]
FEATURE_NAMES += ['x_{}'.format(str(i+1))]
HEADER_DEFAULTS += [[0.0]]
HEADER += [CLASS_FEATURE_NAME]
HEADER_DEFAULTS += [['NA']]
print("Header: {}".format(HEADER))
print("Features: {}".format(FEATURE_NAMES))
print("Class Feature: {}".format(CLASS_FEATURE_NAME))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
Explanation: 1. Define Dataset Metadata
End of explanation
def parse_csv_row(csv_row):
columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
features = dict(zip(HEADER, columns))
for column in UNUSED_FEATURE_NAMES:
features.pop(column)
target = features.pop(CLASS_FEATURE_NAME)
return features, target
def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
skip_header_lines=0,
num_epochs=None,
batch_size=200):
shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False
print("")
print("* data input_fn:")
print("================")
print("Input file(s): {}".format(files_name_pattern))
print("Batch size: {}".format(batch_size))
print("Epoch Count: {}".format(num_epochs))
print("Mode: {}".format(mode))
print("Shuffle: {}".format(shuffle))
print("================")
print("")
file_names = tf.matching_files(files_name_pattern)
dataset = data.TextLineDataset(filenames=file_names)
dataset = dataset.skip(skip_header_lines)
if shuffle:
dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)
num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row), num_parallel_calls=num_threads)
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_one_shot_iterator()
features, target = iterator.get_next()
return features, target
features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
Explanation: 2. Define CSV Data Input Function
End of explanation
df_params = pd.read_csv("data/params.csv", header=0, index_col=0)
len(df_params)
df_params['feature_name'] = FEATURE_NAMES
df_params.head()
Explanation: 3. Define Feature Columns
a. Load normalizarion params
End of explanation
def standard_scaler(x, mean, stdv):
return (x-mean)/stdv
def maxmin_scaler(x, max_value, min_value):
return (x-min_value)/(max_value-min_value)
def get_feature_columns():
feature_columns = {}
# feature_columns = {feature_name: tf.feature_column.numeric_column(feature_name)
# for feature_name in FEATURE_NAMES}
for feature_name in FEATURE_NAMES:
feature_max = df_params[df_params.feature_name == feature_name]['max'].values[0]
feature_min = df_params[df_params.feature_name == feature_name]['min'].values[0]
normalizer_fn = lambda x: maxmin_scaler(x, feature_max, feature_min)
feature_columns[feature_name] = tf.feature_column.numeric_column(feature_name,
normalizer_fn=normalizer_fn
)
return feature_columns
print(get_feature_columns())
Explanation: b. Create normalized feature columns
End of explanation
def autoencoder_model_fn(features, labels, mode, params):
feature_columns = list(get_feature_columns().values())
input_layer_size = len(feature_columns)
encoder_hidden_units = params.encoder_hidden_units
# decoder units are the reverse of the encoder units, without the middle layer (redundant)
decoder_hidden_units = encoder_hidden_units.copy()
decoder_hidden_units.reverse()
decoder_hidden_units.pop(0)
output_layer_size = len(FEATURE_NAMES)
he_initialiser = tf.contrib.layers.variance_scaling_initializer()
l2_regulariser = tf.contrib.layers.l2_regularizer(scale=params.l2_reg)
print("[{}]->{}-{}->[{}]".format(len(feature_columns)
,encoder_hidden_units
,decoder_hidden_units,
output_layer_size))
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
# input layer
input_layer = tf.feature_column.input_layer(features=features,
feature_columns=feature_columns)
# Adding Gaussian Noise to input layer
noisy_input_layer = input_layer + (params.noise_level * tf.random_normal(tf.shape(input_layer)))
# Dropout layer
dropout_layer = tf.layers.dropout(inputs=noisy_input_layer,
rate=params.dropout_rate,
training=is_training)
# # Dropout layer without Gaussian Nosing
# dropout_layer = tf.layers.dropout(inputs=input_layer,
# rate=params.dropout_rate,
# training=is_training)
# Encoder layers stack
encoding_hidden_layers = tf.contrib.layers.stack(inputs= dropout_layer,
layer= tf.contrib.layers.fully_connected,
stack_args=encoder_hidden_units,
#weights_initializer = he_init,
weights_regularizer =l2_regulariser,
activation_fn = tf.nn.relu
)
# Decoder layers stack
decoding_hidden_layers = tf.contrib.layers.stack(inputs=encoding_hidden_layers,
layer=tf.contrib.layers.fully_connected,
stack_args=decoder_hidden_units,
#weights_initializer = he_init,
weights_regularizer =l2_regulariser,
activation_fn = tf.nn.relu
)
# Output (reconstructed) layer
output_layer = tf.layers.dense(inputs=decoding_hidden_layers,
units=output_layer_size, activation=None)
# Encoding output (i.e., extracted features) reshaped
encoding_output = tf.squeeze(encoding_hidden_layers)
# Reconstruction output reshaped (for serving function)
reconstruction_output = tf.squeeze(tf.nn.sigmoid(output_layer))
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
# Convert predicted_indices back into strings
predictions = {
'encoding': encoding_output,
'reconstruction': reconstruction_output
}
export_outputs = {
'predict': tf.estimator.export.PredictOutput(predictions)
}
# Provide an estimator spec for `ModeKeys.PREDICT` modes.
return tf.estimator.EstimatorSpec(mode,
predictions=predictions,
export_outputs=export_outputs)
# Define loss based on reconstruction and regularization
# reconstruction_loss = tf.losses.mean_squared_error(tf.squeeze(input_layer), reconstruction_output)
# loss = reconstruction_loss + tf.losses.get_regularization_loss()
reconstruction_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.squeeze(input_layer), logits=tf.squeeze(output_layer))
loss = reconstruction_loss + tf.losses.get_regularization_loss()
# Create Optimiser
optimizer = tf.train.AdamOptimizer(params.learning_rate)
# Create training operation
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Calculate root mean squared error as additional eval metric
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(
tf.squeeze(input_layer), reconstruction_output)
}
# Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
estimator_spec = tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
return estimator_spec
def create_estimator(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=autoencoder_model_fn,
params=hparams,
config=run_config)
print("")
print("Estimator Type: {}".format(type(estimator)))
print("")
return estimator
Explanation: 4. Define Autoencoder Model Function
End of explanation
TRAIN_SIZE = 2000
NUM_EPOCHS = 1000
BATCH_SIZE = 100
NUM_EVAL = 10
TOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))
hparams = tf.contrib.training.HParams(
num_epochs = NUM_EPOCHS,
batch_size = BATCH_SIZE,
encoder_hidden_units=[30,3],
learning_rate = 0.01,
l2_reg = 0.0001,
noise_level = 0.0,
max_steps = TOTAL_STEPS,
dropout_rate = 0.05
)
model_dir = 'trained_models/{}'.format(MODEL_NAME)
run_config = tf.contrib.learn.RunConfig(
save_checkpoints_steps=CHECKPOINT_STEPS,
tf_random_seed=19830610,
model_dir=model_dir
)
print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", TOTAL_STEPS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each",NUM_EPOCHS/NUM_EVAL," epochs")
print("Save Checkpoint After",CHECKPOINT_STEPS,"steps")
Explanation: 5. Run Experiment using Estimator Train_And_Evaluate
a. Set the parameters
End of explanation
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode = tf.contrib.learn.ModeKeys.TRAIN,
num_epochs=hparams.num_epochs,
batch_size=hparams.batch_size
),
max_steps=hparams.max_steps,
hooks=None
)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.EVAL,
num_epochs=1,
batch_size=hparams.batch_size
),
# exporters=[tf.estimator.LatestExporter(
# name="encode", # the name of the folder in which the model will be exported to under export
# serving_input_receiver_fn=csv_serving_input_fn,
# exports_to_keep=1,
# as_text=True)],
steps=None,
hooks=None
)
Explanation: b. Define TrainSpec and EvaluSpec
End of explanation
if not RESUME_TRAINING:
print("Removing previous artifacts...")
shutil.rmtree(model_dir, ignore_errors=True)
else:
print("Resuming training...")
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
estimator = create_estimator(run_config, hparams)
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
Explanation: d. Run Experiment via train_and_evaluate
End of explanation
import itertools
DATA_SIZE = 2000
input_fn = lambda: csv_input_fn(
TRAIN_DATA_FILES_PATTERN,
mode=tf.contrib.learn.ModeKeys.INFER,
num_epochs=1,
batch_size=500
)
estimator = create_estimator(run_config, hparams)
predictions = estimator.predict(input_fn=input_fn)
predictions = itertools.islice(predictions, DATA_SIZE)
predictions = list(map(lambda item: list(item["encoding"]), predictions))
print(predictions[:5])
Explanation: 6. Use the trained model to encode data (prediction)
End of explanation
y = pd.read_csv("data/data-01.csv", header=None, index_col=0)[65]
data_reduced = pd.DataFrame(predictions, columns=['c1','c2','c3'])
data_reduced['class'] = y
data_reduced.head()
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=data_reduced.c2/1000000, ys=data_reduced.c3/1000000, zs=data_reduced.c1/1000000, c=data_reduced['class'], marker='o')
plt.show()
Explanation: Visualise Encoded Data
End of explanation |
6,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Load the data
Build features from the data and format for model
Load and fit the data with the random forest model
Display results
Code
Import Necessary Tools and Libraries
Step2: Load the Data
We will use data from Datacube. Important features needed in the initial dataset are the following in order
Step7: Generate Median Composite
A clean mask is generated using the pixel_qa values from the dataset. From the clean mask a median temporal composite is created. The pixel_qa is then dropped as the Random Forest Classifier does not explicitly use it.
Step8: Build Features
The methods used for creating the needed features for classification are defined below. We will use our TemporalCompositor from above to give us a cloud-free composite with which we will use to build our necessary features in our build_features function.
Step13: Classifier
The code to load in the model and to classify the given feature set. The order of features matters becuase we are using the model exported from the previous notebook.
Step18: Display
Code for displaying the results in ways that are easy to interpret.
Step19: Results
Load in Data
Load a dataset from Datacube.
Step20: Build Features
Load our Classifier object and build the features.
Step21: Classify the Set of Features
Step22: Display the Results
Image Display
Step23: Map Display
Step26: Example Use
Python Module for Explicit Forest Classification
To show how this proof of concept code may be used; we've created an example Python module. The module can be used to determine whether an input feature set is explicitly a forest or not a forest. This example module can be easily modified to do the same for the other classification labels. | Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import datacube
import datetime
import folium
import numpy as np
import pandas as pd
import utils.data_cube_utilities.dc_display_map as dm
import xarray as xr
from folium import plugins
from sklearn.externals import joblib
from sklearn.preprocessing import minmax_scale
from utils.data_cube_utilities.dc_frac import frac_coverage_classify
from utils.data_cube_utilities.dc_mosaic import ls8_unpack_qa
from utils.data_cube_utilities.dc_mosaic import create_median_mosaic
from utils.data_cube_utilities.dc_rgb import rgb
Explanation: Classification
Load the data
Build features from the data and format for model
Load and fit the data with the random forest model
Display results
Code
Import Necessary Tools and Libraries
End of explanation
def load_dc_data():
Loads the dataset from Datacube
dc = datacube.Datacube()
params = dict(platform = 'LANDSAT_8',
product = 'ls8_lasrc_uruguay',
latitude = (-34.44988376, -34.096445),
longitude = (-56.29119062, -55.24653668),
time = ('2016-01-01', '2017-01-01'),
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'pixel_qa'])
dataset = dc.load(**params)
return dataset
Explanation: Load the Data
We will use data from Datacube. Important features needed in the initial dataset are the following in order:
1. red
2. green
3. blue
4. nir
5. swir1
6. swir2
7. pixel_qa
End of explanation
class TemporalCompositor:
A TemporalCompositor object for creating median composites over a temporal dimension.
Attributes:
dataset (xarray.Dataset): The dataset used in the compositing.
def __init__(self, dataset):
Initialize object and set the dataset.
Args:
dataset (xarray.Dataset): The dataset used in the compositing.
self.dataset = dataset
def clean_mask_ls8(self):
A function to create a clean mask for compositing.
Returns:
The clean mask.
water_mask = ls8_unpack_qa(self.dataset.pixel_qa, cover_type = "water")
clear_mask = ls8_unpack_qa(self.dataset.pixel_qa, cover_type = "clear")
clean_mask = np.logical_or(water_mask, clear_mask)
return clean_mask
def create_temporal_composite(self):
A function to create the median temporal composite.
Returns:
The median temporal composite.
clean = self.clean_mask_ls8()
composite = create_median_mosaic(self.dataset, clean_mask = clean)
composite = composite.drop('pixel_qa')
return composite
Explanation: Generate Median Composite
A clean mask is generated using the pixel_qa values from the dataset. From the clean mask a median temporal composite is created. The pixel_qa is then dropped as the Random Forest Classifier does not explicitly use it.
End of explanation
def NDVI(dataset: xr.Dataset) -> xr.DataArray:
return (dataset.nir - dataset.red)/(dataset.nir + dataset.red).rename("NDVI")
def NBR(dataset: xr.Dataset) -> xr.DataArray:
return ((dataset.nir - dataset.swir2) / (dataset.swir2 + dataset.nir)).rename("NBR")
def NDWI_2(dataset: xr.Dataset) -> xr.DataArray:
return (dataset.green - dataset.nir)/(dataset.green + dataset.nir).rename("NDWI_2")
def SCI(dataset: xr.Dataset) -> xr.DataArray:
return ((dataset.swir1 - dataset.nir)/(dataset.swir1 + dataset.nir)).rename("SCI")
def PNDVI(dataset: xr.Dataset) -> xr.DataArray:
nir = dataset.nir
green = dataset.green
blue = dataset.blue
red = dataset.red
return ((nir - (green + red + blue))/(nir + (green + red + blue))).rename("PNDVI")
def CVI(dataset: xr.Dataset) -> xr.DataArray:
return (dataset.nir * (dataset.red / (dataset.green * dataset.green))).rename("CVI")
def CCCI(dataset: xr.Dataset) -> xr.DataArray:
return ((dataset.nir - dataset.red)/(dataset.nir + dataset.red)).rename("CCCI")
def NBR2(dataset: xr.Dataset) -> xr.DataArray:
return (dataset.swir1 - dataset.swir2)/(dataset.swir1 + dataset.swir2)
def coefficient_of_variance(da:xr.DataArray):
return da.std(dim = "time")/da.mean(dim = "time")
def NDVI_coeff_var(ds, mask = None):
ds_ndvi = NDVI(ds)
masked_ndvi = ds_ndvi.where(mask)
return coefficient_of_variance(masked_ndvi)
def fractional_cover_2d(dataset: xr.Dataset) -> xr.DataArray:
return frac_coverage_classify(dataset, clean_mask= np.ones(dataset.red.values.shape).astype(bool))
Explanation: Build Features
The methods used for creating the needed features for classification are defined below. We will use our TemporalCompositor from above to give us a cloud-free composite with which we will use to build our necessary features in our build_features function.
End of explanation
class Classifier:
A Classifier object for performing the classification on a dataset.
Attributes:
rf (RandomForestClassifier): The RandomForestClassifier used in the classification.
def __init__(self, model_location='./classifiers/models/random_forest.model'):
Initializes the data and loads the binary model
Args:
model_location (string): The location of the RandomForestClassifier's exported binary.
self.rf = joblib.load(model_location)
def classify(self, features):
A function to classify the given dataset.
Args:
features (xarray.Dataset): The set of features to run the classifier with.
Returns:
An Xarray Dataset containing the given features with the classification results appended.
X = features.values
X = np.array_split(X, 100)
y_pred = []
for i in range(len(X)):
y_pred.append(self.rf.predict(X[i]))
y_pred = np.concatenate(y_pred)
df = pd.DataFrame(y_pred, columns=['label'])
features['label'] = df.values
return features
def build_features(self, dataset):
Builds the features used in classification.
Args:
dataset (xarray.Dataset): The dataset to use for building the features.
Returns:
A Pandas DataFrame of the given features with the built features appended.
features = xr.Dataset()
compositor = TemporalCompositor(dataset)
composite = compositor.create_temporal_composite()
features = features.merge(composite)
feature_list = (NDVI,
NDVI_coeff_var,
PNDVI,
NBR,
NBR2,
NDWI_2,
SCI,
CVI,
CCCI,
fractional_cover_2d
)
clean = compositor.clean_mask_ls8()
for i in range(len(feature_list)):
if(feature_list[i].__name__ == 'NDVI_coeff_var'):
features[feature_list[i].__name__] = feature_list[i](dataset, mask = clean)
elif(feature_list[i].__name__ == 'fractional_cover_2d'):
features = features.merge(fractional_cover_2d(composite))
else:
features[feature_list[i].__name__] = feature_list[i](composite)
features.NDVI_coeff_var.values[ np.isnan(features.NDVI_coeff_var.values)] = 0
return features.to_dataframe()
Explanation: Classifier
The code to load in the model and to classify the given feature set. The order of features matters becuase we are using the model exported from the previous notebook.
End of explanation
class FeaturesDisplay:
A FeaturesDisplay object for presenting classification results.
Attributes:
dataset (xarray.Dataset): The features DataFrame represented as an Xarray Dataset.
def __init__(self, features: xr.Dataset):
Initializes the FeaturesDisplay object.
Args:
features (pandas.DataFrame): A classified features DataFrame with a label key.
self.dataset = xr.Dataset.from_dataframe(features)
def images(self):
Generates multiple images of the features based on classification results.
landuse = ('Forest',
'Misc',
'Naturalgrassland',
'Prairie',
'Summercrops'
)
dataset = self.dataset
for i in range(len(landuse)):
tmp = dataset.where(dataset.label == landuse[i])
print("%s:" % landuse[i])
rgb(tmp, bands= ["swir1","nir","red"], width= 20)
def _get_canvas(self, key:str) -> np.array:
canvas = np.zeros((len(self.dataset.latitude.values),
len(self.dataset.longitude.values),
4))
paint_here = self.dataset.label == key
canvas[paint_here] = np.array([255, 255, 255, 179])
canvas[~paint_here] = np.array([0, 0, 0, 0])
return canvas
def map_overlay(self, key, color=None):
Maps classifications using Folium.
Args:
key (string): The classification to map from the following: (Forest, Misc, Naturalgrassland, Prairie, Summercrops)
color: Set to False to disable colorized overlay.
dataset = self.dataset
tmp = dataset.where(dataset.label == key)
latitudes = (min(dataset.latitude.values), max(dataset.latitude.values))
longitudes = (min(dataset.longitude.values), max(dataset.longitude.values))
zoom_level = dm._degree_to_zoom_level(latitudes[0], latitudes[1])
# I don't know why, but this makes Folium work for these labels
if(key == 'Summercrops' or key == 'Naturalgrassland'):
mult = 255/3
else:
mult = 1
print("%s:" % key)
if(color == None):
r = tmp.nir.values
g = tmp.red.values
b = tmp.green.values
r[np.isnan(r)] = 0
g[np.isnan(g)] = 0
b[np.isnan(b)] = 0
minmax_scale(r, feature_range=(0,255), copy=False)
minmax_scale(g, feature_range=(0,255), copy=False)
minmax_scale(b, feature_range=(0,255), copy=False)
rgb_stack = np.dstack((r,g,b))
a = np.ones(r.shape) * 128
a[np.where(r == 0)] = 0
rgb_uint8 = (rgb_stack / mult).astype(np.uint8)
rgb_uint8 = np.dstack((rgb_uint8, a))
else:
rgb_uint8 = self._get_canvas(key).astype(np.uint8)
m = folium.Map(location=[np.average(latitudes),
np.average(longitudes)],
zoom_start=zoom_level+1,
tiles=" http://mt1.google.com/vt/lyrs=y&z={z}&x={x}&y={y}",
attr="Google")
m.add_child(plugins.ImageOverlay(np.flipud(rgb_uint8), \
bounds =[[min(latitudes), min(longitudes)], [max(latitudes), max(longitudes)]]))
folium.LayerControl().add_to(m)
return m
Explanation: Display
Code for displaying the results in ways that are easy to interpret.
End of explanation
dataset = load_dc_data()
Explanation: Results
Load in Data
Load a dataset from Datacube.
End of explanation
classifier = Classifier()
features = classifier.build_features(dataset)
features.head(10)
Explanation: Build Features
Load our Classifier object and build the features.
End of explanation
features = classifier.classify(features)
features.head(10)
Explanation: Classify the Set of Features
End of explanation
display = FeaturesDisplay(features)
display.images()
Explanation: Display the Results
Image Display
End of explanation
display.map_overlay('Forest', color=False)
display.map_overlay('Misc', color=False)
display.map_overlay('Naturalgrassland', color=False)
display.map_overlay('Prairie', color=False)
display.map_overlay('Summercrops', color=False)
Explanation: Map Display
End of explanation
# This imports the actual classifier.
from classifiers.forest_classifier import ForestClassifier
Load in our dataset. This is just an example. You can use a different
dataset as long as it contains the appropriate features and they are
in the order that the classifier needs them.
dataset = load_dc_data()
Generate a clean mask using the method in this same notebook.
This is just an example. You can supply your own mask as long
as it is of boolean type or boolean-like (1 or 0).
mask = TemporalCompositor(dataset).clean_mask_ls8()
# Running the actual classifier with our example dataset and mask.
forest_classifier = ForestClassifier('./classifiers/models/random_forest.model')
forest = forest_classifier.classify(dataset, mask)
forest
Explanation: Example Use
Python Module for Explicit Forest Classification
To show how this proof of concept code may be used; we've created an example Python module. The module can be used to determine whether an input feature set is explicitly a forest or not a forest. This example module can be easily modified to do the same for the other classification labels.
End of explanation |
6,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Segmentation
Step1: Read Data and Select Seed Point(s)
We first load a T1 MRI brain scan and select our seed point(s). If you are unfamiliar with the anatomy you can use the preselected seed point specified below, just uncomment the line.
Step2: ConnectedThreshold
We start by using explicitly specified thresholds, you should modify these (lower/upper) to see the effects on the
resulting segmentation.
Step3: ConfidenceConnected
This region growing algorithm allows the user to implicitly specify the threshold bounds based on the statistics estimated from the seed points, $\mu\pm c\sigma$. This algorithm has some flexibility which you should familiarize yourself with
Step4: VectorConfidenceConnected
We first load a T2 image from the same person and combine it with the T1 image to create a vector image. This region growing algorithm is similar to the previous one, ConfidenceConnected, and allows the user to implicitly specify the threshold bounds based on the statistics estimated from the seed points. The main difference is that in this case we are using the Mahalanobis and not the intensity difference.
Step5: Clean up, Clean up...
Use of low level segmentation algorithms such as region growing is often followed by a clean up step. In this step we fill holes and remove small connected components. Both of these operations are achieved by using binary morphological operations, opening (BinaryMorphologicalOpening) to remove small connected components and closing (BinaryMorphologicalClosing) to fill holes.
SimpleITK supports several shapes for the structuring elements (kernels) including
Step6: And now we compare the original segmentation to the segmentation after clean up (using the GUI you can zoom in on the region of interest for a closer look). | Python Code:
# To use interactive plots (mouse clicks, zooming, panning) we use the notebook back end. We want our graphs
# to be embedded in the notebook, inline mode, this combination is defined by the magic "%matplotlib notebook".
%matplotlib notebook
import SimpleITK as sitk
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
import gui
# Using an external viewer (ITK-SNAP or 3D Slicer) we identified a visually appealing window-level setting
T1_WINDOW_LEVEL = (1050, 500)
Explanation: Segmentation: Region Growing <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F30_Segmentation_Region_Growing.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
In this notebook we use one of the simplest segmentation approaches, region growing. We illustrate
the use of three variants of this family of algorithms. The common theme for all algorithms is that a voxel's neighbor is considered to be in the same class if its intensities are similar to the current voxel. The definition of similar is what varies:
<b>ConnectedThreshold</b>: The neighboring voxel's intensity is within explicitly specified thresholds.
<b>ConfidenceConnected</b>: The neighboring voxel's intensity is within the implicitly specified bounds $\mu\pm c\sigma$, where $\mu$ is the mean intensity of the seed points, $\sigma$ their standard deviation and $c$ a user specified constant.
<b>VectorConfidenceConnected</b>: A generalization of the previous approach to vector valued images, for instance multi-spectral images or multi-parametric MRI. The neighboring voxel's intensity vector is within the implicitly specified bounds using the Mahalanobis distance $\sqrt{(\mathbf{x}-\mathbf{\mu})^T\Sigma^{-1}(\mathbf{x}-\mathbf{\mu})}<c$, where $\mathbf{\mu}$ is the mean of the vectors at the seed points, $\Sigma$ is the covariance matrix and $c$ is a user specified constant.
We will illustrate the usage of these three filters using a cranial MRI scan (T1 and T2) and attempt to segment one of the ventricles.
End of explanation
img_T1 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT1.nrrd"))
# Rescale the intensities and map them to [0,255], these are the default values for the output
# We will use this image to display the results of segmentation
img_T1_255 = sitk.Cast(
sitk.IntensityWindowing(
img_T1,
windowMinimum=T1_WINDOW_LEVEL[1] - T1_WINDOW_LEVEL[0] / 2.0,
windowMaximum=T1_WINDOW_LEVEL[1] + T1_WINDOW_LEVEL[0] / 2.0,
),
sitk.sitkUInt8,
)
point_acquisition_interface = gui.PointDataAquisition(img_T1, window_level=(1050, 500))
# preselected seed point in the left ventricle
point_acquisition_interface.set_point_indexes([(132, 142, 96)])
initial_seed_point_indexes = point_acquisition_interface.get_point_indexes()
Explanation: Read Data and Select Seed Point(s)
We first load a T1 MRI brain scan and select our seed point(s). If you are unfamiliar with the anatomy you can use the preselected seed point specified below, just uncomment the line.
End of explanation
seg_explicit_thresholds = sitk.ConnectedThreshold(
img_T1, seedList=initial_seed_point_indexes, lower=100, upper=170
)
# Overlay the segmentation onto the T1 image
gui.MultiImageDisplay(
image_list=[sitk.LabelOverlay(img_T1_255, seg_explicit_thresholds)],
title_list=["connected threshold result"],
)
Explanation: ConnectedThreshold
We start by using explicitly specified thresholds, you should modify these (lower/upper) to see the effects on the
resulting segmentation.
End of explanation
seg_implicit_thresholds = sitk.ConfidenceConnected(
img_T1,
seedList=initial_seed_point_indexes,
numberOfIterations=0,
multiplier=2,
initialNeighborhoodRadius=1,
replaceValue=1,
)
gui.MultiImageDisplay(
image_list=[sitk.LabelOverlay(img_T1_255, seg_implicit_thresholds)],
title_list=["confidence connected result"],
)
Explanation: ConfidenceConnected
This region growing algorithm allows the user to implicitly specify the threshold bounds based on the statistics estimated from the seed points, $\mu\pm c\sigma$. This algorithm has some flexibility which you should familiarize yourself with:
* The "multiplier" parameter is the constant $c$ from the formula above.
* You can specify a region around each seed point "initialNeighborhoodRadius" from which the statistics are estimated, see what happens when you set it to zero.
* The "numberOfIterations" allows you to rerun the algorithm. In the first run the bounds are defined by the seed voxels you specified, in the following iterations $\mu$ and $\sigma$ are estimated from the segmented points and the region growing is updated accordingly.
End of explanation
img_T2 = sitk.ReadImage(fdata("nac-hncma-atlas2013-Slicer4Version/Data/A1_grayT2.nrrd"))
img_multi = sitk.Compose(img_T1, img_T2)
seg_implicit_threshold_vector = sitk.VectorConfidenceConnected(
img_multi, initial_seed_point_indexes, numberOfIterations=2, multiplier=4
)
gui.MultiImageDisplay(
image_list=[sitk.LabelOverlay(img_T1_255, seg_implicit_threshold_vector)],
title_list=["vector confidence connected result"],
)
Explanation: VectorConfidenceConnected
We first load a T2 image from the same person and combine it with the T1 image to create a vector image. This region growing algorithm is similar to the previous one, ConfidenceConnected, and allows the user to implicitly specify the threshold bounds based on the statistics estimated from the seed points. The main difference is that in this case we are using the Mahalanobis and not the intensity difference.
End of explanation
vectorRadius = (1, 1, 1)
kernel = sitk.sitkBall
seg_implicit_thresholds_clean = sitk.BinaryMorphologicalClosing(
seg_implicit_thresholds, vectorRadius, kernel
)
Explanation: Clean up, Clean up...
Use of low level segmentation algorithms such as region growing is often followed by a clean up step. In this step we fill holes and remove small connected components. Both of these operations are achieved by using binary morphological operations, opening (BinaryMorphologicalOpening) to remove small connected components and closing (BinaryMorphologicalClosing) to fill holes.
SimpleITK supports several shapes for the structuring elements (kernels) including:
* sitkAnnulus
* sitkBall
* sitkBox
* sitkCross
The size of the kernel can be specified as a scalar (same for all dimensions) or as a vector of values, size per dimension.
The following code cell illustrates the results of such a clean up, using closing to remove holes in the original segmentation.
End of explanation
gui.MultiImageDisplay(
image_list=[
sitk.LabelOverlay(img_T1_255, seg_implicit_thresholds),
sitk.LabelOverlay(img_T1_255, seg_implicit_thresholds_clean),
],
shared_slider=True,
title_list=["before morphological closing", "after morphological closing"],
)
Explanation: And now we compare the original segmentation to the segmentation after clean up (using the GUI you can zoom in on the region of interest for a closer look).
End of explanation |
6,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An example of using Jupyter for Documenting and Automating BDD Style Tests
This is a sample document which contains both software feature requirements and its corresponding manual and automated tests. This is an executable document, which can be shared between Business Analyst, Developer, (manual and/or automation) Testers and other stakeholders.
Aggregating all this information in a single executable file helps keep the requirements, manual tests, and automated tests synchronized, and makes outdated requirements, manual test steps, or automation test steps easy to identify.
Because the document is written in Jupyter (IPython) notebook, tests can easily be arranged in BDD (Gherkin) style without requiring any third party BDD framework.
The tests can be executed one cell at a time, or all in one go. These tests can be discovered and run using Pytest, which means, these tests can also be imported into Continuous Integration test environment such as Jenkins.
Requirements Summary
|Scenario | Can Not Post Comment as Anonymous|
|
Step1: And I am not logged in to any google account.
|Step|Actions|Expected Results|
|
Step2: When I post a comment to a blog post.
|Step|Actions|Expected Results|
|
Step3: Then the comment input must be successful.
|Step|Actions|Expected Results|
|
Step4: But I must be prompted to login first before post can be completed.
|Step|Actions|Expected Results|
| | Python Code:
from marigoso import Test
browser = Test().launch_browser("Firefox")
browser.get_url("https://www.blogger.com/")
header = browser.get_element("tag=h2")
assert header.text == "Sign in to continue to Blogger"
Explanation: An example of using Jupyter for Documenting and Automating BDD Style Tests
This is a sample document which contains both software feature requirements and its corresponding manual and automated tests. This is an executable document, which can be shared between Business Analyst, Developer, (manual and/or automation) Testers and other stakeholders.
Aggregating all this information in a single executable file helps keep the requirements, manual tests, and automated tests synchronized, and makes outdated requirements, manual test steps, or automation test steps easy to identify.
Because the document is written in Jupyter (IPython) notebook, tests can easily be arranged in BDD (Gherkin) style without requiring any third party BDD framework.
The tests can be executed one cell at a time, or all in one go. These tests can be discovered and run using Pytest, which means, these tests can also be imported into Continuous Integration test environment such as Jenkins.
Requirements Summary
|Scenario | Can Not Post Comment as Anonymous|
|:-------:|------------------------------------------------------------------------|
|Given| I am a Blogger anonymous user. |
|And | I am not logged in to any google account. |
|When | I post a comment to a blog post. |
|Then | the comment input must be successful. |
|But | I must be prompted to login first before post can be completed. |
Manual and Automated Test Steps
Given I am a Blogger anonymous user.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|01| Launch a browser and navigate to Blogger website.| The loaded page should contain a header asking you to sign in to Blogger.|
End of explanation
browser.get_url("https://mail.google.com/")
header = browser.get_element("tag=h2")
assert header.text == "Sign in to continue to Gmail"
Explanation: And I am not logged in to any google account.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|02| Navigate to any other Google services you are subscribed to, e.g Gmail.| The loaded page should contain a header asking you to sign in to that Google service.|
End of explanation
browser.get_url("http://pytestuk.blogspot.co.uk/2015/11/testing.html")
browser.press_available("id=cookieChoiceDismiss")
iframe = browser.get_element("css=div#bc_0_0T_box iframe")
browser.switch_to.frame(iframe)
browser.kb_type("id=commentBodyField", "An example of Selenium automation in Python.")
assert browser.select_text("id=identityMenu", "Google Account")
Explanation: When I post a comment to a blog post.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|03| Navigate to a particular post in Blogger.| Page must load successfully.|
|04| If there is a Cookie Notice from Google, dismiss it.| Cookie notice must be dismissed successfully.|
|05| Provide the following input: | Input must be successfull.|
| | Comment body| An example of Selenium automation in Python.|
| | Comment as | Google Account|
End of explanation
browser.submit_btn("Publish")
assert not browser.is_available("id=main-error")
Explanation: Then the comment input must be successful.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|06| Press the "Publish" button at the buttom of the page.| The page must be submitted without errors.|
End of explanation
header = browser.get_element("tag=h2")
assert header.text == "Sign in to continue to Blogger"
browser.quit()
import time
localtime = time.asctime(time.localtime(time.time()))
print("All tests passed on {}.".format(localtime))
Explanation: But I must be prompted to login first before post can be completed.
|Step|Actions|Expected Results|
|:------:|---|----------------|
|07| Observe the landing page after submitting the "Publish" button.| The page must ask you to login to Blogger.|
End of explanation |
6,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create some text
Step2: Apply regex | Python Code:
# Load regex package
import re
Explanation: Title: Match Any Of A List Of Characters
Slug: match_any_of_a_list_of_symbols
Summary: Match Any Of A List Of Characters
Date: 2016-05-01 12:00
Category: Regex
Tags: Basics
Authors: Chris Albon
Based on: Regular Expressions Cookbook
Preliminaries
End of explanation
# Create a variable containing a text string
text = 'The quick brown fox jumped over the lazy brown bear.'
Explanation: Create some text
End of explanation
# Find all instances of any vowel
re.findall(r'[aeiou]', text)
Explanation: Apply regex
End of explanation |
6,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CB Model
The following tries to reproduce Fig 10 from Hawkes, Jalali, Colquhoun (1992). First we create the $Q$-matrix for this particular model from Hawkes, Jalali, Colquhoun (1992). First we create the $Q$-matrix for this particular model.
Step1: We then create a function to plot each exponential component in the asymptotic expression. An explanation on how to get to these plots can be found in the CH82 notebook.
Step2: For practical reasons, we plot the excess shut-time probability densities in the graph below. In all other particulars, it should reproduce Fig. 10 from Hawkes, Jalali, Colquhoun (1992) | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from dcprogs.likelihood import QMatrix
tau = 0.2
qmatrix = QMatrix([ [-2, 1, 1, 0],
[ 1, -101, 0, 100],
[50, 0, -50, 0],
[ 0, 5.6, 0, -5.6]], 1)
Explanation: CB Model
The following tries to reproduce Fig 10 from Hawkes, Jalali, Colquhoun (1992). First we create the $Q$-matrix for this particular model.
End of explanation
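As a quick sanity check (an addition to the original notebook), the rows of a Q-matrix should sum to zero; a small NumPy verification under that assumption:
import numpy as np
Q = np.array([[ -2,    1,   1,    0],
              [  1, -101,   0,  100],
              [ 50,    0, -50,    0],
              [  0,  5.6,   0, -5.6]])
print(np.allclose(Q.sum(axis=1), 0.0))  # expect True: every row sums to zero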
from dcprogs.likelihood._methods import exponential_pdfs
def plot_exponentials(qmatrix, tau, x0=None, x=None, ax=None, nmax=2, shut=False):
from dcprogs.likelihood import missed_events_pdf
from dcprogs.likelihood._methods import exponential_pdfs
if ax is None:
fig, ax = plt.subplots(1,1)
if x is None:
x = np.arange(0, 5*tau, tau/10)
if x0 is None:
x0 = x
pdf = missed_events_pdf(qmatrix, tau, nmax=nmax, shut=shut)
graphb = [x0, pdf(x0+tau), '-k']
functions = exponential_pdfs(qmatrix, tau, shut=shut)
plots = ['.r', '.b', '.g']
together = None
for f, p in zip(functions[::-1], plots):
if together is None:
together = f(x+tau)
else:
together = together + f(x+tau)
graphb.extend([x, together, p])
ax.plot(*graphb)
Explanation: We then create a function to plot each exponential component in the asymptotic expression. An explanation on how to get to these plots can be found in the CH82 notebook.
End of explanation
from dcprogs.likelihood import missed_events_pdf
fig, ax = plt.subplots(2,2, figsize=(12,9))
x = np.arange(0, 10, tau/100)
pdf = missed_events_pdf(qmatrix, 0.2, nmax=2, shut=True)
ax[0,0].plot(x, pdf(x), '-k')
ax[0,0].set_xlabel('time $t$ (ms)')
ax[0,0].set_ylabel('Shut-time probability density $f_{\\bar{\\tau}=0.2}(t)$')
ax[0,1].set_xlabel('time $t$ (ms)')
tau = 0.2
x, x0 = np.arange(0, 5*tau, tau/10.0), np.arange(0, 5*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax[0,1], x=x, x0=x0)
ax[0,1].set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax[0,1].set_xlabel('time $t$ (ms)')
ax[0,1].yaxis.tick_right()
ax[0,1].yaxis.set_label_position("right")
tau = 0.05
x, x0 = np.arange(0, 5*tau, tau/10.0), np.arange(0, 5*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax[1,0], x=x, x0=x0)
ax[1,0].set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax[1,0].set_xlabel('time $t$ (ms)')
tau = 0.5
x, x0 = np.arange(0, 5*tau, tau/10.0), np.arange(0, 5*tau, tau/100)
plot_exponentials(qmatrix, tau, shut=True, ax=ax[1,1], x=x, x0=x0)
ax[1,1].set_ylabel('Excess shut-time probability density $f_{{\\bar{{\\tau}}={tau}}}(t)$'.format(tau=tau))
ax[1,1].set_xlabel('time $t$ (ms)')
ax[1,1].yaxis.tick_right()
ax[1,1].yaxis.set_label_position("right")
ax[0,1].legend(['a','b','c','d'], loc='best')
fig.tight_layout()
from dcprogs.likelihood import DeterminantEq, find_root_intervals, find_lower_bound_for_roots
from numpy.linalg import eig
tau = 0.5
determinant = DeterminantEq(qmatrix, tau).transpose()
x = np.arange(-100, -3, 0.1)
matrix = qmatrix.transpose()
qaffa = np.array(np.dot(matrix.af, matrix.fa), dtype=np.float128)
aa = np.array(matrix.aa, dtype=np.float128)
def anaH(s):
from numpy.linalg import det
from numpy import identity, exp
arg0 = 1e0/np.array(-2-s, dtype=np.float128)
arg1 = np.array(-(2+s) * tau, dtype=np.float128)
return qaffa * (exp(arg1) - np.array(1e0, dtype=np.float128)) * arg0 + aa
def anadet(s):
from numpy.linalg import det
from numpy import identity, exp
s = np.array(s, dtype=np.float128)
matrix = s*identity(qaffa.shape[0], dtype=np.float128) - anaH(s)
return matrix[0,0] * matrix[1, 1] * matrix[2, 2] \
+ matrix[1,0] * matrix[2, 1] * matrix[0, 2] \
+ matrix[0,1] * matrix[1, 2] * matrix[2, 0] \
- matrix[2,0] * matrix[1, 1] * matrix[0, 2] \
- matrix[1,0] * matrix[0, 1] * matrix[2, 2] \
- matrix[2,1] * matrix[1, 2] * matrix[0, 0]
x = np.arange(-100, -3, 1e-2)
# For some reason gcc builds with regular doubles have trouble finding the
# roots with alpha=2.0 the default so override it here
print("Lower bound for all roots is {}".format(find_lower_bound_for_roots(determinant, alpha=1.9)))
print(eig(np.array(anaH(-160 ), dtype='float64'))[0])
print(anadet(-104))
Explanation: For practical reasons, we plot the excess shut-time probability densities in the graph below. In all other particulars, it should reproduce Fig. 10 from Hawkes, Jalali, Colquhoun (1992)
End of explanation |
6,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Least squares
One more time, let's load up the NSFG data.
Step2: The following function computes the intercept and slope of the least squares fit.
Step3: Here's the least squares fit to birth weight as a function of mother's age.
Step4: The intercept is often easier to interpret if we evaluate it at the mean of the independent variable.
Step5: And the slope is easier to interpret if we express it in pounds per decade (or ounces per year).
Step6: The following function evaluates the fitted line at the given xs.
Step7: And here's an example.
Step8: Here's a scatterplot of the data with the fitted line.
Step9: Residuals
The following function computes the residuals.
Step10: Now we can add the residuals as a column in the DataFrame.
Step11: To visualize the residuals, I'll split the respondents into groups by age, then plot the percentiles of the residuals versus the average age in each group.
First I'll make the groups and compute the average age in each group.
Step12: Next I'll compute the CDF of the residuals in each group.
Step13: The following function plots percentiles of the residuals against the average age in each group.
Step14: The following figure shows the 25th, 50th, and 75th percentiles.
Curvature in the residuals suggests a non-linear relationship.
Step17: Sampling distribution
To estimate the sampling distribution of inter and slope, I'll use resampling.
Step18: The following function resamples the given dataframe and returns lists of estimates for inter and slope.
Step19: Here's an example.
Step20: The following function takes a list of estimates and prints the mean, standard error, and 90% confidence interval.
Step21: Here's the summary for inter.
Step22: And for slope.
Step23: Exercise
Step24: Visualizing uncertainty
To show the uncertainty of the estimated slope and intercept, we can generate a fitted line for each resampled estimate and plot them on top of each other.
Step25: Or we can make a neater (and more efficient plot) by computing fitted lines and finding percentiles of the fits for each value of the dependent variable.
Step26: This example shows the confidence interval for the fitted values at each mother's age.
Step27: Coefficient of determination
The coefficient compares the variance of the residuals to the variance of the dependent variable.
Step28: For birth weight and mother's age $R^2$ is very small, indicating that the mother's age predicts a small part of the variance in birth weight.
Step29: We can confirm that $R^2 = \rho^2$
Step30: To express predictive power, I think it's useful to compare the standard deviation of the residuals to the standard deviation of the dependent variable, as a measure RMSE if you try to guess birth weight with and without taking into account mother's age.
Step31: As another example of the same idea, here's how much we can improve guesses about IQ if we know someone's SAT scores.
Step32: Hypothesis testing with slopes
Here's a HypothesisTest that uses permutation to test whether the observed slope is statistically significant.
Step33: And it is.
Step34: Under the null hypothesis, the largest slope we observe after 1000 tries is substantially less than the observed value.
Step35: We can also use resampling to estimate the sampling distribution of the slope.
Step36: The distribution of slopes under the null hypothesis, and the sampling distribution of the slope under resampling, have the same shape, but one has mean at 0 and the other has mean at the observed slope.
To compute a p-value, we can count how often the estimated slope under the null hypothesis exceeds the observed slope, or how often the estimated slope under resampling falls below 0.
Step37: Here's how to get a p-value from the sampling distribution.
Step38: Resampling with weights
Resampling provides a convenient way to take into account the sampling weights associated with respondents in a stratified survey design.
The following function resamples rows with probabilities proportional to weights.
Step39: We can use it to estimate the mean birthweight and compute SE and CI.
Step40: And here's what the same calculation looks like if we ignore the weights.
Step41: The difference is non-negligible, which suggests that there are differences in birth weight between the strata in the survey.
Exercises
Exercise
Step42: Estimate intercept and slope.
Step43: Make a scatter plot of the data and show the fitted line.
Step44: Make the same plot but apply the inverse transform to show weights on a linear (not log) scale.
Step45: Plot percentiles of the residuals.
Step46: Compute correlation.
Step47: Compute coefficient of determination.
Step48: Confirm that $R^2 = \rho^2$.
Step49: Compute Std(ys), which is the RMSE of predictions that don't use height.
Step50: Compute Std(res), the RMSE of predictions that do use height.
Step51: How much does height information reduce RMSE?
Step52: Use resampling to compute sampling distributions for inter and slope.
Step53: Plot the sampling distribution of slope.
Step54: Compute the p-value of the slope.
Step55: Compute the 90% confidence interval of slope.
Step56: Compute the mean of the sampling distribution.
Step57: Compute the standard deviation of the sampling distribution, which is the standard error.
Step58: Resample rows without weights, compute mean height, and summarize results.
Step59: Resample rows with weights. Note that the weight column in this dataset is called finalwt. | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import random
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
import first
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
ages = live.agepreg
weights = live.totalwgt_lb
Explanation: Least squares
One more time, let's load up the NSFG data.
End of explanation
from thinkstats2 import Mean, MeanVar, Var, Std, Cov
def LeastSquares(xs, ys):
meanx, varx = MeanVar(xs)
meany = Mean(ys)
slope = Cov(xs, ys, meanx, meany) / varx
inter = meany - slope * meanx
return inter, slope
Explanation: The following function computes the intercept and slope of the least squares fit.
End of explanation
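As a quick sanity check that is not in the book, we can compare against NumPy's own least-squares fit on synthetic data where the true intercept and slope are known:
import numpy as np
xs_check = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ys_check = 2.0 + 3.0 * xs_check
slope_check, inter_check = np.polyfit(xs_check, ys_check, 1)
print(inter_check, slope_check)   # expect approximately 2.0 and 3.0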
inter, slope = LeastSquares(ages, weights)
inter, slope
Explanation: Here's the least squares fit to birth weight as a function of mother's age.
End of explanation
inter + slope * 25
Explanation: The intercept is often easier to interpret if we evaluate it at the mean of the independent variable.
End of explanation
slope * 10
Explanation: And the slope is easier to interpret if we express it in pounds per decade (or ounces per year).
End of explanation
def FitLine(xs, inter, slope):
fit_xs = np.sort(xs)
fit_ys = inter + slope * fit_xs
return fit_xs, fit_ys
Explanation: The following function evaluates the fitted line at the given xs.
End of explanation
fit_xs, fit_ys = FitLine(ages, inter, slope)
Explanation: And here's an example.
End of explanation
thinkplot.Scatter(ages, weights, color='blue', alpha=0.1, s=10)
thinkplot.Plot(fit_xs, fit_ys, color='white', linewidth=3)
thinkplot.Plot(fit_xs, fit_ys, color='red', linewidth=2)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Birth weight (lbs)',
axis=[10, 45, 0, 15],
legend=False)
Explanation: Here's a scatterplot of the data with the fitted line.
End of explanation
def Residuals(xs, ys, inter, slope):
xs = np.asarray(xs)
ys = np.asarray(ys)
res = ys - (inter + slope * xs)
return res
Explanation: Residuals
The following function computes the residuals.
End of explanation
live['residual'] = Residuals(ages, weights, inter, slope)
Explanation: Now we can add the residuals as a column in the DataFrame.
End of explanation
bins = np.arange(10, 48, 3)
indices = np.digitize(live.agepreg, bins)
groups = live.groupby(indices)
age_means = [group.agepreg.mean() for _, group in groups][1:-1]
age_means
Explanation: To visualize the residuals, I'll split the respondents into groups by age, then plot the percentiles of the residuals versus the average age in each group.
First I'll make the groups and compute the average age in each group.
End of explanation
cdfs = [thinkstats2.Cdf(group.residual) for _, group in groups][1:-1]
Explanation: Next I'll compute the CDF of the residuals in each group.
End of explanation
def PlotPercentiles(age_means, cdfs):
thinkplot.PrePlot(3)
for percent in [75, 50, 25]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(age_means, weight_percentiles, label=label)
Explanation: The following function plots percentiles of the residuals against the average age in each group.
End of explanation
PlotPercentiles(age_means, cdfs)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
Explanation: The following figure shows the 25th, 50th, and 75th percentiles.
Curvature in the residuals suggests a non-linear relationship.
End of explanation
def SampleRows(df, nrows, replace=False):
    """Choose a sample of rows from a DataFrame.

    df: DataFrame
    nrows: number of rows
    replace: whether to sample with replacement

    returns: DataFrame
    """
indices = np.random.choice(df.index, nrows, replace=replace)
sample = df.loc[indices]
return sample
def ResampleRows(df):
    """Resamples rows from a DataFrame.

    df: DataFrame

    returns: DataFrame
    """
return SampleRows(df, len(df), replace=True)
Explanation: Sampling distribution
To estimate the sampling distribution of inter and slope, I'll use resampling.
End of explanation
def SamplingDistributions(live, iters=101):
t = []
for _ in range(iters):
sample = ResampleRows(live)
ages = sample.agepreg
weights = sample.totalwgt_lb
estimates = LeastSquares(ages, weights)
t.append(estimates)
inters, slopes = zip(*t)
return inters, slopes
Explanation: The following function resamples the given dataframe and returns lists of estimates for inter and slope.
End of explanation
inters, slopes = SamplingDistributions(live, iters=1001)
Explanation: Here's an example.
End of explanation
def Summarize(estimates, actual=None):
mean = Mean(estimates)
stderr = Std(estimates, mu=actual)
cdf = thinkstats2.Cdf(estimates)
ci = cdf.ConfidenceInterval(90)
print('mean, SE, CI', mean, stderr, ci)
Explanation: The following function takes a list of estimates and prints the mean, standard error, and 90% confidence interval.
End of explanation
Summarize(inters)
Explanation: Here's the summary for inter.
End of explanation
Summarize(slopes)
Explanation: And for slope.
End of explanation
# Solution goes here
Explanation: Exercise: Use ResampleRows and generate a list of estimates for the mean birth weight. Use Summarize to compute the SE and CI for these estimates.
End of explanation
for slope, inter in zip(slopes, inters):
fxs, fys = FitLine(age_means, inter, slope)
thinkplot.Plot(fxs, fys, color='gray', alpha=0.01)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
Explanation: Visualizing uncertainty
To show the uncertainty of the estimated slope and intercept, we can generate a fitted line for each resampled estimate and plot them on top of each other.
End of explanation
def PlotConfidenceIntervals(xs, inters, slopes, percent=90, **options):
fys_seq = []
for inter, slope in zip(inters, slopes):
fxs, fys = FitLine(xs, inter, slope)
fys_seq.append(fys)
p = (100 - percent) / 2
percents = p, 100 - p
low, high = thinkstats2.PercentileRows(fys_seq, percents)
thinkplot.FillBetween(fxs, low, high, **options)
Explanation: Or we can make a neater (and more efficient plot) by computing fitted lines and finding percentiles of the fits for each value of the dependent variable.
End of explanation
PlotConfidenceIntervals(age_means, inters, slopes, percent=90,
color='gray', alpha=0.3, label='90% CI')
PlotConfidenceIntervals(age_means, inters, slopes, percent=50,
color='gray', alpha=0.5, label='50% CI')
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
Explanation: This example shows the confidence interval for the fitted values at each mother's age.
End of explanation
def CoefDetermination(ys, res):
return 1 - Var(res) / Var(ys)
Explanation: Coefficient of determination
The coefficient compares the variance of the residuals to the variance of the dependent variable.
End of explanation
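To make the definition concrete (an added illustration, not from the book), here is the same computation on synthetic data using plain NumPy variances:
import numpy as np
ys_demo = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
res_demo = np.array([0.1, -0.1, 0.0, 0.1, -0.1])
print(1 - np.var(res_demo) / np.var(ys_demo))   # close to 1: the residuals are tiny relative to ys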
inter, slope = LeastSquares(ages, weights)
res = Residuals(ages, weights, inter, slope)
r2 = CoefDetermination(weights, res)
r2
Explanation: For birth weight and mother's age $R^2$ is very small, indicating that the mother's age predicts a small part of the variance in birth weight.
End of explanation
print('rho', thinkstats2.Corr(ages, weights))
print('R', np.sqrt(r2))
Explanation: We can confirm that $R^2 = \rho^2$:
End of explanation
print('Std(ys)', Std(weights))
print('Std(res)', Std(res))
Explanation: To express predictive power, I think it's useful to compare the standard deviation of the residuals to the standard deviation of the dependent variable, as a measure of the RMSE if you try to guess birth weight with and without taking into account mother's age.
End of explanation
var_ys = 15**2
rho = 0.72
r2 = rho**2
var_res = (1 - r2) * var_ys
std_res = np.sqrt(var_res)
std_res
Explanation: As another example of the same idea, here's how much we can improve guesses about IQ if we know someone's SAT scores.
End of explanation
class SlopeTest(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
ages, weights = data
_, slope = thinkstats2.LeastSquares(ages, weights)
return slope
def MakeModel(self):
_, weights = self.data
self.ybar = weights.mean()
self.res = weights - self.ybar
def RunModel(self):
ages, _ = self.data
weights = self.ybar + np.random.permutation(self.res)
return ages, weights
Explanation: Hypothesis testing with slopes
Here's a HypothesisTest that uses permutation to test whether the observed slope is statistically significant.
End of explanation
ht = SlopeTest((ages, weights))
pvalue = ht.PValue()
pvalue
Explanation: And it is.
End of explanation
ht.actual, ht.MaxTestStat()
Explanation: Under the null hypothesis, the largest slope we observe after 1000 tries is substantially less than the observed value.
End of explanation
sampling_cdf = thinkstats2.Cdf(slopes)
Explanation: We can also use resampling to estimate the sampling distribution of the slope.
End of explanation
thinkplot.PrePlot(2)
thinkplot.Plot([0, 0], [0, 1], color='0.8')
ht.PlotCdf(label='null hypothesis')
thinkplot.Cdf(sampling_cdf, label='sampling distribution')
thinkplot.Config(xlabel='slope (lbs / year)',
ylabel='CDF',
xlim=[-0.03, 0.03],
legend=True, loc='upper left')
Explanation: The distribution of slopes under the null hypothesis, and the sampling distribution of the slope under resampling, have the same shape, but one has mean at 0 and the other has mean at the observed slope.
To compute a p-value, we can count how often the estimated slope under the null hypothesis exceeds the observed slope, or how often the estimated slope under resampling falls below 0.
End of explanation
pvalue = sampling_cdf[0]
pvalue
Explanation: Here's how to get a p-value from the sampling distribution.
End of explanation
def ResampleRowsWeighted(df, column='finalwgt'):
weights = df[column]
cdf = thinkstats2.Cdf(dict(weights))
indices = cdf.Sample(len(weights))
sample = df.loc[indices]
return sample
Explanation: Resampling with weights
Resampling provides a convenient way to take into account the sampling weights associated with respondents in a stratified survey design.
The following function resamples rows with probabilities proportional to weights.
End of explanation
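The same idea can be sketched directly with NumPy (an alternative, not the book's implementation, relying on the numpy import at the top of the notebook): normalize the weights to probabilities and draw row positions with replacement.
def ResampleRowsWeightedNumpy(df, column='finalwgt'):
    # probabilities proportional to the sampling weights
    p = df[column] / df[column].sum()
    positions = np.random.choice(len(df), size=len(df), replace=True, p=p)
    return df.iloc[positions]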
iters = 100
estimates = [ResampleRowsWeighted(live).totalwgt_lb.mean()
for _ in range(iters)]
Summarize(estimates)
Explanation: We can use it to estimate the mean birthweight and compute SE and CI.
End of explanation
estimates = [thinkstats2.ResampleRows(live).totalwgt_lb.mean()
for _ in range(iters)]
Summarize(estimates)
Explanation: And here's what the same calculation looks like if we ignore the weights.
End of explanation
import brfss
df = brfss.ReadBrfss(nrows=None)
df = df.dropna(subset=['htm3', 'wtkg2'])
heights, weights = df.htm3, df.wtkg2
log_weights = np.log10(weights)
Explanation: The difference is non-negligible, which suggests that there are differences in birth weight between the strata in the survey.
Exercises
Exercise: Using the data from the BRFSS, compute the linear least squares fit for log(weight) versus height. How would you best present the estimated parameters for a model like this where one of the variables is log-transformed? If you were trying to guess someone’s weight, how much would it help to know their height?
Like the NSFG, the BRFSS oversamples some groups and provides a sampling weight for each respondent. In the BRFSS data, the variable name for these weights is totalwt. Use resampling, with and without weights, to estimate the mean height of respondents in the BRFSS, the standard error of the mean, and a 90% confidence interval. How much does correct weighting affect the estimates?
Read the BRFSS data and extract heights and log weights.
End of explanation
# Solution goes here
Explanation: Estimate intercept and slope.
End of explanation
# Solution goes here
Explanation: Make a scatter plot of the data and show the fitted line.
End of explanation
# Solution goes here
Explanation: Make the same plot but apply the inverse transform to show weights on a linear (not log) scale.
End of explanation
# Solution goes here
Explanation: Plot percentiles of the residuals.
End of explanation
# Solution goes here
Explanation: Compute correlation.
End of explanation
# Solution goes here
Explanation: Compute coefficient of determination.
End of explanation
# Solution goes here
Explanation: Confirm that $R^2 = \rho^2$.
End of explanation
# Solution goes here
Explanation: Compute Std(ys), which is the RMSE of predictions that don't use height.
End of explanation
# Solution goes here
Explanation: Compute Std(res), the RMSE of predictions that do use height.
End of explanation
# Solution goes here
Explanation: How much does height information reduce RMSE?
End of explanation
# Solution goes here
Explanation: Use resampling to compute sampling distributions for inter and slope.
End of explanation
# Solution goes here
Explanation: Plot the sampling distribution of slope.
End of explanation
# Solution goes here
Explanation: Compute the p-value of the slope.
End of explanation
# Solution goes here
Explanation: Compute the 90% confidence interval of slope.
End of explanation
# Solution goes here
Explanation: Compute the mean of the sampling distribution.
End of explanation
# Solution goes here
Explanation: Compute the standard deviation of the sampling distribution, which is the standard error.
End of explanation
# Solution goes here
Explanation: Resample rows without weights, compute mean height, and summarize results.
End of explanation
# Solution goes here
Explanation: Resample rows with weights. Note that the weight column in this dataset is called finalwt.
End of explanation |
6,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Download the HTML and create a Beautiful Soup object
Step2: If we looked at the soup object, we'd see that the names we want are in a hierarchical list. In pseudo-code, it looks like
Step3: Drilling down with a for loop
Step4: Results
Step5: Quick analysis | Python Code:
# Import required modules
import requests
from bs4 import BeautifulSoup
import pandas as pd
Explanation: Title: Drilling Down With Beautiful Soup
Slug: beautiful_soup_drill_down
Summary: Drilling Down With Beautiful Soup
Date: 2016-05-01 12:00
Category: Python
Tags: Web Scraping
Authors: Chris Albon
Preliminaries
End of explanation
# Create a variable with the URL to this tutorial
url = 'http://en.wikipedia.org/wiki/List_of_A_Song_of_Ice_and_Fire_characters'
# Scrape the HTML at the url
r = requests.get(url)
# Turn the HTML into a Beautiful Soup object
soup = BeautifulSoup(r.text, "lxml")
Explanation: Download the HTML and create a Beautiful Soup object
End of explanation
# Create a variable to score the scraped data in
character_name = []
Explanation: If we looked at the soup object, we'd see that the names we want are in a hierarchical list. In pseudo-code, it looks like:
class=toclevel-1 span=toctext
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
class=toclevel-2 span=toctext CHARACTER NAMES
To get the CHARACTER NAMES, we are going to need to drill down into toclevel-2 and grab the toctext
Setting up where to put the results
End of explanation
# for each item in all the toclevel-2 li items
# (except the last three because they are not character names),
for item in soup.find_all('li',{'class':'toclevel-2'})[:-3]:
# find each span with class=toctext,
for post in item.find_all('span',{'class':'toctext'}):
# add the stripped string of each to character_name, one by one
character_name.append(post.string.strip())
Explanation: Drilling down with a for loop
End of explanation
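The same drill-down can be written as a single CSS selector (an alternative sketch, not part of the original post), assuming one toctext span per toclevel-2 item:
# Select every toctext span inside a toclevel-2 list item,
# strip each string, and again drop the last three non-character entries
css_names = [span.get_text().strip()
             for span in soup.select('li.toclevel-2 span.toctext')[:-3]]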
# View all the character names
character_name
Explanation: Results
End of explanation
# Create a list object where to store the for loop results
houses = []
# For each element in the character_name list,
for name in character_name:
# split up the names by a blank space and select the last element
# this works because it is the last name if they are a house,
# but the first name if they only have one name,
# Then append each last name to the houses list
houses.append(name.split(' ')[-1])
# Convert houses into a pandas series (so we can use value_counts())
houses = pd.Series(houses)
# Count the number of times each name/house name appears
houses.value_counts()
Explanation: Quick analysis: Which house has the most main characters?
End of explanation |
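The same tally can also be done with the standard library alone (an added sketch): collections.Counter counts the trailing names without pandas.
from collections import Counter
house_counts = Counter(name.split(' ')[-1] for name in character_name)
print(house_counts.most_common(5))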
6,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the Labyrinth
Chapter 2 of Real World Algorithms.
Panos Louridas<br />
Athens University of Economics and Business
Graphs in Python
The most common way to represent graphs in Python is with adjacency lists.
The adjacency lists are put into a Python dictionary.
The keys of the dictionary are the nodes, and the value for each node is its adjacency list.
With a slight abuse of terminology, we could use other data structures instead of a list to represent an adjacency list
Step1: Similarly, here is the same graph, but this time the nodes are strings (single-character strings, which are still strings in Python).
Nodes can be anything
Step2: Depth-first Search
Suppose we have the following graph and we want to explore it depth-first.
In depth-first search, we follow a path as far as we can; when we reach a dead-end, that is, a node with no unvisited neighbours, we backtrack to the previous unvisited node.
<img width="300" src="example_graph_1.png"/>
The graph is represented in Python as follows
Step3: The depth-first recursive search algorithm is then simply
Step4: It is possible to implement depth-first search without recursion.
To do that, we have to emulate recursion ourselves, by using a stack.
Step5: The stack-based depth-first search may insert a node in the stack multiple times.
For example, consider the following graph
Step6: Then we can traverse it with the stack-based version of depth-first search
Step7: You may notice that node 1 enters the stack twice.
That does not affect the correctness of the algorithm, as the algorithm will explore the whole graph, but we can fix it anyway.
One way to fix it would be to search in the stack and if the node is already there, we would not put it.
However, searching in a list takes place in linear time, depending on the length of the list.
It is faster to keep a separate structure in which we record if something is in the stack.
That requires more space
Step8: Breadth-first Search
In breadth-first search we visit all neighbours of a node, then all the neighbours of the neighbours, and so on.
The exploration is like a ripple spreading outwards.
We can implement breadth-first search using a First-In First-Out (FIFO) queue; in Python this is provided by collections.deque.
Step9: Reading a Graph from a File
Usually we read graphs from files, typically text files.
A common way to store graphs is in text files where each line contains a link between two nodes.
For example, the file containing the first graph we saw would be
Step10: Printing a graph like that is not very convenient.
Python offers the pprint (pretty-print) library that can help us output stuff in a more meaningful manner.
Step11: For undirected graphs, the code is pretty much the same; we only need to take care to enter the edge $(v, u)$ for every edge $(u, v)$ that we meet in the file.
Here is the equivalent for the file example_graph_2.txt, which is the undirected graph we used for depth-first search. | Python Code:
g = {
0: [1, 2, 3],
1: [0, 4],
2: [0],
3: [0, 5],
4: [1, 5],
5: [3, 4, 6, 7],
6: [5],
7: [5],
}
# print whole graph
print(g)
# print adjacency list of node 0
print(g[0])
# print adjacency list of node 5
print(g[5])
Explanation: Exploring the Labyrinth
Chapter 2 of Real World Algorithms.
Panos Louridas<br />
Athens University of Economics and Business
Graphs in Python
The most common way to represent graphs in Python is with adjacency lists.
The adjacency lists are put into a Python dictionary.
The keys of the dictionary are the nodes, and the value for each node is its adjacency list.
With a slight abuse of terminology, we could use other data structures instead of a list to represent an adjacency list: for example, a set is a sensible choice, as we don't care about the order of the items in the list, and checking for membership (i.e., checking if a node is a neighbor of another node) is much faster in a set than in a list. In a well-implemented set it takes constant time, while in a list the time is linear and depends on the length of the list.
For example, here is a graph with 8 nodes (from 0 to 7) and its adjacency lists, represented as lists:
End of explanation
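As noted above, sets are a reasonable alternative here; a sketch of the same graph with adjacency sets (an addition to the original) looks like this:
g_sets = {
    0: {1, 2, 3},
    1: {0, 4},
    2: {0},
    3: {0, 5},
    4: {1, 5},
    5: {3, 4, 6, 7},
    6: {5},
    7: {5},
}
# membership tests are now constant time on average
print(4 in g_sets[1])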
g = {
'a': ['b', 'c', 'd'],
'b': ['a', 'e'],
'c': ['a'],
'd': ['a', 'f'],
'e': ['b', 'f'],
'f': ['d', 'e', 'g', 'h'],
'g': ['f'],
'h': ['f'],
}
# print whole graph
print(g)
# print adjacency list of node 'a'
print(g['a'])
# print adjacency list of node 'e'
print(g['e'])
Explanation: Similarly, here is the same graph, but this time the nodes are strings (single-character strings, which are still strings in Python).
Nodes can be anything: numbers, strings, or anything else that can be used as a key in a Python dictionary, i.e., everything that is "hashable".
End of explanation
g = {
0: [1, 2, 3],
1: [0, 4],
2: [0, 4],
3: [0, 5],
4: [5],
5: [4, 6, 7],
6: [],
7: []
}
Explanation: Depth-first Search
Suppose we have the following graph and we want to explore it depth-first.
In depth-first search, we follow a path as far as we can; when we reach a dead-end, that is, a node with no unvisited neighbours, we backtrack to the previous unvisited node.
<img width="300" src="example_graph_1.png"/>
The graph is represented in Python as follows:
End of explanation
from typing import Hashable # for use with the type annotation below
visited = [ False ] * len(g)
def dfs(g: dict, node: Hashable) -> None:
print("Visiting", node)
visited[node] = True
for v in g[node]:
if not visited[v]:
dfs(g, v)
dfs(g, 0)
Explanation: The depth-first recursive search algorithm is then simply:
End of explanation
def dfs_stack(g: dict, node: Hashable) -> list[bool]:
s = []
visited = [ False ] * len(g)
s.append(node)
while len(s) != 0:
print("Stack", s)
c = s.pop()
print("Visiting", c)
visited[c] = True
for v in g[c]:
if not visited[v]:
s.append(v)
return visited
dfs_stack(g, 0)
Explanation: It is possible to implement depth-first search without recursion.
To do that, we have to emulate recursion ourselves, by using a stack.
End of explanation
g2 = {
0: [1, 2, 3],
1: [0, 4],
2: [0],
3: [0, 5],
4: [1, 5],
5: [3, 4, 6, 7],
6: [5],
7: [5]
}
Explanation: The stack-based depth-first search may insert a node in the stack multiple times.
For example, consider the following graph:
<img width="250" src="example_graph_2.png"/>
The graph is represented as follows:
End of explanation
dfs_stack(g2, 0)
Explanation: Then we can traverse it with the stack-based version of depth-first search:
End of explanation
def dfs_nd_stack(g: dict, node: Hashable) -> list[bool]:
s = []
visited = [ False ] * len(g)
instack = [ False ] * len(g)
s.append(node)
instack[node] = True
while len(s) != 0:
print("Stack", s)
c = s.pop()
instack[c] = False
print("Visiting", c)
visited[c] = True
for v in g[c]:
if not visited[v] and not instack[v]:
s.append(v)
instack[v] = True
return visited
dfs_nd_stack(g2, 0)
Explanation: You may notice that node 1 enters the stack twice.
That does not affect the correctness of the algorithm, as the algorithm will explore the whole graph, but we can fix it anyway.
One way to fix it would be to search in the stack and if the node is already there, we would not put it.
However, searching in a list takes place in linear time, depending on the length of the list.
It is faster to keep a separate structure in which we record if something is in the stack.
That requires more space: an instance of speed-space trade-off.
End of explanation
from collections import deque
g = {
0: [1, 2, 3],
1: [0, 4],
2: [0, 4],
3: [0, 5],
4: [5],
5: [4, 6, 7],
6: [],
7: []
}
def bfs(g: dict, node: Hashable) -> list[bool]:
q = deque()
visited = [ False ] * len(g)
inqueue = [ False ] * len(g)
q.appendleft(node)
inqueue[node] = True
while not (len(q) == 0):
print("Queue", q)
c = q.pop()
print("Visiting", c)
inqueue[c] = False
visited[c] = True
for v in g[c]:
if not visited[v] and not inqueue[v]:
q.appendleft(v)
inqueue[v] = True
return visited
bfs(g, 0)
Explanation: Breadth-first Search
In breadth-first search we visit all neighbours of a node, then all the neighbours of the neighbours, and so on.
The exploration is like a ripple spreading outwards.
We can implement breadth-first search using a First-In First-Out (FIFO) queue; in Python this is provided by collections.deque.
End of explanation
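A common extension, added here as a sketch that reuses the graph and imports defined above, is to record how many hops it takes to reach each node; the same FIFO traversal can carry the distances along:
def bfs_distances(g: dict, node: Hashable) -> list:
    q = deque()
    dist = [ -1 ] * len(g)   # -1 marks nodes not reached yet
    dist[node] = 0
    q.appendleft(node)
    while len(q) != 0:
        c = q.pop()
        for v in g[c]:
            if dist[v] == -1:
                dist[v] = dist[c] + 1
                q.appendleft(v)
    return dist

print(bfs_distances(g, 0))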
input_filename = "example_graph_1.txt"
g = {}
with open(input_filename) as graph_input:
for line in graph_input:
# Split line and convert line parts to integers.
nodes = [int(x) for x in line.split()]
if len(nodes) != 2:
continue
# If a node is not already in the graph
# we must create a new empty list.
if nodes[0] not in g:
g[nodes[0]] = []
if nodes[1] not in g:
g[nodes[1]] = []
# We need to append the "to" node
# to the existing list for the "from" node.
g[nodes[0]].append(nodes[1])
print(g)
Explanation: Reading a Graph from a File
Usually we read graphs from files, typically text files.
A common way to store graphs is in text files where each line contains a link between two nodes.
For example, the file containing the first graph we saw would be:
0 1
0 2
0 3
1 0
1 4
2 0
2 4
3 0
3 5
4 5
5 4
5 6
5 7
To read this file we would go line-by-line.
We would split each line on whitespace.
We would then get the two parts and treat them as nodes.
Note that we assume that the nodes are integers, so we convert the split pieces with int(x). If nodes were strings, this would not be required.
We also assume that the graph is directed.
The following example will read file example_graph_1.txt, the directed graph we used for the depth-first example, which has the following contents:
0 1
0 2
0 3
1 0
1 4
2 0
2 4
3 0
3 5
4 5
5 4
5 6
5 7
End of explanation
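A slightly more compact reader (an alternative sketch, not from the chapter) uses collections.defaultdict so the empty-list checks disappear:
from collections import defaultdict

g_dd = defaultdict(list)
with open(input_filename) as graph_input:
    for line in graph_input:
        nodes = [int(x) for x in line.split()]
        if len(nodes) != 2:
            continue
        g_dd[nodes[0]].append(nodes[1])
        g_dd[nodes[1]]   # touch the "to" node so endpoints with no outgoing edges still appear
print(dict(g_dd))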
import pprint
pprint.pprint(g)
Explanation: Printing a graph like that is not very convenient.
Python offers the pprint (pretty-print) library that can help us output stuff in a more meaningful manner.
End of explanation
input_filename = "example_graph_2.txt"
g = {}
with open(input_filename) as graph_input:
for line in graph_input:
# Split line and convert line parts to integers.
nodes = [int(x) for x in line.split()]
if len(nodes) != 2:
continue
# If a node is not already in the graph
# we must create a new empty list.
if nodes[0] not in g:
g[nodes[0]] = []
if nodes[1] not in g:
g[nodes[1]] = []
# We need to append the "to" node
# to the existing list for the "from" node.
g[nodes[0]].append(nodes[1])
# And also the other way round.
g[nodes[1]].append(nodes[0])
pprint.pprint(g)
Explanation: For undirected graphs, the code is pretty much the same; we only need to take care to enter the edge $(v, u)$ for every edge $(u, v)$ that we meet in the file.
Here is the equivalent for the file example_graph_2.txt, which is the undirected graph we used for depth-first search.
End of explanation |
6,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detached Binary
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets
Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials
Step3: Running Compute
Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
Step4: Now we'll compute synthetics at the times provided using the default options
Step5: Plotting | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
Explanation: Detached Binary: Roche vs Rotstar
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('mesh', times=[0.75], dataset='mesh01')
Explanation: Adding Datasets
Now we'll create an empty mesh dataset at quarter-phase so we can compare the difference between using roche and rotstar for deformation potentials:
End of explanation
b['rpole@primary@component'] = 1.8
Explanation: Running Compute
Let's set the radius of the primary component to be large enough to start to show some distortion when using the roche potentials.
End of explanation
b.run_compute(irrad_method='none', distortion_method='roche', model='rochemodel')
b.run_compute(irrad_method='none', distortion_method='rotstar', model='rotstarmodel')
Explanation: Now we'll compute synthetics at the times provided using the default options
End of explanation
axs, artists = b['rochemodel'].plot()
axs, artists = b['rotstarmodel'].plot()
Explanation: Plotting
End of explanation |
6,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenAI Gym 入門
OpenAI Gym 公式ホームページ
OpenAI Gym 公式ドキュメント ← このノートブックはこの公式ドキュメントの日本語訳+α
OpenAI Gym GitHub
OpenAI Gym で提供されている全環境
OpenAI Gym 公開についての公式ブログ 2016/04/27
OpenAI Universe
OpenAI Gymとは?
OpenAI Gymは強化学習アルゴリズムを作ったり、比較したりするためのツール
エージェントはどんな構造でも良い
TensorFlowやTheanoで書いてもオッケー
今はPythonのみだが、将来的には他の言語もサポートする予定
OpenAI Gymは2つの部分から成る
gym というオープンソースライブラリ。強化学習における「環境」を提供する
OpenAI Gym とういウェブサービス。APIを使い、自分の作ったエージェントの良さを他のユーザーと比較できる
環境とのインタラクション
Step 1
Step1: Step 2
Step2: Step 3
Step3: 観測
ランダムな行動より上手くやりたければ、それぞれの行動が環境にどんな影響を与えるのか知りたいところ。
行動が環境にどんな影響を与えたのかは、環境のstep関数が教えてくれる。
step関数は4つの値を返す
observation (object) 観測
reward (float) 報酬
done (boolean) エピソードの終了フラグ(Trueになった時がreset()を呼ぶタイミング)
info (dict) サブ情報
done==Trueとなったときにエピソードを終了するのであれば、以下のようなコードになる。
Step4: 空間
可能な行動や観測は Spaceオブジェクトで記述されている。
Step5: Discrete
Step6: Box
Step7: スペースからサンプリングすることも、ある値がスペースに含まれているか調べることもできる。
Step8: 環境
gymの主な役割は、強化学習で使える多様な環境を提供すること。
OpenAI Gymが提供する環境 https
Step9: 自分で環境を作ることも可能
作った環境はロードするときにこのようにregistryに登録したら良い。
シミュレーション結果の記録とアップロード
ある環境でのパフォーマンスを計測し、同時にビデオに記録するには、環境(env)をMonitorでラッピングすれば良い。 | Python Code:
# gym オープンソースライブラリの読み込み
import gym
# 環境を作る
env = gym.make('CartPole-v0') # 'CartPole-v0' は環境ID
#env = gym.make('MountainCar-v0') # 'MountainCar-v0'という別の環境
#env = gym.make('MsPacman-v0') # 'MsPacman-v0'という別の環境
env.seed(42)
# 環境の初期化(最初の観測が得られる)
env.reset()
# 描画
env.render()
# 行動選択(手動)
action = 0 # 0: Left, 1: Right
# 環境に対して選択された行動を実行
# printで囲む
env.step(action)
# 描画
env.render()
# 画面を閉じる
env.render(close=True)
Explanation: OpenAI Gym 入門
OpenAI Gym 公式ホームページ
OpenAI Gym 公式ドキュメント ← このノートブックはこの公式ドキュメントの日本語訳+α
OpenAI Gym GitHub
OpenAI Gym で提供されている全環境
OpenAI Gym 公開についての公式ブログ 2016/04/27
OpenAI Universe
OpenAI Gymとは?
OpenAI Gymは強化学習アルゴリズムを作ったり、比較したりするためのツール
エージェントはどんな構造でも良い
TensorFlowやTheanoで書いてもオッケー
今はPythonのみだが、将来的には他の言語もサポートする予定
OpenAI Gymは2つの部分から成る
gym というオープンソースライブラリ。強化学習における「環境」を提供する
OpenAI Gym とういウェブサービス。APIを使い、自分の作ったエージェントの良さを他のユーザーと比較できる
環境とのインタラクション
Step 1: 手動で行動選択(version 1)
'CartPole-v0' という環境に対して、手動で選んだ行動を入れてみる。
End of explanation
import time
# gym オープンソースライブラリの読み込み
import gym
# 環境を作る
env = gym.make('CartPole-v0') # 'CartPole-v0' は環境ID
#env = gym.make('MountainCar-v0') # 'MountainCar-v0'という別の環境
#env = gym.make('MsPacman-v0') # 'MsPacman-v0'という別の環境
# ランダムな行動選択
env.action_space.sample()
# 行動空間(エージェントが選択可能な行動が定義されている空間)
env.action_space
# 環境の初期化(最初の観測が得られる)
env.reset()
for _ in range(100):
time.sleep(0.1) # 描画を遅くするために0.1秒スリープ
env.render() # 描画
action = env.action_space.sample() # ランダムな行動選択
print(action), # 選択された行動をプリント
print(env.step(action)) # 選択行動を実行
env.render(close=True) # 画面を閉じる
Explanation: Step 2: ランダムな行動選択(ランダム方策)
'CartPole-v0' という環境に対して、ランダムな行動を取ってみる(1000ステップ)。
End of explanation
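A small extension (an added sketch using only the gym API shown above): accumulate the reward returned by step() so each random episode reports its total score.
env = gym.make('CartPole-v0')
for episode in range(3):
    env.reset()
    total_reward = 0.0
    done = False
    while not done:
        observation, reward, done, info = env.step(env.action_space.sample())
        total_reward += reward
    print("episode", episode, "total reward", total_reward)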
#!python keyboard_agent.py CartPole-v1
#!python keyboard_agent.py LunarLander-v2
#!python keyboard_agent.py MountainCar-v0
#!python keyboard_agent.py SpaceInvaders-v0
#!python keyboard_agent.py Breakout-v0
#!python keyboard_agent.py Acrobot-v1
Explanation: Step 3: 手動で行動選択(version 2)
https://github.com/openai/gym/blob/master/examples/agents/keyboard_agent.py
注: line36: env.render()の前にenv.reset()を入れると、ほとんどの環境で使用可能。
End of explanation
import numpy as np
np.set_printoptions(suppress=True) # Scientific Notation (例 1.0e-0.5)を使わない
all_obs = []
import gym
env = gym.make('CartPole-v0')
for i_episode in range(5): # 5エピソード回す
observation = env.reset() # 環境を初期化し、最初の観測を得る。
all_obs.append(observation) # 観測を記録
for t in range(100): # 各エピソードの最大ステップ数は100
env.render()
print(observation)
action = env.action_space.sample() # ランダム方策
observation, reward, done, info = env.step(action) # 選択行動の実行
all_obs.append(observation) # 観測を記録
if done:
print("Episode finished after {} timesteps\n".format(t+1))
break
env.render(close=True)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(np.array(all_obs))
ax.legend(['x', 'x_dot', 'theta', 'theta_dot'])
Explanation: 観測
ランダムな行動より上手くやりたければ、それぞれの行動が環境にどんな影響を与えるのか知りたいところ。
行動が環境にどんな影響を与えたのかは、環境のstep関数が教えてくれる。
step関数は4つの値を返す
observation (object) 観測
reward (float) 報酬
done (boolean) エピソードの終了フラグ(Trueになった時がreset()を呼ぶタイミング)
info (dict) サブ情報
done==Trueとなったときにエピソードを終了するのであれば、以下のようなコードになる。
End of explanation
import gym
env = gym.make('CartPole-v0') # 'CartPole-v0' は環境ID
#env = gym.make('MountainCar-v0') # 'MountainCar-v0'という別の環境
#env = gym.make('MsPacman-v0') # 'MsPacman-v0'という別の環境
print(env.action_space)
print(env.observation_space)
Explanation: 空間
可能な行動や観測は Spaceオブジェクトで記述されている。
End of explanation
env.action_space.n
Explanation: Discrete: 非負の整数 {0, 1, 2, ..., n-1}
End of explanation
env.observation_space.high
env.observation_space.low
Explanation: Box: n次元のbox (次元ごとに上限と下限を持つn次元配列)
End of explanation
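For comparison with the Discrete example below, a Box space can also be constructed and sampled directly (an added sketch; the shape chosen here is arbitrary):
from gym import spaces
box = spaces.Box(low=-1.0, high=1.0, shape=(3,))
x = box.sample()
print(x)
print(box.contains(x))   # True: the sample lies inside the box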
from gym import spaces
space = spaces.Discrete(8) # {0, 1, 2, ..., 7}
# サンプリング
x = space.sample()
x
assert space.contains(x)
assert space.n == 8
Explanation: スペースからサンプリングすることも、ある値がスペースに含まれているか調べることもできる。
End of explanation
from gym import envs
# 使用可能な環境を列挙
envs.registry.all()
Explanation: 環境
gymの主な役割は、強化学習で使える多様な環境を提供すること。
OpenAI Gymが提供する環境 https://gym.openai.com/envs
End of explanation
import gym
from gym import wrappers # ラッパの呼び出し
env = gym.make('CartPole-v0')
env = wrappers.Monitor(env, './cartpole-v0-experiment-1', force=True) # envをMonitorでラッピング。force=Trueで前の結果を削除。
for i_episode in range(10):
observation = env.reset()
for t in range(100):
env.render()
print(observation)
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.render(close=True)
#!open .
# 結果をOpenAI Gym側のサーバーにアップロードする方法。
import gym
#gym.upload('/tmp/cartpole-v0-experiment-1', api_key='YOUR_API_KEY')
Explanation: 自分で環境を作ることも可能
作った環境はロードするときにこのようにregistryに登録したら良い。
シミュレーション結果の記録とアップロード
ある環境でのパフォーマンスを計測し、同時にビデオに記録するには、環境(env)をMonitorでラッピングすれば良い。
End of explanation |
6,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fonksiyonlar
Şu ana kadar zengin Python kütüphaneleri sayesinde pek çok fonksiyonu kolayca kullandık. Öte yandan bazı durumlarda kendi fonksiyonlarımızı yazmak isteyebiliriz. Mesela Python'da kullanılan standart dize fonksiyonları Türkçe harfler ile başa çıkamıyorlar. Diyelim ki "ARMUT" dizesindeki büyük harfleri küçükleri ile değiştirmek istiyoruz. Bunun için dizeler için tanımlanmış olan <font color='green'>lower</font> fonksiyonunu kullanacağız.
Step1: Buraya kadar bir terslik yok. Fakat aynı fonksiyon dizede Türkçe harfler varsa doğru çalışmıyor.
Step2: Bu sorunu herhangi bir dize içindeki harf gruplarını değiştirmek için kullanılan <font color='green'>replace</font> fonksiyonu ile çözebiliriz.
Step3: Peki ya kelimemiz "KEŞKÜL" olsa?
Step4: Belli ki "Ş" ve "Ü" harfleri için de harf değiştirme işlemlerini yapmamız gerekecek.
Step5: Bunun sonu yok. İyisi mi kelimeyi verdiğimizde Türkçe büyük harfleri küçükleri ile değiştiren bir fonksiyon yazmalı.
Python'da bir fonksiyonu <font color='green'>def</font> anahtar kelimesi ile tanımlıyoruz.
Step6: Fonksiyonumuzun ismi <font color='blue'>lowertr</font> ve kelime isimli parametreyi alıyor. Hemen deneyelim.
Step7: İşte bu kadar. Şimdi küçük harflerle yazılmış olan kelimeyi de başka bir değişkene kaydetmeyi deneyelim.
Step8: İkinci satırdaki "None" kelimesi dizge_kucuk değişkeninin içinin boş olduğunu gösteriyor. Demek ki fonksiyonumuz küçük harflere çevirdiği dizgeyi geri döndürmüyor. Ufak bir değişiklik ile hemen hallederiz. Tek yapmamız gereken <font color='green'>return</font> anahtar kelimesi ile çağrılan fonksiyonun ne çevireceğini belirtmek.
Step9: Artık dizgekucuk değişkenine doğru şekilde atama yapabiliriz.
Step10: İşi biraz daha ilerletelim. İki seçeneğimiz olsun. İlk seçenekte daha önceki gibi dizeyi küçük harflere, ikinci seçenekte ise büyük harflere dönüştürelim. Seçeneği belirlemek için ikinci bir parametre tanımlayabiliriz. Bu parametre 1 değerini alırsa küçük harf seçeneğini, 0 değerini alırsa da büyük harf seçeneğini uygulayacağız.
Step11: Dikkat ederseniz daha önce tanımladığımız <font color='blue'>lowertr</font> fonksiyonunu yeni fonksiyonumuzun ilk seçeneğinde kullandık.
Step12: Her seferinde 1 ya da 0 diye seçeneği girmek insanın canını sıkabilir. Özellikle de çoğu zaman küçük harflere dönüştürmeyi kullanacaksanız. Python'da bunun da çaresi var. Tek yapmamız gereken bazı parametrelerin varsayılan değerlerini girmek.
Step13: İkinci parametre, yani secenek, artık <font color='red'>varsayılan parametre</font>. Fonksiyon çağrılırken varsayılan parametrenin değeri verilmezse, fonksiyon içerisinde secenek parametresinin değeri 1 olarak alınıyor.
Step14: Burada bir noktaya dikkat etmeli. Varsayılan parametreler her zaman diğer parametrelerden sonra yazılmak zorundalar.
Parametrelerin varsayılan değerlerini girmenin getirdiği bir kolaylık daha var. Bu tür parametrelerden birkaç tane varsa, isimleri verilerek herhangi bir sırayla girilebiliyorlar. Bunun en güzel örneği grafik çizmede kullanılan <font color='green'>plot</font> fonksiyonu. Aşağıdaki örnekteki linewidth, color ve linestyle parametreleri x ve y parametrelerinden sonra istenilen sırada girilebilir.
Step15: Basit birkaç sayısal örnek yapalım.
Örneğin, bir dikdörtgenin iki kenarını parametre olarak alıp, alanını veren bir fonksiyon yazalım. İkinci kenar uzunluğunun varsayılan değeri 1 olsun.
Step16: Son örnekte, parametrelerin isimlerini kullandığımız için, fonksiyonda tanımlanmış sıralarını kullanmak zorunda kalmadık (yani önce en, sonra boy vermek zorunda değiliz).
Daha doğal bir varsayım olarak diyelim ki, sadece en verildiğinde boy'un da aynı değere sahip olmasını istiyoruz. Yani tek bir kenar ölçüsü verildiğinde dikdörtgenimizin kare olduğunu varsayıyoruz. Bunu yapmanın doğrudan bir yolu yok, ama şöyle bir numara yapabiliriz
Step17: 1'den başlayarak, verilen bir sayıya kadar tamsayıları toplayan bir fonksiyon
Step18: Bunu genelleştirelim; ilk'den N'ye kadar adim adımlarla ilerleyerek elde edilen aritmetik dizinin toplamını veren bir fonksiyon yazalım. Varsayılan değerler ilk için 0 ve adim için 1 olsun.
Step19: Madem genelliyoruz, aritmetik dizilerin ötesine geçelim. Sözgelişi, genel bir f fonksiyonu için $\sum_{i=0}^{N} f(i)$ toplamını hesaplayacak bir fonksiyon yazalım.
Burada, toplam fonksiyonuna başka bir fonksiyonu parametre olarak vermeliyiz. Python'da her şey gibi fonksiyonlar da bir nesne olduğu için, bunu basit şekilde halledebiliriz.
Step20: Tabii bu toplamı yapabilmek için sayısal değer veren bir fonksiyon ismini kullanmalıyız. Sözgelişi, math modülünden karekök alma fonksiyonunu alalım.
Step21: Veya, kendimiz bir fonksiyon tanımlayalım.
Step22: Bu ve benzeri şekilde, bir fonksiyon nesnesi beklenen durumlarda başka bir alternatifimiz isimsiz fonksiyonlardır. Yukarıdaki örneği isimsiz fonksiyonlarla şöyle yazarız.
Step23: Buradaki lambda, isimsiz fonksiyon yaratma komutudur. Ardından parametre listesi yazılır, iki nokta üstüste konur, ve geri verilecek değer yazılır. İstenirse, aşağıdaki gibi, isimlere atanıp tekrar kullanılabilirler de. | Python Code:
meyva = "ARMUT"
print meyva.lower()
Explanation: Fonksiyonlar
Şu ana kadar zengin Python kütüphaneleri sayesinde pek çok fonksiyonu kolayca kullandık. Öte yandan bazı durumlarda kendi fonksiyonlarımızı yazmak isteyebiliriz. Mesela Python'da kullanılan standart dize fonksiyonları Türkçe harfler ile başa çıkamıyorlar. Diyelim ki "ARMUT" dizesindeki büyük harfleri küçükleri ile değiştirmek istiyoruz. Bunun için dizeler için tanımlanmış olan <font color='green'>lower</font> fonksiyonunu kullanacağız.
End of explanation
il = "IĞDIR"
print il.lower()
Explanation: Buraya kadar bir terslik yok. Fakat aynı fonksiyon dizede Türkçe harfler varsa doğru çalışmıyor.
End of explanation
il = "IĞDIR"
il = il.replace("Ğ", "ğ")
il = il.replace("I", "ı")
print il.lower()
Explanation: Bu sorunu herhangi bir dize içindeki harf gruplarını değiştirmek için kullanılan <font color='green'>replace</font> fonksiyonu ile çözebiliriz.
End of explanation
tat = "KEŞKÜL"
print tat.lower()
Explanation: Peki ya kelimemiz "KEŞKÜL" olsa?
End of explanation
tat = "KEŞKÜL"
tat = tat.replace("Ş", "ş")
tat = tat.replace("Ü", "ü")
print tat.lower()
Explanation: Belli ki "Ş" ve "Ü" harfleri için de harf değiştirme işlemlerini yapmamız gerekecek.
End of explanation
def lowertr(kelime):
kelime = kelime.replace("Ö", "ö")
kelime = kelime.replace("Ü", "ü")
kelime = kelime.replace("İ", "i")
kelime = kelime.replace("Ğ", "ğ")
kelime = kelime.replace("Ş", "ş")
kelime = kelime.replace("Ç", "ç")
kelime = kelime.replace("I", "ı")
kelime = kelime.lower()
print kelime
Explanation: Bunun sonu yok. İyisi mi kelimeyi verdiğimizde Türkçe büyük harfleri küçükleri ile değiştiren bir fonksiyon yazmalı.
Python'da bir fonksiyonu <font color='green'>def</font> anahtar kelimesi ile tanımlıyoruz.
End of explanation
lowertr("IĞDIR")
lowertr("KEŞKÜL")
Explanation: Fonksiyonumuzun ismi <font color='blue'>lowertr</font> ve kelime isimli parametreyi alıyor. Hemen deneyelim.
End of explanation
dize = "IĞDIR KEŞKÜL CENNETİ"
dizekucuk = lowertr(dize)
print dizekucuk
Explanation: İşte bu kadar. Şimdi küçük harflerle yazılmış olan kelimeyi de başka bir değişkene kaydetmeyi deneyelim.
End of explanation
def lowertr(kelime):
kelime = kelime.replace("Ö", "ö")
kelime = kelime.replace("Ü", "ü")
kelime = kelime.replace("İ", "i")
kelime = kelime.replace("Ğ", "ğ")
kelime = kelime.replace("Ş", "ş")
kelime = kelime.replace("Ç", "ç")
kelime = kelime.replace("I", "ı")
kelime = kelime.lower()
return kelime # Bu satır küçük harfli kelimeyi döndürüyor
Explanation: İkinci satırdaki "None" kelimesi dizge_kucuk değişkeninin içinin boş olduğunu gösteriyor. Demek ki fonksiyonumuz küçük harflere çevirdiği dizgeyi geri döndürmüyor. Ufak bir değişiklik ile hemen hallederiz. Tek yapmamız gereken <font color='green'>return</font> anahtar kelimesi ile çağrılan fonksiyonun ne çevireceğini belirtmek.
End of explanation
dizge = "IĞDIR KEŞKÜL CENNETİ"
dizgekucuk = lowertr(dizge)
print dizgekucuk
Explanation: Artık dizgekucuk değişkenine doğru şekilde atama yapabiliriz.
End of explanation
def kucukbuyuk(kelime, secenek):
if (secenek):
kelime = lowertr(kelime) # Daha önce hazırlamıştık
else:
kelime = kelime.replace("ö", "Ö")
kelime = kelime.replace("ü", "Ü")
kelime = kelime.replace("i", "İ")
kelime = kelime.replace("ğ", "Ğ")
kelime = kelime.replace("ş", "Ş")
kelime = kelime.replace("ç", "Ç")
kelime = kelime.replace("ı", "I")
kelime = kelime.upper()
return kelime
Explanation: İşi biraz daha ilerletelim. İki seçeneğimiz olsun. İlk seçenekte daha önceki gibi dizeyi küçük harflere, ikinci seçenekte ise büyük harflere dönüştürelim. Seçeneği belirlemek için ikinci bir parametre tanımlayabiliriz. Bu parametre 1 değerini alırsa küçük harf seçeneğini, 0 değerini alırsa da büyük harf seçeneğini uygulayacağız.
End of explanation
print kucukbuyuk("Iğdır", 1)
print kucukbuyuk("Keşkül", 0)
Explanation: Dikkat ederseniz daha önce tanımladığımız <font color='blue'>lowertr</font> fonksiyonunu yeni fonksiyonumuzun ilk seçeneğinde kullandık.
End of explanation
def kucukbuyuk(kelime, secenek=1):
if (secenek):
kelime = lowertr(kelime)
else:
kelime = kelime.replace("ö", "Ö")
kelime = kelime.replace("ü", "Ü")
kelime = kelime.replace("i", "İ")
kelime = kelime.replace("ğ", "Ğ")
kelime = kelime.replace("ş", "Ş")
kelime = kelime.replace("ç", "Ç")
kelime = kelime.replace("ı", "I")
kelime = kelime.upper()
return kelime
Explanation: Her seferinde 1 ya da 0 diye seçeneği girmek insanın canını sıkabilir. Özellikle de çoğu zaman küçük harflere dönüştürmeyi kullanacaksanız. Python'da bunun da çaresi var. Tek yapmamız gereken bazı parametrelerin varsayılan değerlerini girmek.
End of explanation
print kucukbuyuk("IĞDIR") # secenek parametresi varsayılan değeri alacak
print kucukbuyuk("kasımpatı", 0) # secenek parametresi ayrıca girilmiş
Explanation: İkinci parametre, yani secenek, artık <font color='red'>varsayılan parametre</font>. Fonksiyon çağrılırken varsayılan parametrenin değeri verilmezse, fonksiyon içerisinde secenek parametresinin değeri 1 olarak alınıyor.
End of explanation
% pylab inline
x = arange(0.0, 2.0, 0.01)
y = cos(2*pi*x)
plot(x, y, linewidth=2, color="red", linestyle="dashed")
Explanation: Burada bir noktaya dikkat etmeli. Varsayılan parametreler her zaman diğer parametrelerden sonra yazılmak zorundalar.
Parametrelerin varsayılan değerlerini girmenin getirdiği bir kolaylık daha var. Bu tür parametrelerden birkaç tane varsa, isimleri verilerek herhangi bir sırayla girilebiliyorlar. Bunun en güzel örneği grafik çizmede kullanılan <font color='green'>plot</font> fonksiyonu. Aşağıdaki örnekteki linewidth, color ve linestyle parametreleri x ve y parametrelerinden sonra istenilen sırada girilebilir.
End of explanation
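# A quick extra check (not in the original tutorial): because linewidth,
# color and linestyle are default parameters, the same plot can be produced
# with the keyword arguments supplied in a different order.
plot(x, y, linestyle="dashed", color="red", linewidth=2)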
def dikd_alan(en, boy=1):
return en*boy
print dikd_alan(3,5)
print dikd_alan(3)
print dikd_alan(boy=2, en=5)
Explanation: Let's do a few simple numerical examples.
For example, let's write a function that takes the two sides of a rectangle as parameters and returns its area, with a default value of 1 for the length of the second side.
End of explanation
def dikd_alan(en, boy=None):
if boy == None:
boy = en
return en*boy
print dikd_alan(3,5)
print dikd_alan(3)
Explanation: In the last example, because we used the parameter names, we did not have to follow the order defined in the function (that is, we are not forced to give en first and then boy).
As a more natural assumption, say we want boy to take the same value when only en is given; in other words, when a single side length is given we assume our rectangle is a square. There is no direct way to do this, but we can use the following trick: we give boy a default value of None, and if that value has not been changed we set it equal to en.
End of explanation
def toplam(N):
t = 0
i = 1
while i<=N:
t += i
i += 1
return t
print toplam(100)
Explanation: A function that adds up the integers from 1 to a given number:
End of explanation
def toplam(N, ilk=0, adim=1):
t = 0
i = ilk
while i <= N:
t += i
i += adim
return t
print toplam(100) # 0 + 1 + 2 + ... + 100
print toplam(100,10) # 10 + 11 + ... + 100
print toplam(100, adim=5) # 5 + 10 + ... + 100
Explanation: Let's generalize this and write a function that returns the sum of the arithmetic sequence obtained by stepping from ilk to N in increments of adim. Let the default values be 0 for ilk and 1 for adim.
End of explanation
def toplam(N, f):
t = 0
i = 0
while i<=N:
t += f(i)
i += 1
return t
Explanation: Since we are generalizing, let's go beyond arithmetic sequences. For instance, let's write a function that computes the sum $\sum_{i=0}^{N} f(i)$ for a general function f.
Here we must pass another function to the toplam function as a parameter. Since functions, like everything else in Python, are objects, this can be handled simply.
End of explanation
from math import sqrt
print toplam(10, sqrt) # sqrt(0) + sqrt(1) + ... + sqrt(10)
Explanation: Of course, to compute this sum we must use the name of a function that returns a numeric value. For instance, let's take the square-root function from the math module.
End of explanation
def f(x):
return 1.0/(2*x+1)
print toplam(10, f) # 1/3 + 1/5 + ... + 1/21
Explanation: Or, let's define a function ourselves.
End of explanation
print toplam(10, lambda x: 1.0/(2*x+1))
Explanation: In this and similar situations where a function object is expected, another alternative is anonymous functions. We would write the example above with an anonymous function as follows.
End of explanation
f = lambda x: x/(x+1.0)
g = lambda x,y: 1.0*(x+y)/(x*y)
print f(3)
print g(1,2)
Explanation: The lambda here is the command that creates an anonymous function. It is followed by the parameter list, then a colon, and then the value to be returned. If desired, as below, anonymous functions can also be assigned to names and reused.
End of explanation |
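# A short recap sketch (not part of the original tutorial). It assumes the most
# recent definitions above: toplam(N, f) and kucukbuyuk(kelime, secenek=1).
print toplam(10, lambda x: x*x)   # 0^2 + 1^2 + ... + 10^2 = 385
print kucukbuyuk("Iğdır")         # secenek defaults to 1, so the result is lowercase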
6,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>Introduction to Machine Learning with Neuroimaging</center>
<center><b>Written by Luke Chang ([email protected])</b></center>
<center><p>This tutorial will provide a quick introduction to running machine learning analyses using modern Python-based modules. We will briefly cover supervised methods using an example pain dataset from Chang et al., (In Press) and Krishnan et al., (Under Review), which is publicly available in <a href=http
Step1: Prediction/Classification
Download Pain Data from Neurovault
Step2: Load Pickled Data
<p>This allows you to save the downloaded data files into a 'pickled' object for fast reloading. If you've already downloaded the test data and saved it as a pickle object, then start here to save time.</p>
Step3: Run Prediction Analyses
<p>This code will initialize a Predict object from nltools. Requires
Step4: <p>Run Linear Support Vector Regression with leave one subject out cross-validation (LOSO)</p>
Step5: <p>Run Ridge Regression with 5 fold Cross-Validation and a nested cross-validation to estimate shrinkage parameter</p>
Step6: <p>Run Principal Components Regression with no Cross-Validation. This pattern should be very similar to the pain pattern reported in Krishnan et al., (Under Review). Principal Components Regression is much slower than the other linear methods, but scales well when feature set is large</p>
Step7: <p>You might be interested in only training a pattern on a subset of the brain using an anatomical mask. Here we use a mask of subcortex.</p>
Step8: Apply Mask
<p>After training a pattern we are typically interested in testing it on new data to see how sensitive and specific it might be to the construct it was trained to predict. Here we provide an example of applying it to a new dataset using the 'dot_product' method. This will produce a scalar prediction per image akin to regression (don't forget to add the intercept!). We can also use a standardized method, which examines the overall spatial correlation between two images (i.e., 'correlation').</p>
Step9: ROC Plot
<p>We are often interested in evaluating how well a pattern can discriminate between different classes of data. Here we apply a pattern to a new dataset and evaluate how well it can discriminate between high and low pain using single-interval classification. We use the Roc class to initialize an Roc object and the plot() and summary() methods to run the analyses. We could also just run the calculate() method to run the analysis without plotting.</p>
Step10: <p>The above example uses single-interval classification, which attempts to determine the optimal classification interval. However, sometimes we are interested in directly comparing responses to two images within the same person. In this situation we should use forced-choice classification, which looks at the relative classification accuracy between two images.</p>
Step11: Coactivation Based Clustering w/ Neurosynth
<p>Sometimes we are interested in understanding how a region of interest might be functionally organized. One increasingly popular technique is to parcellate the region based on shared patterns of functional connectivity. These functional correlations can be derived from functional connectivity using resting-state fMRI or functional coactivation from meta-analyses. See our <a href=http
Step12: Run Clustering
Step13: Plot Slice Montages
Step14: Decode with Neurosynth | Python Code:
# iPython notebook magic commands
%load_ext autoreload
%autoreload 2
%matplotlib inline
#General modules
import os
from os.path import join, basename, isdir
from os import makedirs
import pandas as pd
import matplotlib.pyplot as plt
import time
import pickle
# Supervised Modules
from pyneurovault import api
import nibabel as nb
import numpy as np
from nltools.analysis import Predict, apply_mask, Roc
from nltools.utils import get_resource_path
# Unsupervised Modules
# from neurosynth import Dataset, Clusterer, Masker, Decoder
# from neurosynth.analysis.cluster import cluster_similarity
from nilearn import plotting, datasets
from sklearn.decomposition import RandomizedPCA
# Define output folder
out_folder = "/Users/lukechang/Downloads/nv_tmp"
Explanation: <center>Introduction to Machine Learning with Neuroimaging</center>
<center><b>Written by Luke Chang ([email protected])</b></center>
<center><p>This tutorial will provide a quick introduction to running machine learning analyses using modern Python-based modules. We will briefly cover supervised methods using an example pain dataset from Chang et al., (In Press) and Krishnan et al., (Under Review), which is publicly available in <a href=http://neurovault.org>neurovault</a>. We will also provide a brief example of unsupervised methods to parcellate a region of interest into functionally homogeneous subregions using <a href=http://neurosynth.org>neurosynth</a>.</p></center>
Installation
<ol>
<li>Install python (I recommend <a href = http://continuum.com>Anaconda</a>, which includes prepackaged core scientific computing packages)</li>
<li>Install <a href = http://nilearn.org>nilearn</a> (> pip install nilearn)</li>
<li>Install Luke's <a href = http://github.org/ljchang>nltools</a> toolbox (> pip install git+http://github.org/ljchang/neurolearn)</li>
<li>Install <a href = http://neurosynth.org>neurosynth</a> (> pip install neurosynth)</li>
</ol>
Function Definitions
End of explanation
tic = time.time() #Start Timer
# Pain Collection
collection = 504
# Will extract all collections and images in one query to work from
nv = api.NeuroVault()
# Download all images to file
standard = os.path.join(os.path.dirname(api.__file__),'data','MNI152_T1_2mm_brain.nii.gz')
nv.download_images(dest_dir = out_folder,target=standard, collection_ids=[collection],resample=True)
# Create Variables
collection_data = nv.get_images_df().ix[nv.get_images_df().collection_id == collection,:].reset_index()
img_index = sorted((e,i) for i,e in enumerate(collection_data.file))
index = [x[1] for x in img_index]
img_file = [x[0] for x in img_index]
dat = nb.funcs.concat_images([os.path.join(out_folder,'resampled','00' + str(x) + '.nii.gz') for x in collection_data.image_id[index]])
# dat = nb.funcs.concat_images([os.path.join(out_folder,'original',str(x) + '.nii.gz') for x in collection_data.image_id[index]])
holdout = [int(x.split('_')[-2]) for x in img_file]
heat_level = [x.split('_')[-1].split('.')[0] for x in img_file]
Y_dict = {'High':3,'Medium':2,'Low':1}
Y = np.array([Y_dict[x] for x in heat_level])
print Y
# Pickle for later use
# Saving the objects:
with open(os.path.join(out_folder,'Pain_Data.pkl'), 'w') as f:
pickle.dump([dat,holdout,Y], f)
print 'Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: Prediction/Classification
Download Pain Data from Neurovault
End of explanation
tic = time.time() #Start Timer
# Getting back the objects:
with open(os.path.join(out_folder,'Pain_Data.pkl')) as f:
dat, holdout, Y = pickle.load(f)
print 'Load Pickled File - Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: Load Pickled Data
<p>This allows you to save the downloaded data files into a 'pickled' object for fast reloading. If you've already downloaded the test data and saved it as a pickle object, then start here to save time.</p>
End of explanation
tic = time.time() #Start Timer
# Test Prediction with kfold xVal
svr = Predict(dat,Y,algorithm='svr', output_dir=out_folder,
cv_dict = {'type':'kfolds','n_folds':5,'subject_id':holdout},
**{'kernel':"linear"})
svr.predict()
print 'Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: Run Prediction Analyses
<p>This code will initialize a Predict object from nltools. Requires:
<ol>
<li>Nibabel Data object</li>
<li>Y - training labels</li>
<li>algorithm - algorithm name</li>
<li>subject_id - vector indicating subject labels</li>
<li>output_dir- path of folder to save data</li>
<li>cv_dict - Optional Cross-Validation dictionary</li>
<li>**{kwargs} - Optional algorithm dictionary</li>
</ol>
</p>
<p>Will run Linear Support Vector Regression ('svr') with 5 fold cross-validation.</p>
End of explanation
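# Optional sketch (not in the original notebook): inspect the cross-validated
# predictions stored on the fitted Predict object. The attribute names below
# (yfit_xval, Y) follow their usage later in this notebook.
print svr.yfit_xval[:10]   # cross-validated predictions for the first 10 images
print svr.Y[:10]           # corresponding training labels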
tic = time.time() #Start Timer
# Test Prediction with LOSO xVal
svr = Predict(dat,Y,algorithm='svr', output_dir=out_folder,
cv_dict = {'type':'loso','subject_id':holdout},
**{'kernel':"linear"})
svr.predict()
print 'Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: <p>Run Linear Support Vector Regression with leave one subject out cross-validation (LOSO)</p>
End of explanation
tic = time.time() #Start Timer
# Test Ridge with kfold xVal + grid search for regularization
ridge = Predict(dat, Y, algorithm='ridgeCV',output_dir=out_folder,
cv_dict = {'type':'kfolds','n_folds':5,'subject_id':holdout},
**{'alphas':np.linspace(.1, 10, 5)})
ridge.predict()
print 'Total Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: <p>Run Ridge Regression with 5 fold Cross-Validation and a nested cross-validation to estimate shrinkage parameter</p>
End of explanation
tic = time.time() #Start Timer
# Principal Components Regression
pcr = Predict(dat,Y,algorithm='pcr', output_dir=out_folder,
cv_dict = {'type':'kfolds','n_folds':5,'subject_id':holdout})
pcr.predict()
print 'Total Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: <p>Run Principal Components Regression with no Cross-Validation. This pattern should be very similar to the pain pattern reported in Krishnan et al., (Under Review). Principal Components Regression is much slower than the other linear methods, but scales well when feature set is large</p>
End of explanation
tic = time.time() #Start Timer
mask = nb.load(os.path.join(get_resource_path(),'gray_matter_mask.nii.gz'))
# Test Prediction with kfold xVal
ridge = Predict(dat,Y,algorithm='ridge', output_dir=out_folder,
cv_dict = {'type':'kfolds','n_folds':5, 'subject_id':holdout},
mask = mask)
ridge.predict()
print 'Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: <p>You might be interested in only training a pattern on a subset of the brain using an anatomical mask. Here we use a mask of subcortex.</p>
End of explanation
tic = time.time() #Start Timer
# Load data using nibabel
pines = nb.load(os.path.join(out_folder, 'ridgeCV_weightmap.nii.gz'))
pexpd = apply_mask(data=dat, weight_map=pines, output_dir=out_folder, method='dot_product', save_output=True)
pexpc = apply_mask(data=dat, weight_map=pines, output_dir=out_folder, method='correlation', save_output=True)
plt.subplot(2, 1, 1)
plt.plot(pexpd)
plt.title('Pattern Expression')
plt.ylabel('Dot Product')
plt.subplot(2, 1, 2)
plt.plot(pexpc)
plt.xlabel('Subject')
plt.ylabel('Correlation')
print 'Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: Apply Mask
<p>After training a pattern we are typically interested in testing it on new data to see how sensitive and specific it might be to the construct it was trained to predict. Here we provide an example of applying it to a new dataset using the 'dot_product' method. This will produce a scalar prediction per image akin to regression (don't forget to add the intercept!). We can also use a standardized method, which examines the overall spatial correlation between two images (i.e., 'correlation').</p>
End of explanation
tic = time.time() #Start Timer
# Create Variables
include = (svr.Y==3) | (svr.Y==1)
input_values = svr.yfit_xval[include]
binary_outcome = svr.Y[include]
binary_outcome = binary_outcome==3
# Single-Interval
roc = Roc(input_values=input_values, binary_outcome=binary_outcome)
roc.plot()
roc.summary()
print 'Total Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: ROC Plot
<p>We are often interested in evaluating how well a pattern can discriminate between different classes of data. Here we apply a pattern to a new dataset and evaluate how well it can discriminate between high and low pain using single-interval classification. We use the Roc class to initialize an Roc object and the plot() and summary() methods to run the analyses. We could also just run the calculate() method to run the analysis without plotting.</p>
End of explanation
tic = time.time() #Start Timer
# Forced Choice
roc_fc = Roc(input_values=input_values, binary_outcome=binary_outcome, forced_choice=True)
roc_fc.plot()
roc_fc.summary()
print 'Total Elapsed: %.2f seconds' % (time.time() - tic) #Stop timer
Explanation: <p>The above example uses single-interval classification, which attempts to determine the optimal classification interval. However, sometimes we are interested in directly comparing responses to two images within the same person. In this situation we should use forced-choice classification, which looks at the relative classification accuracy between two images.</p>
End of explanation
# Initialize main clustering object: use PCA with 100 components for dimensionality reduction;
# keep all voxels with minimum of 100 studies (approx. 1% base rate).
reducer = RandomizedPCA(100)
roi_mask = os.path.join(out_folder,'Clustering/Masks/ROIs/FSL_TPJ.nii.gz')
clstr = Clusterer(dataset, 'coactivation', output_dir=os.path.join(out_folder,'Clustering'),
min_studies_per_voxel=100, reduce_reference=reducer, roi_mask=roi_mask)
Explanation: Coactivation Based Clustering w/ Neurosynth
<p>Sometimes we are interested in understanding how a region of interest might be functionally organized. One increasingly popular technique is to parcellate the region based on shared patterns of functional connectivity. These functional correlations can be derived from functional connectivity using resting-state fMRI or functional coactivation from meta-analyses. See our <a href=http://cosanlab.com/static/papers/Changetal2013CerCor.pdf>paper</a> on the insula for example (<a href = http://neurovault.org/collections/13/>Images</a> can be downloaded from neurovault). This tutorial will show how to perform a similar functional coactivation based parcellation of a region using <a href=http://neurosynth.org>neurosynth</a> tools.</p>
<p>The basic idea is to:
<ol>
<li>Create a binary mask of a region (we will use TPJ as example)</li>
<li>Reduce the dimensionality of voxels in the neurosynth dataset (we use randomizedPCA)</li>
<li>Calculate the similarity coefficient for every voxel in the mask with the neurosynth components (we use pearson correlation)</li>
<li>Cluster the voxels in the mask based on shared patterns of correlation (we will use k-means clustering)</li>
<li>Decode the function of each subregion in the mask using Neurosynth terms (we use topic modeling to reduce dimensionality)</li>
</ol>
</p>
<p>Warning: This can take a lot of RAM if you use the full neurosynth Dataset!</p>
Initialize Clusterer
End of explanation
clstr.cluster(algorithm='kmeans', n_clusters=range(2,11,1),
bundle=False, coactivation_maps=True,
precomputed_distances=True)
Explanation: Run Clustering
End of explanation
K = range(2,11,1)
fig, axes = plt.subplots(len(K), 1)
for i, k in enumerate(K):
plotting.plot_roi(os.path.join(out_folder,'Clustering/kmeans_k%d/kmeans_k%dcluster_labels.nii.gz' % (k, k)),
title="Whole-brain k-means clustering (k = %d)" % k, display_mode='x',
cut_coords=range(-60,-45,5) + range(50,65,5), axes=axes[i])
fig.set_size_inches((15, 20))
fig.savefig(os.path.join(out_folder,'Clustering/Sagittal_Slice_Montage.png'))
Explanation: Plot Slice Montages
End of explanation
# Decoding Polar Plots
K = range(2,11,1)
dcdr = Decoder(dataset, method='roi')
for i in K:
res = dcdr.decode(os.path.join(out_folder,'Clustering/kmeans_k' + str(i) + '/kmeans_k' + str(i) + 'cluster_labels.nii.gz'),
save=os.path.join(out_folder,'Clustering/kmeans_k' + str(i) + '/decoding_results_z.txt'), value='r')
_ = dcdr.plot_polar(res, overplot=True, n_top=2)
_.savefig(os.path.join(out_folder,'Clustering/kmeans_k' + str(i) + '/Decode_PolarPlot_k' + str(i) + '.pdf'))
dat.get_affine()
Explanation: Decode with Neurosynth
End of explanation |
6,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the Impact of Failures (and letting loose a Chaos Monkey)
Planned (maintenance) and unplanned failure of nodes and interfaces in the network is a frequent occurrence. While most networks are designed to be tolerant to such failures, gaining confidence that they are in fact tolerant is difficult. Network engineers often reason about network behavior under failures manually, which is a complex and error-prone task. Consequently, the network could be one link failure away from an outage that leads to a massive loss of revenue and reputation.
Fortunately, based just on device configurations, Batfish makes it easy to proactively analyze the network behavior under failures and offer guarantees on its tolerance to a range of failure scenarios.
In this notebook, we will show how to use Batfish to analyze network behavior under failures. Specifically, we will describe how to simulate a specific network failure scenario, how to check forwarding changes for all flows in that scenario, and finally how to identify vulnerabilities using Chaos Monkey style testing.
Check out a video demo of this notebook here.
Initialization
We will use the example network shown below with three autonomous systems (ASes) that connect via eBGP. Within each AS, iBGP and OSPF are used. The configurations of these devices are available here.
Step1: bf.fork_snapshot
Step2: In the code, bf.fork_snapshot accepts four parameters
Step3: Great! We have confirmed that Paris can still reach PoP via Asia even when London has failed.
differentialReachability
Step4: We see from the result that the failures of London would in fact permit a flow that was originally being blocked by the AS1_TO_AS2 ACL on New York. This difference reveals a potential security vulnerability! Luckily, Batfish allows us to catch and fix it before something bad happens in production. Similarly, if there were flows that were carried in BASE_SNAPSHOT_NAME but dropped in FAIL_LONDON_SNAPSHOT_NAME (an availability issue), Batfish would have caught it.
Check out our Introduction to Forwarding Change Validation notebook for more use cases of differential reachability queries.
Chaos Monkey Testing
Chaos Monkey style testing is a common method to build highly reliable software systems. In it, different components of the system are randomly failed to see what impact it has on the service performance. Such testing is known to be highly effective but is not possible to do in the networking context. Until now.
Batfish can easily enable Chaos Monkey testing for networks. Using the basic functions shown above, we can compose more complicated functions that randomly fail links and identify potential vulnerabilities in the network.
Suppose we wanted our network to be robust to any possible 2-link failures. The example below shows how to perform Chaos Monkey testing to identify 2-link-failures that can cause an outage. Specifically, we will fail a pair of links picked at random and check whether the forwarding behavior would be changed by the failure using the differentialReachability question.
Next, we run Chaos Monkey testing, as shown below.
Step5: We see that there is a failure scenario to which the network is not robust, that is, the failure will lead to a change in the forwarding behavior of at least some flows. This scenario is the failure of two links that connect Seattle to Philadelphia and San Francisco. This is unexpected because Seattle has another link that connects it to the rest of the network and should generally be available for traffic.
Let us diagnose this situation to understand the problem. To begin, we first see which flows are impacted.
Step6: We see that when the links fail, if we ignore flows that end in Seattle (whose links have failed), a general pattern is that Asia loses connectivity to the US. Given the network topology, this is quite surprising because after those failures we would have expected Asia to be able to reach the US via Europe.
To investigate the root cause further, we ask Batfish to show how the BGP RIBs in the two cases differ. We do so using the bgpRib question and comparing the two snapshots as in the differential reachability question. We focus on the impacted destination prefix 2.128.0.0/16.
Step7: We see that routers in Asia (Hongkong, Singapore, and Tokyo) and Seattle do not have any BGP routes to the prefix in the failure snapshot, which they did in the reference snapshot. The missing route in Seattle can be explained via missing routes in Asia since Seattle depended on Asia after losing its two other links.
That Europe still has the routes after the failure alerts us to the possibility of improper filtering of incoming routes in Asia. So, we should check on that. There are many ways to analyze the incoming route filters; we'll use the definedStructures question of Batfish to extract necessary definitions that we need to view.
Step8: We see that the route map as1_to_as3 is defined on lines 119 and 120. Now we can quickly navigate to those lines in the config file, as shown below.
Step9: We see that the route map is denying routes that match the access-list '102.' Let's look at the definition of this list, which is on lines 115-117 per the defined structures list above. | Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
# Initialize the example network and snapshot
NETWORK_NAME = "example_network"
BASE_SNAPSHOT_NAME = "base"
SNAPSHOT_PATH = "networks/failure-analysis"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=BASE_SNAPSHOT_NAME, overwrite=True)
Explanation: Analyzing the Impact of Failures (and letting loose a Chaos Monkey)
Planned (maintenance) and unplanned failure of nodes and interfaces in the network is a frequent occurrence. While most networks are designed to be tolerant to such failures, gaining confidence that they are in fact tolerant is difficult. Network engineers often reason about network behavior under failures manually, which is a complex and error-prone task. Consequently, the network could be one link failure away from an outage that leads to a massive loss of revenue and reputation.
Fortunately, based just on device configurations, Batfish makes it easy to proactively analyze the network behavior under failures and offer guarantees on its tolerance to a range of failure scenarios.
In this notebook, we will show how to use Batfish to analyze network behavior under failures. Specifically, we will describe how to simulate a specific network failure scenario, how to check forwarding changes for all flows in that scenario, and finally how to identify vulnerabilities using Chaos Monkey style testing.
Check out a video demo of this notebook here.
Initialization
We will use the example network shown below with three autonomous systems (ASes) that connect via eBGP. Within each AS, iBGP and OSPF are used. The configurations of these devices are available here.
End of explanation
# Fork a new snapshot with London deactivated
FAIL_LONDON_SNAPSHOT_NAME = "fail_london"
bf.fork_snapshot(BASE_SNAPSHOT_NAME, FAIL_LONDON_SNAPSHOT_NAME, deactivate_nodes=["london"], overwrite=True)
Explanation: bf.fork_snapshot: Simulating network failures
To simulate network failures, Batfish offers a simple API bf.fork_snapshot that clones the original snapshot to a new one with the specified failure scenarios.
Suppose we want to analyze the scenario where node London fails. We can use bf.fork_snapshot to simulate this failure as shown below.
End of explanation
# Get the answer of a traceroute question from Paris to the PoP's prefix
pop_prefix = "2.128.0.0/24"
tr_answer = bf.q.traceroute(
startLocation="paris",
headers=HeaderConstraints(dstIps=pop_prefix),
maxTraces=1
).answer(FAIL_LONDON_SNAPSHOT_NAME)
# Display the result in a pretty form
show(tr_answer.frame())
Explanation: In the code, bf.fork_snapshot accepts four parameters: BASE_SNAPSHOT_NAME indicates the original snapshot name, FAIL_LONDON_SNAPSHOT_NAME is the name of the new snapshot, deactivate_nodes is a list of nodes that we wish to fail, and overwrite=True indicates that we want to reinitialize the snapshot if it already exists.
In addition to deactivate_nodes, bf.fork_snapshot can also take deactivate_interfaces as a parameter to simulate interface failures. Combining these options, Batfish allows us to simulate complicated failure scenarios involving interfaces and nodes, for example: bf.fork_snapshot(BASE_SNAPSHOT_NAME, FAIL_SNAPSHOT_NAME, deactivate_nodes=FAIL_NODES, deactivate_interfaces=FAIL_INTERFACES, overwrite=True).
To understand network behavior under the simulated failure, we can run any Batfish question on the newly created snapshot. As an example, to ensure that the flows from Paris would still reach PoP even if London failed, we can run the traceroute question on the snapshot in which London has failed, as shown below. (See the Introduction to Forwarding Analysis using Batfish notebook for more forwarding analysis questions).
End of explanation
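# Sketch only (not part of the original notebook): node and interface failures
# can be combined in a single fork. FAIL_NODES and FAIL_INTERFACES are
# hypothetical placeholders here; interface entries can be taken from the
# results of the edges question, as done in the Chaos Monkey example later.
FAIL_NODES = ["london"]
FAIL_INTERFACES = []  # e.g., entries from bf.q.edges().answer(BASE_SNAPSHOT_NAME).frame().Interface
FAIL_COMBINED_SNAPSHOT_NAME = "fail_combined"
bf.fork_snapshot(BASE_SNAPSHOT_NAME, FAIL_COMBINED_SNAPSHOT_NAME,
                 deactivate_nodes=FAIL_NODES,
                 deactivate_interfaces=FAIL_INTERFACES,
                 overwrite=True)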
# Get the answer to the differential reachability question given two snapshots
diff_reachability_answer = bf.q.differentialReachability(
headers=HeaderConstraints(dstIps=pop_prefix), maxTraces=1).answer(
snapshot=FAIL_LONDON_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME)
# Display the results
show(diff_reachability_answer.frame())
Explanation: Great! We have confirmed that Paris can still reach PoP via Asia even when London has failed.
differentialReachability: Checking changes of forwarding behavior for all flows
Above, we saw how Batfish can create new snapshots that simulate failure scenarios and run analysis on them. This capability is useful to test the forwarding behavior under interesting failure scenarios. In some cases, we may also want to verify that certain network failures have no impact to the network, i.e., the forwarding behavior of all flows would not be changed by those failures.
We now show a powerful question differentialReachability of Batfish, which allows us to analyze changes of any flow between two snapshots. This question will report any flow that was successfully delivered in the base snapshot but will not be delivered in failure snapshot or the other way around---not delivered in the base snapshot but delivered in the failure snapshot.
Let us revisit the scenario where London fails. To understand if this failure impacts any flow to the PoP in the US, we can run the differential reachability question as below, by scoping the search to flows destined to the PoP(from anywhere) and comparing FAIL_LONDON_SNAPSHOT_NAME with BASE_SNAPSHOT_NAME as the reference. Leaving the headers field unscoped would search across flows to all possible destinations.
End of explanation
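# Sketch (not in the original notebook): the same comparison with the headers
# field left unscoped, which searches across flows to all possible destinations.
all_flows_answer = bf.q.differentialReachability().answer(
    snapshot=FAIL_LONDON_SNAPSHOT_NAME,
    reference_snapshot=BASE_SNAPSHOT_NAME)
show(all_flows_answer.frame())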
# Fix the random seed so the demonstration is reproducible
random.seed(0)
max_iterations = 5
# Get all links in the network
links = bf.q.edges().answer(BASE_SNAPSHOT_NAME).frame()
for i in range(max_iterations):
# Get two links at random
failed_link1_index = random.randint(0, len(links) - 1)
failed_link2_index = random.randint(0, len(links) - 1)
# Fork a snapshot with the link failures
FAIL_SNAPSHOT_NAME = "fail_snapshot"
bf.fork_snapshot(
BASE_SNAPSHOT_NAME,
FAIL_SNAPSHOT_NAME,
deactivate_interfaces=[links.loc[failed_link1_index].Interface,
links.loc[failed_link2_index].Interface],
overwrite=True)
# Run a differential reachability question
answer = bf.q.differentialReachability(
headers=HeaderConstraints(dstIps=pop_prefix)
).answer(
snapshot=FAIL_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME
)
# A non-empty returned answer means changed forwarding behavior
# We print the bad failure scenario and exit
if len(answer.frame()) > 0:
show(links.iloc[[failed_link1_index, failed_link2_index]])
break
Explanation: We see from the result that the failures of London would in fact permit a flow that was originally being blocked by the AS1_TO_AS2 ACL on New York. This difference reveals a potential security vulnerability! Luckily, Batfish allows us to catch and fix it before something bad happens in production. Similarly, if there were flows that were carried in BASE_SNAPSHOT_NAME but dropped in FAIL_LONDON_SNAPSHOT_NAME (an availability issue), Batfish would have caught it.
Check out our Introduction to Forwarding Change Validation notebook for more use cases of differential reachability queries.
Chaos Monkey Testing
Chaos Monkey style testing is a common method to build highly reliable software systems. In it, different components of the system are randomly failed to see what impact it has on the service performance. Such testing is known to be highly effective but is not possible to do in the networking context. Until now.
Batfish can easily enable Chaos Monkey testing for networks. Using the basic functions shown above, we can compose more complicated functions that randomly fail links and identify potential vulnerabilities in the network.
Suppose we wanted our network to be robust to any possible 2-link failures. The example below shows how to perform Chaos Monkey testing to identify 2-link-failures that can cause an outage. Specifically, we will fail a pair of links picked at random and check whether the forwarding behavior would be changed by the failure using the differentialReachability question.
Next, we run Chaos Monkey testing, as shown below.
End of explanation
show(answer.frame())
Explanation: We see that there is a failure scenario to which the network is not robust, that is, the failure will lead to a change in the forwarding behavior of at least some flows. This scenario is the failure of two links that connect Seattle to Philadelphia and San Francisco. This is unexpected because Seattle has another link that connects it to the rest of the network and should generally be available for traffic.
Let us diagnose this situation to understand the problem. To begin, we first see which flows are impacted.
End of explanation
diff_routes = bf.q.bgpRib(network="2.128.0.0/16").answer(snapshot=FAIL_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME)
diff_routes
Explanation: We see that when the links fail, if we ignore flows that end in Seattle (whose links have failed), a general pattern is that Asia loses connectivity to the US. Given the network topology, this is quite surprising because after those failures we would have expected Asia to be able to reach the US via Europe.
To investigate the root cause further, we ask Batfish to show how the BGP RIBs in the two cases differ. We do so using the bgpRib question and comparing the two snapshots as in the differential reachability question. We focus on the impacted destination prefix 2.128.0.0/16.
End of explanation
# View all defined structures on 'hongkong'
bf.q.definedStructures(nodes="hongkong").answer()
Explanation: We see that routers in Asia (Hongkong, Singapore, and Tokyo) and Seattle do not have any BGP routes to the prefix in the failure snapshot, which they did in the reference snapshot. The missing route in Seattle can be explained via missing routes in Asia since Seattle depended on Asia after losing its two other links.
That Europe still has the routes after the failure alerts us to the possibility of improper filtering of incoming routes in Asia. So, we should check on that. There are many ways to analyze the incoming route filters; we'll use the definedStructures question of Batfish to extract necessary definitions that we need to view.
End of explanation
# See the config lines where the route map as1_to_as3 is defined
!cat networks/failure-analysis/configs/hongkong.cfg | head -121 | tail -4
Explanation: We see that the route map as1_to_as3 is defined on lines 119 and 120. Now we can quickly navigate to those lines in the config file, as shown below.
End of explanation
# See the config lines where the access list '102' is defined
!cat networks/failure-analysis/configs/hongkong.cfg | head -118 | tail -5
Explanation: We see that the route map is denying routes that match the access-list '102.' Let's look at the definition of this list, which is on lines 115-117 per the defined structures list above.
End of explanation |
6,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Various t0s
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: And let's make our system a little more interesting so that we can discriminate between the various t0s
Step3: t0 Parameters
There are three t0 parameters that are available to define an orbit (but only one of which is editable at any given time), as well as a t0 parameter for the entire system. Let's first access the three t0 parameters for our binary orbit.
't0_supconj' defines the time at which the primary component in our orbit is at superior conjunction. For a binary system in which there are eclipses, this is defined as the primary eclipse. By default this parameter is editable.
Step4: 't0_perpass' defines the time at which both components in our orbit are at periastron passage. By default this parameter is constrained by 't0_supconj'. For more details or information on how to change which parameter is editable, see the Constraints Tutorial.
Step5: The 't0_ref' defines the time at which the primary component in our orbit passes an arbitrary reference point. This 't0_ref' is defined in the same way as PHOEBE legacy's 'HJD0' parameter, so is included for convenience translating between the two.
Step6: In addition, there is a single 't0' parameter that is system-wide. This parameter simply defines the time at which all parameters are defined and therefore at which all computations start. The value of this parameter begins to play an important role if any parameter is given a time-derivative (see apsidal motion for an example) or when using N-body instead of Keplerian dynamics (coming in a future release).
Step7: Influence on Orbits (positions)
Step8: To visualize where these times are with respect to the orbits, we can plot the model orbit and highlight the positions of each star at the times defined by these parameters. Note here that the observer is in the positive w-direction.
NOTE
Step9: Influence on Phasing
All computations in PHOEBE 2 are done in the time-domain. Times can be translated to phases using any ephemeris available, as well as any of the t0s.
By default (if not passing any options), times will be phased using the outer-most orbit in the system and using 't0_supconj'.
Step10: Similarly, if plotting phases on any axis, passing the 't0' keyword will set the zero-phase accordingly. To see this, let's compute a light curve and phase it with the various t0s shown in the orbits above. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Various t0s
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.set_value('sma@binary', 20)
b.set_value('q', 0.8)
b.set_value('ecc', 0.8)
b.set_value('per0', 45)
Explanation: And let's make our system a little more interesting so that we can discriminate between the various t0s
End of explanation
b.get_parameter('t0_supconj', context='component')
Explanation: t0 Parameters
There are three t0 parameters that are available to define an orbit (but only one of which is editable at any given time), as well as a t0 parameter for the entire system. Let's first access the three t0 parameters for our binary orbit.
't0_supconj' defines the time at which the primary component in our orbit is at superior conjunction. For a binary system in which there are eclipses, this is defined as the primary eclipse. By default this parameter is editable.
End of explanation
b.get_parameter('t0_perpass', context='component')
b.get_parameter('t0_perpass', context='constraint')
Explanation: 't0_perpass' defines the time at which both components in our orbit are at periastron passage. By default this parameter is constrained by 't0_supconj'. For more details or information on how to change which parameter is editable, see the Constraints Tutorial.
End of explanation
b.get_parameter('t0_ref', context='component')
b.get_parameter('t0_ref', context='constraint')
Explanation: The 't0_ref' defines the time at which the primary component in our orbit passes an arbitrary reference point. This 't0_ref' is defined in the same way as PHOEBE legacy's 'HJD0' parameter, so is included for convenience translating between the two.
End of explanation
b.get_parameter('t0', context='system')
Explanation: In addition, there is a single 't0' parameter that is system-wide. This parameter simply defines the time at which all parameters are defined and therefore at which all computations start. The value of this parameter begins to play an important role if any parameter is given a time-derivative (see apsidal motion for an example) or when using N-body instead of Keplerian dynamics (coming in a future release).
End of explanation
b.add_dataset('orb', times=np.linspace(-1,1,1001))
b.run_compute(ltte=False)
Explanation: Influence on Orbits (positions)
End of explanation
afig, mplfig = b.plot(x='us', y='ws', z=0, time='t0_supconj', show=True)
afig, mplfig = b.plot(x='us', y='ws', z=0, time='t0_perpass', show=True)
afig, mplfig = b.plot(x='us', y='ws', z=0, time='t0_ref', show=True)
Explanation: To visualize where these times are with respect to the orbits, we can plot the model orbit and highlight the positions of each star at the times defined by these parameters. Note here that the observer is in the positive w-direction.
NOTE: sending z=0 will override the default of ordering in z by vs (the unused coordinate in the same system), which can be expensive to draw.
End of explanation
b.to_phase(0.0)
b.to_phase(0.0, component='binary', t0='t0_supconj')
b.to_phase(0.0, component='binary', t0='t0_perpass')
b.to_phase(0.0, component='binary', t0='t0_ref')
Explanation: Influence on Phasing
All computations in PHOEBE 2 are done in the time-domain. Times can be translated to phases using any ephemeris available, as well as any of the t0s.
By default (if not passing any options), times will be phased using the outer-most orbit in the system and using 't0_supconj'.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,51), ld_func='linear', ld_coeffs=[0.0])
b.run_compute(ltte=False, irrad_method='none', atm='blackbody')
afig, mplfig = b['lc01@model'].plot(x='phases', t0='t0_supconj', xlim=(-0.3,0.3), show=True)
afig, mplfig = b['lc01@model'].plot(x='phases', t0='t0_perpass', xlim=(-0.3,0.3), show=True)
afig, mplfig = b['lc01@model'].plot(x='phases', t0='t0_ref', xlim=(-0.3,0.3), show=True)
Explanation: Similarly, if plotting phases on any axis, passing the 't0' keyword will set the zero-phase accordingly. To see this, let's compute a light curve and phase it with the various t0s shown in the orbits above.
End of explanation |
6,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Classification
Data
https
Step1: Clean the Data
Step2: Feature Columns
Step3: Continuous Features
Number of times pregnant
Plasma glucose concentration at 2 hours in an oral glucose tolerance test
Diastolic blood pressure (mm Hg)
Triceps skin fold thickness (mm)
2-Hour serum insulin (mu U/ml)
Body mass index (weight in kg/(height in m)^2)
Diabetes pedigree function
Step4: Categorical Features
If you know the set of all possible feature values of a column and there are only a few of them, you can use categorical_column_with_vocabulary_list. If you don't know the set of possible values in advance you can use categorical_column_with_hash_bucket
Step5: Converting Continuous to Categorical
Step6: Putting them together
Step7: Train Test Split
Step8: Input Function
Step9: Creating the Model
Step10: Evaluation
Step11: Predictions
Step12: DNN Classifier | Python Code:
import pandas as pd
diabetes = pd.read_csv('data/pima-indians-diabetes.csv')
diabetes.head()
diabetes.columns
Explanation: TensorFlow Classification
Data
https://archive.ics.uci.edu/ml/datasets/pima+indians+diabetes
Title: Pima Indians Diabetes Database
Sources:
(a) Original owners: National Institute of Diabetes and Digestive and
Kidney Diseases
(b) Donor of database: Vincent Sigillito ([email protected])
Research Center, RMI Group Leader
Applied Physics Laboratory
The Johns Hopkins University
Johns Hopkins Road
Laurel, MD 20707
(301) 953-6231
(c) Date received: 9 May 1990
Past Usage:
Smith, J. W., Everhart, J. E., Dickson, W. C., Knowler, W. C., &
Johannes, R. S. (1988). Using the ADAP learning algorithm to forecast
the onset of diabetes mellitus. In Proceedings of the Symposium
on Computer Applications and Medical Care (pp. 261-265). IEEE
Computer Society Press.
The diagnostic, binary-valued variable investigated is whether the
patient shows signs of diabetes according to World Health Organization
criteria (i.e., if the 2 hour post-load plasma glucose was at least
200 mg/dl at any survey examination or if found during routine medical
care). The population lives near Phoenix, Arizona, USA.
Results: Their ADAP algorithm makes a real-valued prediction between
0 and 1. This was transformed into a binary decision using a cutoff of
0.448. Using 576 training instances, the sensitivity and specificity
of their algorithm was 76% on the remaining 192 instances.
Relevant Information:
Several constraints were placed on the selection of these instances from
a larger database. In particular, all patients here are females at
least 21 years old of Pima Indian heritage. ADAP is an adaptive learning
routine that generates and executes digital analogs of perceptron-like
devices. It is a unique algorithm; see the paper for details.
Number of Instances: 768
Number of Attributes: 8 plus class
For Each Attribute: (all numeric-valued)
Number of times pregnant
Plasma glucose concentration at 2 hours in an oral glucose tolerance test
Diastolic blood pressure (mm Hg)
Triceps skin fold thickness (mm)
2-Hour serum insulin (mu U/ml)
Body mass index (weight in kg/(height in m)^2)
Diabetes pedigree function
Age (years)
Class variable (0 or 1)
Missing Attribute Values: Yes
Class Distribution: (class value 1 is interpreted as "tested positive for
diabetes")
Class Value Number of instances
0 500
1 268
Brief statistical analysis:
Attribute number:    Mean:    Standard Deviation:
1.                    3.8      3.4
2.                  120.9     32.0
3.                   69.1     19.4
4.                   20.5     16.0
5.                   79.8    115.2
6.                   32.0      7.9
7.                    0.5      0.3
8.                   33.2     11.8
End of explanation
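# Quick sanity check (not in the original notebook): verify the class
# distribution described above (expected: 500 negative, 268 positive).
diabetes['Class'].value_counts()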
# Columns that will be normalized
cols_to_norm = ['Number_pregnant', 'Glucose_concentration', 'Blood_pressure', 'Triceps',
'Insulin', 'BMI', 'Pedigree']
# Normalizing the columns
diabetes[cols_to_norm] = diabetes[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
diabetes.head()
Explanation: Clean the Data
End of explanation
diabetes.columns
import tensorflow as tf
Explanation: Feature Columns
End of explanation
num_preg = tf.feature_column.numeric_column('Number_pregnant')
plasma_gluc = tf.feature_column.numeric_column('Glucose_concentration')
dias_press = tf.feature_column.numeric_column('Blood_pressure')
tricep = tf.feature_column.numeric_column('Triceps')
insulin = tf.feature_column.numeric_column('Insulin')
bmi = tf.feature_column.numeric_column('BMI')
diabetes_pedigree = tf.feature_column.numeric_column('Pedigree')
age = tf.feature_column.numeric_column('Age')
Explanation: Continuous Features
Number of times pregnant
Plasma glucose concentration a 2 hours in an oral glucose tolerance test
Diastolic blood pressure (mm Hg)
Triceps skin fold thickness (mm)
2-Hour serum insulin (mu U/ml)
Body mass index (weight in kg/(height in m)^2)
Diabetes pedigree function
End of explanation
assigned_group = tf.feature_column.categorical_column_with_vocabulary_list('Group',['A','B','C','D'])
# Alternative
# assigned_group = tf.feature_column.categorical_column_with_hash_bucket('Group', hash_bucket_size=10)
Explanation: Categorical Features
If you know the set of all possible feature values of a column and there are only a few of them, you can use categorical_column_with_vocabulary_list. If you don't know the set of possible values in advance you can use categorical_column_with_hash_bucket
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
diabetes['Age'].hist(bins = 20)
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[20, 30, 40, 50, 60, 70, 80])
Explanation: Converting Continuous to Categorical
End of explanation
feat_cols = [num_preg, plasma_gluc, dias_press, tricep, insulin,
bmi, diabetes_pedigree, assigned_group, age_buckets]
Explanation: Putting them together
End of explanation
diabetes.head()
diabetes.info()
# Dropping 'Class' to exclude the column
x_data = diabetes.drop('Class',axis = 1)
labels = diabetes['Class']
from sklearn.model_selection import train_test_split
# Test train split
X_train, X_test, y_train, y_test = train_test_split(x_data,
labels,
test_size = 0.33,
random_state = 101)
Explanation: Train Test Split
End of explanation
input_func = tf.estimator.inputs.pandas_input_fn(x = X_train,
y = y_train,
batch_size = 10,
num_epochs = 1000,
shuffle = True)
Explanation: Input Function
End of explanation
model = tf.estimator.LinearClassifier(feature_columns = feat_cols,
n_classes = 2)
model.train(input_fn = input_func,
steps = 1000)
# Useful link for your own data
# https://stackoverflow.com/questions/44664285/what-are-the-contraints-for-tensorflow-scope-names
Explanation: Creating the Model
End of explanation
eval_input_func = tf.estimator.inputs.pandas_input_fn(
x = X_test,
y = y_test,
batch_size = 10,
num_epochs = 1,
shuffle = False)
results = model.evaluate(eval_input_func)
results
Explanation: Evaluation
End of explanation
pred_input_func = tf.estimator.inputs.pandas_input_fn(
x = X_test,
batch_size = 10,
num_epochs = 1,
shuffle = False)
# Predictions is a generator!
predictions = model.predict(pred_input_func)
list(predictions)[0:5]
Explanation: Predictions
End of explanation
dnn_model = tf.estimator.DNNClassifier(hidden_units=[10, 10, 10],
feature_columns = feat_cols,
n_classes = 2)
# Creating an embedding column with 4 groups (A, B, C, D)
embedded_group_column = tf.feature_column.embedding_column(assigned_group,
dimension = 4)
feat_cols = [num_preg, plasma_gluc, dias_press, tricep, insulin,
bmi, diabetes_pedigree, embedded_group_column, age_buckets]
input_func = tf.estimator.inputs.pandas_input_fn(x = X_train,
y = y_train,
batch_size = 10,
num_epochs = 1000,
shuffle = True)
dnn_model = tf.estimator.DNNClassifier(hidden_units=[10, 10, 10],
feature_columns = feat_cols,
n_classes = 2)
dnn_model.train(input_fn = input_func,
steps = 1000)
eval_input_func = tf.estimator.inputs.pandas_input_fn(
x = X_test,
y = y_test,
batch_size = 10,
num_epochs = 1,
shuffle = False)
dnn_model.evaluate(eval_input_func)
Explanation: DNN Classifier
End of explanation |
6,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TUTORIAL 05 - Empirical Interpolation Method for non-affine elliptic problems
Keywords
Step1: 3. Affine decomposition
The parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu})$ is trivially affine.
The empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$ to obtain an efficient (approximately affine) expansion of $f(\cdot; \boldsymbol{\mu})$.
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the Gaussian class
Step5: 4.4. Prepare reduction with a reduced basis method
Step6: 4.5. Perform the offline phase
Step7: 4.6.1. Perform an online solve
Step8: 4.6.2. Perform an online solve with a lower number of EIM terms
Step9: 4.6.3. Perform an online solve with an even lower number of EIM terms
Step10: 4.7.1. Perform an error analysis
Step11: 4.7.2. Perform an error analysis with respect to the exact problem
Step12: 4.7.3. Perform an error analysis with respect to the exact problem, but employing a smaller number of EIM terms | Python Code:
from dolfin import *
from rbnics import *
Explanation: TUTORIAL 05 - Empirical Interpolation Method for non-affine elliptic problems
Keywords: empirical interpolation method
1. Introduction
In this Tutorial, we consider steady heat conduction in a two-dimensional square domain $\Omega = (-1, 1)^2$.
The boundary $\partial\Omega$ is kept at a reference temperature (say, zero). The conductivity coefficient is fixed to 1, while the heat source is characterized by the following expression
$$
g(\boldsymbol{x}; \boldsymbol{\mu}) = \exp\left( -2 (x_0-\mu_0)^2 - 2 (x_1 - \mu_1)^2 \right) \quad \forall \boldsymbol{x} = (x_0, x_1) \in \Omega.
$$
The parameter vector $\boldsymbol{\mu}$, given by
$$
\boldsymbol{\mu} = (\mu_0,\mu_1)
$$
affects the center of the Gaussian source $g(\boldsymbol{x}; \boldsymbol{\mu})$, which could be located at any point of $\Omega$. Thus, the parameter domain is
$$
\mathbb{P}=[-1,1]^2.
$$
In order to obtain a faster evaluation (yet, provably accurate) of the problem we propose to use a certified reduced basis approximation for the problem. In order to preserve the affinity assumption (for the sake of performance) the empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$.
2. Parametrized formulation
Let $u(\boldsymbol{\mu})$ be the temperature in the domain $\Omega$.
We will directly provide a weak formulation for this problem
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \left\{ v \in H^1(\Omega(\mu_0)): v|_{\partial\Omega} = 0\right\}
$$
Note that, as in the previous tutorial, the function space is parameter dependent due to the shape variation.
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(u,v;\boldsymbol{\mu}) = \int_{\Omega} \nabla u \cdot \nabla v \ d\boldsymbol{x}$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(v;\boldsymbol{\mu}) = \int_\Omega g(\boldsymbol{\mu}) v \ d\boldsymbol{x}.$$
End of explanation
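# Illustration only (not part of the RBniCS workflow): plot the non-affine
# Gaussian source g(x; mu) on (-1, 1)^2 for one arbitrary parameter value
# taken from the parameter domain P = [-1, 1]^2.
import numpy as np
import matplotlib.pyplot as plt
mu_example = (0.3, -0.5)  # hypothetical parameter value chosen for illustration
x0, x1 = np.meshgrid(np.linspace(-1., 1., 200), np.linspace(-1., 1., 200))
g_values = np.exp(- 2 * (x0 - mu_example[0])**2 - 2 * (x1 - mu_example[1])**2)
plt.contourf(x0, x1, g_values, 50)
plt.colorbar()
plt.title("Gaussian source g(x; mu) for mu = (0.3, -0.5)")
plt.show()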
@EIM()
class Gaussian(EllipticCoerciveProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticCoerciveProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.u = TrialFunction(V)
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=subdomains)
self.f = ParametrizedExpression(
self, "exp(- 2 * pow(x[0] - mu[0], 2) - 2 * pow(x[1] - mu[1], 2))", mu=(0., 0.),
element=V.ufl_element())
# note that we cannot use self.mu in the initialization of self.f, because self.mu has not been initialized yet
# Return custom problem name
def name(self):
return "GaussianEIM"
    # Return a lower bound for the coercivity constant alpha.
def get_stability_factor_lower_bound(self):
return 1.
# Return theta multiplicative terms of the affine expansion of the problem.
def compute_theta(self, term):
if term == "a":
return (1.,)
elif term == "f":
return (1.,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
u = self.u
a0 = inner(grad(u), grad(v)) * dx
return (a0,)
elif term == "f":
f = self.f
f0 = f * v * dx
return (f0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),
DirichletBC(self.V, Constant(0.0), self.boundaries, 2),
DirichletBC(self.V, Constant(0.0), self.boundaries, 3)]
return (bc0,)
elif term == "inner_product":
u = self.u
x0 = inner(grad(u), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
Explanation: 3. Affine decomposition
The parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu})$ is trivially affine.
The empirical interpolation method will be used on the forcing term $g(\boldsymbol{x}; \boldsymbol{\mu})$ to obtain an efficient (approximately affine) expansion of $f(\cdot; \boldsymbol{\mu})$.
End of explanation
mesh = Mesh("data/gaussian.xml")
subdomains = MeshFunction("size_t", mesh, "data/gaussian_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/gaussian_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
V = FunctionSpace(mesh, "Lagrange", 1)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = Gaussian(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(-1.0, 1.0), (-1.0, 1.0)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the Gaussian class
End of explanation
reduction_method = ReducedBasis(problem)
reduction_method.set_Nmax(20, EIM=21)
reduction_method.set_tolerance(1e-4, EIM=1e-3)
Explanation: 4.4. Prepare reduction with a reduced basis method
End of explanation
reduction_method.initialize_training_set(50, EIM=60)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (0.3, -1.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
Explanation: 4.6.1. Perform an online solve
End of explanation
reduced_solution_11 = reduced_problem.solve(EIM=11)
plot(reduced_solution_11, reduced_problem=reduced_problem)
Explanation: 4.6.2. Perform an online solve with a lower number of EIM terms
End of explanation
reduced_solution_1 = reduced_problem.solve(EIM=1)
plot(reduced_solution_1, reduced_problem=reduced_problem)
Explanation: 4.6.3. Perform an online solve with an even lower number of EIM terms
End of explanation
reduction_method.initialize_testing_set(50, EIM=60)
reduction_method.error_analysis(filename="error_analysis")
Explanation: 4.7.1. Perform an error analysis
End of explanation
reduction_method.error_analysis(
with_respect_to=exact_problem, filename="error_analysis__with_respect_to_exact")
Explanation: 4.7.2. Perform an error analysis with respect to the exact problem
End of explanation
reduction_method.error_analysis(
with_respect_to=exact_problem, EIM=11, filename="error_analysis__with_respect_to_exact__EIM_11")
Explanation: 4.7.3. Perform an error analysis with respect to the exact problem, but employing a smaller number of EIM terms
End of explanation |
6,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python
A Tutorial by Jacob Gerace
Why Python?
Clean syntax
The same code can run on all Operating Systems
Extensive first and third party libraries (of particular note for our purposes is NumPy)
Markdown Sidenote
This text is written in a Markdown block. Markdown is a straightforward way to format writeups in Jupyter, but I won't cover it here for the sake of brevity.
See if you can use Markdown in your next homework, here's a link that explains the formatting
Step1: More Complicated Data Types
Step2: Basic Things to do with Variables. Especially Floats.
Step3: Conditionals in Python
Step4: Functions in Python
Step5: Loops in Python
Step6: Numpy - "The Fundamental Package for Scientific Computing with Python" | Python Code:
#A variable stores a piece of data and gives it a name
answer = 42
#answer contained an integer because we gave it an integer!
is_it_tuesday = True
is_it_wednesday = False
#these both are 'booleans' or true/false values
pi_approx = 3.1415
#This will be a floating point number, or a number containing digits after the decimal point
my_name = "Jacob"
#This is a string datatype, the name coming from a string of characters
#Data doesn't have to be a singular unit
#p.s., we can print all of these with a print command. For Example:
print(answer)
print(pi_approx)
Explanation: Introduction to Python
A Tutorial by Jacob Gerace
Why Python?
Clean syntax
The same code can run on all Operating Systems
Extensive first and third party libraries (of particular note for our purposes is NumPy)
Markdown Sidenote
This text is written in a Markdown block. Markdown is a straightforward way to format writeups in Jupyter, but I won't cover it here for the sake of brevity.
See if you can use Markdown in your next homework, here's a link that explains the formatting: https://daringfireball.net/projects/markdown/syntax .
You can also look at existing Markdown examples (i.e. this worksheet) and emulate the style. Double click a Markdown box in Jupyter to show the code.
LaTeX Sidenote
LaTeX (pronounced "La-tech") is itself a language, used widely to write documents with symbolic math.
When you add a mathematical formula to these markdown blocks, the math is in LaTeX.
Ex from class: $$V \frac{dC}{dt} = u(t) - Q C(t)$$
A good resource: https://en.wikibooks.org/wiki/LaTeX/Mathematics
What I hope you'll get out of this tutorial:
The feeling that you'll "know where to start" when you see python code in lecture, or when you need to write python for an assignment.
(You won't be a python expert after one hour)
What lists, loops, functions, NumPy, and conditionals are, and how to use them.
Python Basics
Variable Basics
End of explanation
#What if we want to store many integers? We need a list!
prices = [10, 20, 30, 40, 50]
#This is a way to define a list in place. We can also make an empty list and add to it.
colors = []
colors.append("Green")
colors.append("Blue")
colors.append("Red")
print(colors)
#We can also add unlike data to a list
prices.append("Sixty")
#As an exercise, look up lists in python and find out how to add in the middle of a list!
print(prices)
#We can access a specific element of a list too:
print(colors[0])
print(colors[2])
#Notice here how the first element of the list is index 0, not 1!
#Languages like MATLAB are 1 indexed, be careful!
#In addition to lists, there are tuples
#Tuples behave very similarly to lists except that you can't change them after you make them
#An empty Tuple isn't very useful:
empty_tuple = ()
#Nor is a tuple with just one value:
one_tuple = ("first",)
#But tuples with many values are useful:
rosa_parks_info = ("Rosa", "Parks", 1913, "February", 4)
#You can access tuples just like lists
print(rosa_parks_info[0] + " " + rosa_parks_info[1])
#You cannot modify existing tuples, but you can make new tuples that extend the information.
#I expect Tuples to come up less than lists. So we'll just leave it at that.
Explanation: More Complicated Data Types
End of explanation
float1 = 5.75
float2 = 2.25
#Addition, subtraction, multiplication, division are as you expect
print(float1 + float2)
print(float1 - float2)
print(float1 * float2)
print(float1 / float2)
#Here's an interesting one that showed up in your first homework. Modulus. The remainder of division:
print(5 % 2)
#Just about every standard math function on a calculator has a python equivalent pre made.
#however, they are from the 'math' package in python. Let's add that package!
import math
print(math.log(float1))
print(math.exp(float2))
print(math.pow(2,5))
# There is a quicker way to write exponents if you want:
print(2.0**5.0)
#Unlike MATLAB, multiplying a plain Python list by a number repeats the list rather than doing element-wise math
list3 = [1, 2, 3, 4, 5]
print(2 * list3)
#There's more you can do with lists in normal python, but we'll save more operations until we get to numpy.
Explanation: Basic Things to do with Variables. Especially Floats.
End of explanation
#Sometimes you want to execute code only in certain circumstances. We saw this on HW1.
#Should be fairly straightforward:
answer = 42
if answer == 42:
print('This is the answer to the ultimate question')
elif answer < 42:
print('This is less than the answer to the ultimate question')
else:
print('This is more than the answer to the ultimate question')
print('This print statement is run no matter what because it is not indented!')
#An if statement is an example of a structure that creates a new block. The block includes all of the code that is
#indented. The indentation (tab character) is imperative. Don't forget it!
#This is normally just good coding style in other languages, but in python it isn't optional
#We can check multiple things at once using boolean operations
rainy = True
day = "Wednesday"
if (rainy == False) and (day != "Tuesday"):
    #'and' is the boolean and: the combined condition is True only if both sides are true, and False otherwise
print("The price for golfing is the full $10")
elif (rainy == True) and (day == "Tuesday"):
print("The price for golfing is reduced to $5!")
elif (rainy == True) or (day == "Tuesday"):
    #'or' is the inclusive boolean or: False only if both sides are false, True otherwise.
print("The price for golfing is reduced to $7.50!")
#You can structure these statements more neatly if you "nest" if statements (put an if statement inside an if statement)
#But this is just for edification.
Explanation: Conditionals in Python
End of explanation
#We can separate off code into functions, that can take input and can give output. They serve as black boxes from the
#perspective of the rest of our code
#use the def keyword, and indent because this creates a new block
def print_me( str ):
print(str)
#End with the "return" keyword
return
#Your functions can return data if you so choose
def my_favorite_song( ):
ans = "Amsterdam - Imagine Dragons"
return ans
#call functions by repeating their name, and putting your variable in the parenthesis.
#Your variable need not be named the same thing, but it should be the right type!
text = "I'll take the West train, just by the side of Amsterdam"
print_me(text)
print(my_favorite_song())
Explanation: Functions in Python
End of explanation
#Repeat code until a conditional statement ends the loop
#Let's try printing a list
fib = [1, 1, 2, 3, 5, 8]
#While loops are the basic type
i = 0
while(i < len(fib)):
print(fib[i])
i = i + 1
#In matlab, to do the same thing you would have the conditional as: counter < (length(fib) + 1)
#This is because matlab starts indexing at 1, and python starts at 0.
#The above type of loop is so common that the 'for' loop is the way to write it faster.
print("Let's try that again")
#This is most similar to for loops in matlab
for i in range(0, len(fib)) :
print(fib[i])
print("One more time:")
#Or you can do so even neater
for e in fib:
print(e)
Explanation: Loops in Python
End of explanation
import numpy as np
#Here, we grab all of the functions and tools from the numpy package and store them in a local variable called np.
#You can call that variable whatever you like, but 'np' is standard.
#numpy has arrays, which function similarly to python lists.
a = np.array([1,2,3])
b = np.array([9,8,7])
#Be careful with syntax. The parentheses and brackets are both required!
print(a)
#Access elements from them just like you would a regular list
print(a[0])
#Element-wise operations are a breeze!
c = a + b
d = a - b
e = a * b
f = a / b
print(c)
print(d)
print(e)
print(f)
#This is different from MATLAB where you add a dot to get element wise operators.
#If you actually do want to do matrix multiplication...
g = np.matmul(np.transpose(a),b)
print(g)
#Now, let's use numpy for something essential for you: Numeric Integration
#Define the function you want to integrate....
def foo(y,t):
return t
#Note this doesn't use y in the return. That is okay, but we need to include it just to satisfy the function we will use.
#Set your initial or boundary condition
IC = 0
#Give the number of points to evaluate the integration
start_time = 0
end_time = 10
num_times = 101
times = np.linspace(start_time, end_time, num_times)
from scipy.integrate import odeint
integrated_func = odeint(foo,IC,times)
#Can we plot the result? You betcha. Just import a new package
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact
plt.plot(times, integrated_func)
#Very similar to MATLAB!
Explanation: Numpy - "The Fundamental Package for Scientific Computing with Python"
End of explanation |
6,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Gibbs samples
It's nice to start with the Gibbs sampled chains, since they almost certainly look nicer. First, read them in.
Read in your conjugate Gibbs chains.
Step2: Visual inspection
You've already used the most important method of vetting chains
Step3: TBC
TBC
TBC
If you thought some burn-in should be removed, do so here by changing the lower limit of burn.
Step4: Gelman-Rubin statistic
Recall from the notes that the Gelman-Rubin convergence statistic, $R$, quantitatively tests the similarity of independent chains intended to sample the same PDF. To be meaningful, they should start from different locations and burn-in should be removed.
For a given parameter, $\theta$, the $R$ statistic compares the variance across chains with the variance within a chain. Intuitively, if the chains are random-walking in very different places, i.e. not sampling the same distribution, $R$ will be large.
We'd like to see $R\approx 1$; for example, $R<1.1$ is often used.
Step5: Checkpoint
Step6: Checkpoint
Step7: As with the Gelman-Rubin statistic, this is a case where one might be interested in seeing the effective number of samples for the most degenerate linear combinations of parameters, rather than the parameters themselves.
Something to do
By now you are probably bored. Don't worry. Here is some work for you to do.
Let's get a sense of how many samples are really needed to, e.g., determine 1D credible intervals (as opposed to making the whole posterior look nice). Remember that the effective number of samples is less than the total, obviously.
At this point, we're done comparing the individual chains, so we can lump them all together into one massive list of MCMC samples.
Step8: Let's have a look at the credible interval calculation for the first parameter. If you followed the notebooks as given, and didn't remove any burn-in, the full chain should be of length 40,000.
Step9: The PDF estimate should look pretty reliable with so many samples. The question is, if we're going to reduce this to a statement like $\mu=X^{+Y}_{-Z}$, keeping only up to the leading significant figure of $Y$ and $Z$, how many did we actually need to keep?
Thin the chain by factors of 4, 40, and 400 (to produce chains of length about 10000, 1000 and 100), and see how the endpoints of the 68.3% credible intervals compare. We're looking at the endpoints rather than the values of $Y$ and $Z$ above because the latter are more volatile (depending also on the estimate of $X$).
Remember that thinning by a factor of 4 means that we keep only every 4th entry in the chain, not that we simply select the first 25% of samples. So we're not answering how long we needed to bother running the chain to begin with - that's a slightly different question. We're finding out how redundant our samples are, not just in the "effective independence" sense, but for the specific purpose of quantifying this credible interval.
Step10: Checkpoint
Step11: Metropolis samples
Now, read in the Metropolis chains and perform the same checks.
Step12: Below we plot the traces. Address the same 3 questions posed for the Gibbs samples.
Step13: TBC
TBC
TBC
Compare the two methods in these terms. (Though keep in mind that we solved slightly different problems in the two notebooks, making this comparison less than entirely fair. Or, go back and run Metropolis sampling on a background-free simulation if you really want to. We'll wait.)
Commentary TBC
On the basis of the traces above, choose a burn-in length to remove from the beginning of each chain.
Step14: Here we compute the G-R criterion. Do the values make sense in light of your visual inspection?
Step15: Commentary TBC
Next, we'll look at the autocorrelation plot. If you had to guess an autocorrelation length, what would it be?
Step16: Commentary TBC
Next, the effective number of samples. How does it compare to the Gibbs case? | Python Code:
exec(open('tbc.py').read()) # define TBC and TBC_above
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
from glob import glob
import incredible as cr
Explanation: Tutorial: MCMC Diagnostics
You should already have run two different MCMC algorithms to generate chains for the AGN photometry problem (or an approximation thereof). Let's work through the process of diagnosing whether these Markov chains are usefully sampling the posterior distribution.
The diagnostics discussed below include both qualitative and quantitative checks. We don't particularly think it's all that instructive to write the code that does the quantitative calculations - though there is surely room for improvement or expansion if you're interested - so instead we will demonstrate how to use functions provided by the incredible and pandas packages.
End of explanation
TBC() # change path if need be
# chains = [np.loadtxt(f) for f in glob('../ignore/agn_gibbs_chain_*.txt')]
param_labels = [r'$\mu$', r'$\sigma$', r'$x_0$', r'$y_0$']
Explanation: Gibbs samples
It's nice to start with the Gibbs sampled chains, since they almost certainly look nicer. First, read them in.
Read in your conjugate Gibbs chains.
End of explanation
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(len(param_labels), 1);
cr.plot_traces(chains, ax, labels=param_labels, Line2D_kwargs={'markersize':1.0})
Explanation: Visual inspection
You've already used the most important method of vetting chains: visual inspection. The key questions are:
1. Do multiple, independent chains appear to be sampling the same distribution (have they converged to the same distribution)?
2. Is there a clear "burn-in" period before convergence that should be eliminated?
3. Are the chains highly autocorrelated (taking small steps compared with the width of the posterior)? This is not an issue per se, if the chains are long enough, although it means the sampler is not moving as efficiently as one might hope.
Plot the parameter traces below, and answer these questions (qualitatively) for the conjugate Gibbs sampling chains.
End of explanation
burn = 0
for i in range(len(chains)):
chains[i] = chains[i][range(burn, chains[i].shape[0]),:]
Explanation: TBC
TBC
TBC
If you thought some burn-in should be removed, do so here by changing the lower limit of burn.
End of explanation
cr.GelmanRubinR(chains)
Explanation: Gelman-Rubin statistic
Recall from the notes that the Gelman-Rubin convergence statistic, $R$, quantitatively tests the similarity of independent chains intended to sample the same PDF. To be meaningful, they should start from different locations and burn-in should be removed.
For a given parameter, $\theta$, the $R$ statistic compares the variance across chains with the variance within a chain. Intuitively, if the chains are random-walking in very different places, i.e. not sampling the same distribution, $R$ will be large.
We'd like to see $R\approx 1$; for example, $R<1.1$ is often used.
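As a rough sketch of what goes into $R$ for a single parameter (a simplified version of the usual formula, using the numpy already imported above; not necessarily what incredible does internally):
def gelman_rubin_sketch(chains_1d):
    # chains_1d: list of equal-length 1D arrays, one per chain, with burn-in already removed
    m, n = len(chains_1d), len(chains_1d[0])
    means = np.array([np.mean(c) for c in chains_1d])
    W = np.mean([np.var(c, ddof=1) for c in chains_1d])  # mean within-chain variance
    B = n * np.var(means, ddof=1)                        # between-chain variance
    V = (n - 1) / n * W + B / n                          # pooled variance estimate
    return np.sqrt(V / W)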
End of explanation
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(len(param_labels), 1);
for j,lab in enumerate(param_labels):
pd.plotting.autocorrelation_plot(chains[0][:,j], ax=ax[j]);
ax[j].set_ylabel(lab+' autocorrelation')
Explanation: Checkpoint: If your Gibbs sampler works properly, $R$ for the chains we ran should be very close to 1 (we have differences of order 0.00001).
Autocorrelation
Similarly, the autocorrelation of a sequence, as a function of lag, $k$, can be quantified:
$\rho_k = \frac{\mathrm{Cov}_i\left(\theta_i,\theta_{i+k}\right)}{\mathrm{Var}(\theta)}$
The larger lag one needs to get a small autocorrelation, the less informative individual samples are.
The pandas function plotting.autocorrelation_plot() is useful for this. Note that seemingly random oscillations basically tell us the level of noise due to the finite chain length. A coherent drop as a function of lag indicates a genuine autocorrelation, and the lag at which it drops to within the noise is an approximate autocorrelation length. If we needed to thin the chains to conserve disk space, this would be a reasonable factor to thin by.
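If you want a number rather than a plot, the lag-$k$ autocorrelation can also be estimated directly; a minimal sketch (illustrative only, not the pandas implementation) is:
def autocorr_sketch(x, k):
    # empirical lag-k autocorrelation of a single 1D chain, for k >= 1
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    return np.mean((x[:-k] - xbar) * (x[k:] - xbar)) / np.var(x)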
End of explanation
cr.effective_samples(chains, maxlag=500) # `maxlag' might be something you need to play with, in practice
Explanation: Checkpoint: Again, for this problem, the Gibbs chains should be very well behaved. Our autocorrelation plots basically look like noise (almost all points within the horizontal, dashed lines that pandas provides as an estimate of the noise).
Effective number of independent samples
From $m$ chains of length $n$, we can also estimate the "effective number of independent samples" as
$n_\mathrm{eff} = \frac{mn}{1+2\sum_{t=1}^\infty \hat{\rho}_t}$, with
$\hat{\rho}_t = 1 - \frac{V_t}{2V}$ ($V$ as in the Gelman-Rubin calculation), and
$V_t = \frac{1}{m(n-t)} \sum_{j=1}^m \sum_{i=t+1}^n (\theta_{i,j} - \theta_{i-t,j})^2$.
In practice, the sum in $n_\mathrm{eff}$ is cut off when the estimates $\hat{\rho}_t$ become "too noisy", e.g. when the sum of two successive values $\hat{\rho}_t$ and $\hat{\rho}_{t+1}$ is negative. Roughly speaking, this should occur when the lag is of the order of the autocorrelation length.
The effective_samples function allows you to pass a guess at this maximum lag, since doing the calculation to arbitrarily long lags becomes very expensive. It will issue a warning if it thinks this maximum lag is too small, according to the criterion above.
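For concreteness, a bare-bones version of this estimate for one parameter, following the formulas above (a sketch, not the incredible implementation), could look like:
def effective_samples_sketch(chains_1d, maxlag=500):
    # chains_1d: list of m equal-length 1D arrays for a single parameter
    arr = np.array(chains_1d)
    m, n = arr.shape
    W = np.mean([np.var(c, ddof=1) for c in chains_1d])
    B_over_n = np.var([np.mean(c) for c in chains_1d], ddof=1)
    V = (n - 1) / n * W + B_over_n                   # same V as in the Gelman-Rubin calculation
    rho_sum, prev = 0.0, None
    for t in range(1, maxlag):
        Vt = np.mean((arr[:, t:] - arr[:, :-t])**2)  # the variogram V_t
        rho_t = 1.0 - Vt / (2.0 * V)
        if prev is not None and prev + rho_t < 0:    # stop once successive estimates turn noisy
            break
        rho_sum += rho_t
        prev = rho_t
    return m * n / (1.0 + 2.0 * rho_sum)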
End of explanation
chain = np.concatenate(chains, axis=0)
Explanation: As with the Gelman-Rubin statistic, this is a case where one might be interested in seeing the effective number of samples for the most degenerate linear combinations of parameters, rather than the parameters themselves.
Something to do
By now you are probably bored. Don't worry. Here is some work for you to do.
Let's get a sense of how many samples are really needed to, e.g., determine 1D credible intervals (as opposed to making the whole posterior look nice). Remember that the effective number of samples is less than the total, obviously.
At this point, we're done comparing the individual chains, so we can lump them all together into one massive list of MCMC samples.
End of explanation
print(chain.shape[0], 'samples')
plt.rcParams['figure.figsize'] = (14.0, 5.0)
fig, ax = plt.subplots(1, 2);
h40k = cr.whist(chain[:,0], plot=ax[0])
ci40k = cr.whist_ci(h40k, plot=ax[1]);
ax[0].set_xlabel(r'$\mu$');
ax[1].set_xlabel(r'$\mu$');
ci40k
Explanation: Let's have a look at the credible interval calculation for the first parameter. If you followed the notebooks as given, and didn't remove any burn-in, the full chain should be of length 40,000.
End of explanation
TBC()
# No clues here, but it's pretty much cut and paste.
# Analogous to the cell above, save the output of `whist` in h10k, h1k, h100, and the output of
# whist_ci in ci10k, ci1k and ci100. This is so we can plot them all together later.
Explanation: The PDF estimate should look pretty reliable with so many samples. The question is, if we're going to reduce this to a statement like $\mu=X^{+Y}_{-Z}$, keeping only up to the leading significant figure of $Y$ and $Z$, how many did we actually need to keep?
Thin the chain by factors of 4, 40, and 400 (to produce chains of length about 10000, 1000 and 100), and see how the endpoints of the 68.3% credible intervals compare. We're looking at the endpoints rather than the values of $Y$ and $Z$ above because the latter are more volatile (depending also on the estimate of $X$).
Remember that thinning by a factor of 4 means that we keep only every 4th entry in the chain, not that we simply select the first 25% of samples. So we're not answering how long we needed to bother running the chain to begin with - that's a slightly different question. We're finding out how redundant our samples are, not just in the "effective independence" sense, but for the specific purpose of quantifying this credible interval.
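To be explicit about the slicing (a hint only; the TBC cell above is still yours to complete), thinning the concatenated chain looks like:
chain_10k = chain[::4]    # keep every 4th row, ~10000 samples
chain_1k  = chain[::40]   # keep every 40th row, ~1000 samples
chain_100 = chain[::400]  # keep every 400th row, ~100 samples
# by contrast, chain[:len(chain)//4] would simply truncate to the first 25% of samples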
End of explanation
plt.rcParams['figure.figsize'] = (14.0, 5.0)
fig, ax = plt.subplots(1, 2);
ax[0].plot(h40k['x'], h40k['density'], '-', label='40k');
ax[0].plot(h10k['x'], h10k['density'], '-', label='10k');
ax[0].plot(h1k['x'], h1k['density'], '-', label='1k');
ax[0].plot(h100['x'], h100['density'], '-', label='100');
ax[0].legend();
ax[0].set_xlabel(r'$\mu$');
ax[1].plot(0.0, ci40k['mode'], 'o', color='C0', label='40k');
ax[1].plot([0.0]*2, [ci40k['min'][0],ci40k['max'][0]], '-', color='C0', linewidth=3);
ax[1].plot([0.0]*2, [ci40k['min'][1],ci40k['max'][1]], '--', color='C0');
ax[1].plot(1.0, ci10k['mode'], 'o', color='C1', label='10k');
ax[1].plot([1.0]*2, [ci10k['min'][0],ci10k['max'][0]], '-', color='C1', linewidth=3);
ax[1].plot([1.0]*2, [ci10k['min'][1],ci10k['max'][1]], '--', color='C1');
ax[1].plot(2.0, ci1k['mode'], 'o', color='C2', label='1k');
ax[1].plot([2.0]*2, [ci1k['min'][0],ci1k['max'][0]], '-', color='C2', linewidth=3);
ax[1].plot([2.0]*2, [ci1k['min'][1],ci1k['max'][1]], '--', color='C2');
ax[1].plot(3.0, ci100['mode'], 'o', color='C3', label='100');
ax[1].plot([3.0]*2, [ci100['min'][0],ci100['max'][0]], '-', color='C3', linewidth=3);
ax[1].plot([3.0]*2, [ci100['min'][1],ci100['max'][1]], '--', color='C3');
ax[1].legend();
ax[1].set_ylabel(r'$\mu$');
Explanation: Checkpoint: Your mileage may vary, of course. But we got a difference of unity in one endpoint in the 100-sample case, and otherwise everything was identical.
... which is a little surprising, honestly, even though we knew the autocorrelation was quite low in this case. But here's a slightly different question: which of the possible results would you be confident enough to put in a paper? The cell below compares them visually.
End of explanation
TBC() # change path if necessary
# chains = [np.loadtxt(f) for f in glob('../ignore/agn_metro_chain_*.txt')]
param_labels = [r'$x_0$', r'$y_0$', r'$\ln F_0$', r'$b$', r'$\sigma$']
Explanation: Metropolis samples
Now, read in the Metropolis chains and perform the same checks.
End of explanation
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(len(param_labels), 1);
cr.plot_traces(chains, ax, labels=param_labels, Line2D_kwargs={'markersize':1.0})
Explanation: Below we plot the traces. Address the same 3 questions posed for the Gibbs samples.
End of explanation
TBC()
# burn =
for i in range(len(chains)):
chains[i] = chains[i][range(burn, chains[i].shape[0]),:]
Explanation: TBC
TBC
TBC
Compare the two methods in these terms. (Though keep in mind that we solved slightly different problems in the two notebooks, making this comparison less than entirely fair. Or, go back and run Metropolis sampling on a background-free simulation if you really want to. We'll wait.)
Commentary TBC
On the basis of the traces above, choose a burn-in length to remove from the beginning of each chain.
End of explanation
cr.GelmanRubinR(chains)
Explanation: Here we compute the G-R criterion. Do the values make sense in light of your visual inspection?
End of explanation
plt.rcParams['figure.figsize'] = (16.0, 12.0)
fig, ax = plt.subplots(len(param_labels), 1);
for j,lab in enumerate(param_labels):
pd.plotting.autocorrelation_plot(chains[0][:,j], ax=ax[j]);
ax[j].set_ylabel(lab+' autocorrelation')
Explanation: Commentary TBC
Next, we'll look at the autocorrelation plot. If you had to guess an autocorrelation length, what would it be?
End of explanation
cr.effective_samples(chains, maxlag=1000)
Explanation: Commentary TBC
Next, the effective number of samples. How does it compare to the Gibbs case?
End of explanation |
6,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read HYDRAD Results
In our paper, we make various comparisons between EBTEL and the field-aligned code HYDRAD. However, runs of HYDRAD are computationally expensive and it is not feasible to do these on the fly. We are in the process of making HYDRAD available to the public so at some point a more open solution may be possible.
For now, we provide only the output of HYDRAD which will be averaged over the coronal portion of the loop to extract timeseries for the electron and ion temperatures as well as density.
Unfortunately, for the entire parameter space covered, the dataset is quite large. Here we provide the utilities for doing the time averaging, but do not as of yet have a way of efficiently and openly distributing this dataset. If you know of a viable solution, do not hesitate to contact us. We would be happy to provide this dataset to anyone interested.
For now, we include only the already-time-averaged results. We merely read in these text files and convert them to Python pickle files. If you have the full HYDRAD dataset in ../results/static/, the averaging calculation will be performed.
Step1: Compute the coronal averages for temperature and density over the whole parameter space
Step2: Define a function to do the spatial averaging.
Step3: Now, if the raw HYDRAD results are in the appropriate directory, do the time average over all of them and save them to a data structure. Otherwise, just load the time-dependent spatial averages from a binary blob.
Step4: Finally, save the results as a serialized pickle file. | Python Code:
import os
import pickle
import numpy as np
Explanation: Read HYDRAD Results
In our paper, we make various comparisons between EBTEL and the field-aligned code HYDRAD. However, runs of HYDRAD are computationally expensive and it is not feasible to do these on the fly. We are in the process of making HYDRAD available to the public so at some point a more open solution may be possible.
For now, we provide only the output of HYDRAD which will be averaged over the coronal portion of the loop to extract timeseries for the electron and ion temperatures as well as density.
Unfortunately, for the entire parameter space covered, the dataset is quite large. Here we provide the utilities for doing the time averaging, but do not as of yet have a way of efficiently and openly distributing this dataset. If you know of a viable solution, do not hesitate to contact us. We would be happy to provide this dataset to anyone interested.
For now, we include only the already-time-averaged results. We merely read in these text files and convert them to Python pickle files. If you have the full HYDRAD dataset in ../results/static/, the averaging calculation will be performed.
End of explanation
hfRes_format = '../results/static/HYDRAD_raw/%s/HYDRAD_%d/Results/profile%d.phy'
hydrad_labs = [20,40,200,500]
hydrad_res = {'electron':{},'ion':{},'single':{},
'loop_midpoint':4.5e+9, 'time':np.arange(0,5001)}
int_perc = 0.9
Explanation: Compute the coronal averages for temperature and density over the whole parameter space: electron heating, ion heating, single fluid and $\tau=20,40,200,500$ s.
First, set some options and define the range of parameters.
End of explanation
def spatial_average(s,f,mp,eps_mp):
#calculate bounds
mp_lower = mp - eps_mp*mp*(1.-1.e9/(1.e9 + 2.*mp))
mp_upper = mp + eps_mp*mp*(1.-1.e9/(1.e9 + 2.*mp))
#find f and s within specified bounds
i_eb = np.where((s>=mp_lower) & (s<=mp_upper))[0]
s_eb = s[i_eb]
f_eb = f[i_eb]
#take average
delta_s = np.gradient(s_eb)
return np.average(f_eb,weights=delta_s)
Explanation: Define a function to do the spatial averaging.
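A quick sanity check of how it will be called below (synthetic values, purely illustrative): for a constant profile the weighted average should return that constant.
s_test = np.linspace(0, 2.0*hydrad_res['loop_midpoint'], 1000)  # fake field-aligned coordinate spanning the loop
f_test = np.ones_like(s_test)                                   # constant profile
print(spatial_average(s_test, f_test, hydrad_res['loop_midpoint'], int_perc))  # should print ~1.0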
End of explanation
if os.path.isdir('../results/static/HYDRAD_raw') and not os.path.isfile('../results/static/hydrad_varying_tau_results.pickle'):
for key in hydrad_res:
if key=='loop_midpoint' or key=='time':
continue
for hl in hydrad_labs:
Te_avg = []
Ti_avg = []
n_avg = []
for t in hydrad_res['time']:
#Load results
temp = np.loadtxt(hfRes_format%(key,hl,t))
#slice
s_temp = temp[:,0]
Te_temp = temp[:,7]
Ti_temp = temp[:,8]
n_temp = temp[:,3]
#save averages
Te_avg.append(spatial_average(s_temp,Te_temp,hydrad_res['loop_midpoint'],int_perc))
Ti_avg.append(spatial_average(s_temp,Ti_temp,hydrad_res['loop_midpoint'],int_perc))
n_avg.append(spatial_average(s_temp,n_temp,hydrad_res['loop_midpoint'],int_perc))
hydrad_res[key]['tau%ds'%hl] = {'Te':Te_avg,'Ti':Ti_avg,'n':n_avg}
else:
with open('../results/static/hydrad_varying_tau_results.pickle','rb') as f:
hydrad_res = pickle.load(f)
Explanation: Now, if the raw HYDRAD results are in the appropriate directory, do the time average over all of them and save them to a data structure. Otherwise, just load the time-dependent spatial averages from a binary blob.
End of explanation
with open(__dest__,'wb') as f:
pickle.dump(hydrad_res,f)
Explanation: Finally, save the results as a serialized pickle file.
End of explanation |
6,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal example
Generate a .csv file that is accepted as input to SmartVA-Analyze 1.1
Step1: Example of simple, hypothetical mapping
If we have data on a set of verbal autopsies (VAs) that did not use the PHMRC Shortened Questionnaire, we must map them to the expected format. This is a simple, hypothetical example for a set of VAs that asked only about injuries, hypertension, chest pain | Python Code:
# SmartVA-Analyze 1.1 accepts a csv file as input
# and expects a column for every field name in the "Guide for data entry.xlsx" spreadsheet
df = pd.DataFrame(index=[0], columns=cb.index.unique())
# SmartVA-Analyze 1.1 also requires a handful of columns that are not in the Guide
df['child_3_10'] = np.nan
df['agedays'] = np.nan
df['child_5_7e'] = np.nan
df['child_5_6e'] = np.nan
df['adult_2_9a'] = np.nan
df.loc[0,'sid'] = 'example'
# if we save this dataframe as a csv, we can run it through SmartVA-Analyze 1.1
fname = 'example_1.csv'
df.to_csv(fname, index=False)
# here are the results of running this example through SmartVA-Analyze 1.1
pd.read_csv('neonate-predictions.csv')
Explanation: Minimal example
Generate a .csv file that is accepted as input to SmartVA-Analyze 1.1
End of explanation
hypothetical_data = pd.DataFrame(index=range(5))
hypothetical_data['sex'] = ['M', 'M', 'F', 'M', 'F']
hypothetical_data['age'] = [35, 45, 75, 67, 91]
hypothetical_data['injury'] = ['rti', 'fall', '', '', '']
hypothetical_data['heart_disease'] = ['N', 'N', 'Y', 'Y', 'Y']
hypothetical_data['chest_pain'] = ['N', 'N', 'Y', 'N', '']
hypothetical_data
# SmartVA-Analyze 1.1 accepts a csv file as input
# and expects a column for every field name in the "Guide for data entry.xlsx" spreadsheet
df = pd.DataFrame(index=hypothetical_data.index, columns=cb.index.unique())
# SmartVA-Analyze 1.1 also requires a handful of columns that are not in the Guide
df['child_3_10'] = np.nan
df['agedays'] = np.nan
df['child_5_7e'] = np.nan
df['child_5_6e'] = np.nan
df['adult_2_9a'] = np.nan
# to find the coding of specific variables, look in the Guide, and
# as necessary refer to the numbers in paper form for the PHMRC Shortened Questionnaire
# http://www.healthdata.org/sites/default/files/files/Tools/SmartVA/2015/PHMRC%20Shortened%20VAI_all-modules_2015.zip
# set id
df['sid'] = hypothetical_data.index
# set sex
df['gen_5_2'] = hypothetical_data['sex'].map({'M': '1', 'F': '2'})
# set age
df['gen_5_4'] = 1 # units are years
df['gen_5_4a'] = hypothetical_data['age'].astype(int)
# good place to save work and confirm that it runs through SmartVA
fname = 'example_2.csv'
df.to_csv(fname, index=False)
# here are the results of running this example
pd.read_csv('adult-predictions.csv')
# map injuries to appropriate codes
# suffered injury?
df['adult_5_1'] = hypothetical_data['injury'].map({'rti':'1', 'fall':'1', '':'0'})
# injury type
df['adult_5_2'] = hypothetical_data['injury'].map({'rti':'1', 'fall':'2'})
# _another_ good place to save work and confirm that it runs through SmartVA
fname = 'example_3.csv'
df.to_csv(fname, index=False)
# here are the results of running this example
pd.read_csv('adult-predictions.csv')
# map heart disease (to column adult_1_1i, see Guide)
df['adult_1_1i'] = hypothetical_data['heart_disease'].map({'Y':'1', 'N':'0'})
# map chest pain (to column adult_2_43, see Guide)
df['adult_2_43'] = hypothetical_data['chest_pain'].map({'Y':'1', 'N':'0', '':'9'})
# and that completes the work for a simple, hypothetical mapping
fname = 'example_4.csv'
df.to_csv(fname, index=False)
# have a look at the non-empty entries in the mapped database:
df.T.dropna()
# here are the results of running this example
pd.read_csv('adult-predictions.csv')
Explanation: Example of simple, hypothetical mapping
If we have data on a set of verbal autopsies (VAs) that did not use the PHMRC Shortened Questionnaire, we must map them to the expected format. This is a simple, hypothetical example for a set of VAs that asked only about injuries, heart disease, and chest pain:
End of explanation |
6,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-gris', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-GRIS
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river-mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
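A free-text STRING property is set with a name of the kind quoted in the sentence above; the example below reuses ORCA025 from that list purely as an illustration.
# Illustrative only, reusing one of the example names quoted above:
DOC.set_value("ORCA025")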
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
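To make the expected number concrete, the sketch below counts points for an idealised regular 1-degree lat-lon grid with no land masking; the grid spacing is hypothetical and the resulting integer only illustrates what this INTEGER property expects.
# Hypothetical 1-degree regular lat-lon grid, ignoring land masking:
nx, ny = 360, 180
nx * ny        # 64800 horizontal points
# DOC.set_value(nx * ny)   # the count would then be recorded like this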
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
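Because this ENUM has cardinality 1.N, more than one of the Valid Choices can apply; the sketch below records two hypothetical choices with the same setter used elsewhere in this notebook - the assumption that repeated calls accumulate values is mine and should be checked against the pyesdoc documentation.
# Hypothetical example - both values come from the Valid Choices list above;
# repeated set_value calls are assumed to append for a 1.N property:
DOC.set_value("Salt")
DOC.set_value("Volume of ocean")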
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
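To motivate why a split is used at all, the back-of-envelope sketch below compares the CFL-type time-step limits of the fast external (barotropic) gravity waves and the much slower internal (baroclinic) waves; the grid spacing, depth and wave speeds are hypothetical illustrative numbers, not values from this document.
# Rough CFL-style limits for a hypothetical 100 km grid over a 4000 m deep ocean:
g, depth, dx = 9.81, 4000.0, 100e3            # m/s2, m, m
external_speed = (g * depth) ** 0.5           # ~200 m/s surface gravity waves
internal_speed = 2.0                          # ~2 m/s first baroclinic mode
dx / external_speed, dx / internal_speed      # ~500 s versus ~50000 s
The two-orders-of-magnitude gap between those limits is what split-explicit and implicit free-surface schemes exploit.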
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
6,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train a Simple Audio Recognition model for microcontroller use
This notebook demonstrates how to train a 20kb Simple Audio Recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the micro_speech example application.
The model is designed to be used with Google Colaboratory.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Install dependencies
Next, we'll install a GPU build of TensorFlow, so we can use GPU acceleration for training.
Step2: We'll also clone the TensorFlow repository, which contains the scripts that train and freeze the model.
Step3: Load TensorBoard
Now, set up TensorBoard so that we can graph our accuracy and loss as training proceeds.
Step4: Begin training
Next, run the following script to begin training. The script will first download the training data
Step5: Freeze the graph
Once training is complete, run the following cell to freeze the graph.
Step6: Convert the model
Run this cell to use the TensorFlow Lite converter to convert the frozen graph into the TensorFlow Lite format, fully quantized for use with embedded devices.
Step7: The following cell will print the model size, which will be under 20 kilobytes.
Step8: Finally, we use xxd to transform the model into a source file that can be included in a C++ project and loaded by TensorFlow Lite for Microcontrollers. | Python Code:
import os
# A comma-delimited list of the words you want to train for.
# The options are: yes,no,up,down,left,right,on,off,stop,go
# All other words will be used to train an "unknown" category.
os.environ["WANTED_WORDS"] = "yes,no"
# The number of steps and learning rates can be specified as comma-separated
# lists to define the rate at each stage. For example,
# TRAINING_STEPS=15000,3000 and LEARNING_RATE=0.001,0.0001
# will run 18,000 training loops in total, with a rate of 0.001 for the first
# 15,000, and 0.0001 for the final 3,000.
os.environ["TRAINING_STEPS"]="15000,3000"
os.environ["LEARNING_RATE"]="0.001,0.0001"
# Calculate the total number of steps, which is used to identify the checkpoint
# file name.
total_steps = sum(map(lambda string: int(string),
os.environ["TRAINING_STEPS"].split(",")))
os.environ["TOTAL_STEPS"] = str(total_steps)
# Print the configuration to confirm it
!echo "Training these words: ${WANTED_WORDS}"
!echo "Training steps in each stage: ${TRAINING_STEPS}"
!echo "Learning rate in each stage: ${LEARNING_RATE}"
!echo "Total number of training steps: ${TOTAL_STEPS}"
Explanation: Train a Simple Audio Recognition model for microcontroller use
This notebook demonstrates how to train a 20kb Simple Audio Recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the micro_speech example application.
The model is designed to be used with Google Colaboratory.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/train_speech_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/train_speech_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
The notebook runs Python scripts to train and freeze the model, and uses the TensorFlow Lite converter to convert it for use with TensorFlow Lite for Microcontrollers.
Training is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and selecting GPU. Training 18,000 iterations will take 1.5-2 hours on a GPU runtime.
Configure training
The following os.environ lines can be customized to set the words that will be trained for, and the steps and learning rate of the training. The default values will result in the same model that is used in the micro_speech example. Run the cell to set the configuration:
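If it helps to see the staged schedule explicitly, the short sketch below pairs the comma-separated steps with their learning rates once the configuration cell above has been run; the helper is only an optional illustration, not part of the original training flow.
# Optional: view the (steps, learning rate) schedule defined above
steps = [int(s) for s in os.environ["TRAINING_STEPS"].split(",")]
rates = [float(r) for r in os.environ["LEARNING_RATE"].split(",")]
list(zip(steps, rates))   # e.g. [(15000, 0.001), (3000, 0.0001)]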
End of explanation
# Replace Colab's default TensorFlow install with a more recent
# build that contains the operations that are needed for training
!pip uninstall -y tensorflow tensorflow_estimator tensorboard
!pip install -q tf-estimator-nightly==1.14.0.dev2019072901 tf-nightly-gpu==1.15.0.dev20190729
Explanation: Install dependencies
Next, we'll install a GPU build of TensorFlow, so we can use GPU acceleration for training.
End of explanation
# Clone the repository from GitHub
!git clone -q https://github.com/tensorflow/tensorflow
# Check out a commit that has been tested to work
# with the build of TensorFlow we're using
!git -c advice.detachedHead=false -C tensorflow checkout 17ce384df70
Explanation: We'll also clone the TensorFlow repository, which contains the scripts that train and freeze the model.
End of explanation
# Delete any old logs from previous runs
!rm -rf /content/retrain_logs
# Load TensorBoard
%load_ext tensorboard
%tensorboard --logdir /content/retrain_logs
Explanation: Load TensorBoard
Now, set up TensorBoard so that we can graph our accuracy and loss as training proceeds.
End of explanation
!python tensorflow/tensorflow/examples/speech_commands/train.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --silence_percentage=25 --unknown_percentage=25 \
--quantize=1 --verbosity=WARN --how_many_training_steps=${TRAINING_STEPS} \
--learning_rate=${LEARNING_RATE} --summaries_dir=/content/retrain_logs \
--data_dir=/content/speech_dataset --train_dir=/content/speech_commands_train \
Explanation: Begin training
Next, run the following script to begin training. The script will first download the training data:
End of explanation
!python tensorflow/tensorflow/examples/speech_commands/freeze.py \
--model_architecture=tiny_conv --window_stride=20 --preprocess=micro \
--wanted_words=${WANTED_WORDS} --quantize=1 --output_file=/content/tiny_conv.pb \
--start_checkpoint=/content/speech_commands_train/tiny_conv.ckpt-${TOTAL_STEPS}
Explanation: Freeze the graph
Once training is complete, run the following cell to freeze the graph.
End of explanation
!toco \
--graph_def_file=/content/tiny_conv.pb --output_file=/content/tiny_conv.tflite \
--input_shapes=1,49,40,1 --input_arrays=Reshape_2 --output_arrays='labels_softmax' \
--inference_type=QUANTIZED_UINT8 --mean_values=0 --std_dev_values=9.8077
Explanation: Convert the model
Run this cell to use the TensorFlow Lite converter to convert the frozen graph into the TensorFlow Lite format, fully quantized for use with embedded devices.
End of explanation
import os
model_size = os.path.getsize("/content/tiny_conv.tflite")
print("Model is %d bytes" % model_size)
Explanation: The following cell will print the model size, which will be under 20 kilobytes.
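If you want the notebook to fail loudly when that budget is exceeded, an optional check such as the one below can follow the cell above; it only assumes the model_size variable computed there.
# Optional sanity check against the 20 KB budget mentioned above:
assert model_size < 20 * 1024, "model is larger than 20 KB"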
End of explanation
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i /content/tiny_conv.tflite > /content/tiny_conv.cc
# Print the source file
!cat /content/tiny_conv.cc
Explanation: Finally, we use xxd to transform the model into a source file that can be included in a C++ project and loaded by TensorFlow Lite for Microcontrollers.
End of explanation |
6,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing LexisNexis results
Step1: Dictionary
name
Step2: Spreadsheet structure
col1 col2 col3
row1 1 t 2
row2 2 x 5
row3 3 a 5
Step3: Some regular expression matching | Python Code:
# import the modules we need
import os
import re
os.listdir('data')
data = open('data/LexisNexis practice.TXT').read()
# 5 of 54 DOCUMENTS
data.count('of 54 DOCUMENTS')
'This is a string of words'.split('i')
docs = data.split('of 54 DOCUMENTS')
len(docs)
for dnum in [1,2,3,4]:
print('This is doc number {}'.format(dnum))
docs[0]
data = open('data/LexisNexis practice.TXT').read()
docs = data.split('of 54 DOCUMENTS')
dnum=1
for doc in docs[1:]:
# open a new file
output = open('data/doc{}.txt'.format(dnum), 'w')
# write the document to the file
    output.write(doc)
    # close the file so its contents are flushed to disk before moving on
    output.close()
    dnum = dnum + 1
print(docs[3])
for doc in docs[1:]:
print(doc.count('LANGUAGE:'))
# body is the text between LENGTH: number and LANGUAGE:
start = docs[1].find('LENGTH:')
end = docs[1].find('LANGUAGE:')
body = docs[1][start:end]
body
doc_body = []
for doc in docs[1:]:
    start = doc.find('LENGTH:')
    end = doc.find('LANGUAGE:')
    doc_body.append(doc[start:end])
for doc in docs[1:]:
start = doc.find('LENGTH:')
end = doc.find('LANGUAGE:')
pre_body = doc[:start]
body = doc[start:end]
post_body = doc[end:]
import csv
a=[1,2,3,4,'abds','adsds'] # list
a[0]
a[4]
a[2:5]
Explanation: Processing LexisNexis results
End of explanation
d1 = { 'item1': 'This is item 1', 'item2': 'This is item 2'}
d1
d1['item2']
d1['item1']
d1['item3'] = a
d1
d1['item3']
d1['item3'][2]
Explanation: Dictionary
name : value - pairs
key : value - pair
End of explanation
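# Two more dictionary idioms that are handy when looping over LexisNexis fields
# (purely illustrative, reusing the d1 defined above):
for key, value in d1.items():
    print(key, value)
print(d1.get('missing', 'not found'))  # .get avoids a KeyError for absent keys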
data = [
{ 'col1': 1, 'col2': 't', 'col3': 2},
{ 'col1': 2, 'col2': 'x', 'col3': 5},
{ 'col1': 3, 'col2': 'a', 'col3': 5}
]
data
with open('data/test.csv', 'w') as outfile:
out = csv.DictWriter(outfile,
fieldnames=['col1','col2','col3'])
out.writeheader()
out.writerows(data)
open('data/test.txt','w')
fh = open('data/test.csv')
for row in csv.DictReader(fh):
print(row)
csv_data = [r for r in csv.DictReader(open('data/test.csv'))]
csv_data
csv_data[0]
csv_data[1]['col2']
doc_data = []
for doc in docs[1:]:
start = doc.find('LENGTH:')
end = doc.find('LANGUAGE:')
pre_body = doc[:start]
body = doc[start:end]
post_body = doc[end:]
row_dict = { 'pre_body': pre_body,
'body': body,
'post_body': post_body }
doc_data.append(row_dict)
print(doc_data[2]['pre_body'])
with open('data/docs.csv', 'w') as outfile:
out = csv.DictWriter(outfile,
fieldnames=['pre_body','body','post_body'])
out.writeheader()
out.writerows(doc_data)
Explanation: Spreadsheet structure
col1 col2 col3
row1 1 t 2
row2 2 x 5
row3 3 a 5
End of explanation
re.findall('[A-Z]+:',doc, re.MULTILINE)
print('\n\n=====\n\n'.join(doc.split('\n\n')))
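# A hedged sketch: capture upper-case field names (LENGTH:, LANGUAGE:, ...) together
# with the text that follows them on the same line. The pattern is an assumption
# about the LexisNexis layout, not something verified against the practice file.
for name, value in re.findall(r'^([A-Z][A-Z\- ]+):\s*(.*)$', doc, re.MULTILINE):
    print(name, '->', value[:60])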
Explanation: Some regular expression matching
End of explanation |
6,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of the final catalogue of matched sources
We will analyse the changes in the classification using the new sigma and the new catalogue without the galaxies that went to LGZ
Configuration
Load libraries and setup
Step1: General configuration
Step2: Load data
Step3: Join data tables
Step4: Explore and repair data
Step5: Change the AllWISE_input that are '' to 'N/A'. That comes from a previous error with the fill value.
Step6: Plots for the paper
q(m) over n(m) depending on the colour (not used)
Step7: Explanation of the KDE used in the computing of the q(m) and n(m)
Step8: Figure 4 of the paper
Step9: N_LOFAR
Step10: Fractions
Load the full catalogue of galaxies and work out the fraction of matched LOFAR. It will be necessary to work in the restricted area with both the pw and the lofar matched catalogue.
The fractions could be corrected if the relative fractions of sources of each type change between the full area and the restricted area for the LOFAR matches.
Step11: Create the additional columns for the types of matches
Step12: Matched sources
Step13: Non-matched sources
Step14: Diagnostic columns
Step15: Study the 3 repeated sources
3 sources that are in group 1 and 2
Step16: Analyse changes in the matches
Step17: Save data for tests
Step18: Additional description of the data | Python Code:
import numpy as np
from astropy.table import Table, join
from astropy import units as u
from astropy.coordinates import SkyCoord, search_around_sky
from IPython.display import clear_output
import pickle
import os
from mltier1 import (get_center, Field, parallel_process, describe)
%load_ext autoreload
%autoreload
from IPython.display import clear_output
import matplotlib as mpl
# mpl.rc('figure', figsize=(6.64, 6.64*0.74), dpi=100)
# mpl.rc('figure.subplot', left=0.15, right=0.95, bottom=0.15, top=0.92)
# mpl.rc('lines', linewidth=1.75, markersize=8.0, markeredgewidth=0.75)
# mpl.rc('font', size=18.0, family="serif", serif="CM")
# mpl.rc('xtick', labelsize='small')
# mpl.rc('ytick', labelsize='small')
# mpl.rc('xtick.major', width=1.0, size=8)
# mpl.rc('ytick.major', width=1.0, size=8)
# mpl.rc('xtick.minor', width=1.0, size=4)
# mpl.rc('ytick.minor', width=1.0, size=4)
# mpl.rc('axes', linewidth=1.5)
# mpl.rc('legend', fontsize='small', numpoints=1, labelspacing=0.4, frameon=False)
# mpl.rc('text', usetex=True)
# mpl.rc('savefig', dpi=300)
%pylab inline
def most_common(a, n=2):
u, c = np.unique(a, return_counts=True)
order = np.argsort(c)
for i in range(n):
print(c[order][-(i+1)], u[order][-(i+1)])
Explanation: Analysis of the final catalogue of matched sources
We will analyse the changes in the classification using the new sigma and the new catalogue without the galaxies that went to LGZ
Configuration
Load libraries and setup
End of explanation
save_intermediate = True
plot_intermediate = True
idp = "idata/final_analysis_pdf_v1.0"
if not os.path.isdir(idp):
os.makedirs(idp)
Explanation: General configuration
End of explanation
pwli = Table.read("lofar_pw_pdf.fits")
pwli.colnames
lofar_all = Table.read("data/LOFAR_HBA_T1_DR1_merge_ID_optical_v1.1b.fits")
Explanation: Load data
End of explanation
pwl = join(pwli, lofar_all[['Source_Name', 'AllWISE', 'objID', 'ML_LR',
'ID_flag', 'ID_name', 'ID_ra', 'ID_dec',
'LGZ_Size', 'LGZ_Width', 'LGZ_PA', 'LGZ_Assoc',
'LGZ_Assoc_Qual', 'LGZ_ID_Qual']],
join_type='left',
keys='Source_Name',
uniq_col_name='{col_name}{table_name}',
table_names=['', '_input'])
colour_limits = [0.0, 0.5, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.5, 4.0]
bin_list, centers, Q_0_colour, n_m, q_m = pickle.load(open("lofar_params.pckl", "rb"))
Explanation: Join data tables
End of explanation
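# Toy illustration of how the uniq_col_name / table_names arguments used above
# suffix clashing columns (the values here are invented):
from astropy.table import Table, join
t1 = Table({'Source_Name': ['a', 'b'], 'val': [1, 2]})
t2 = Table({'Source_Name': ['a', 'b'], 'val': [3, 4]})
toy = join(t1, t2, keys='Source_Name',
           uniq_col_name='{col_name}{table_name}',
           table_names=['', '_input'])
print(toy.colnames)  # the clashing 'val' column becomes 'val' and 'val_input'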
for col in pwl.colnames:
fv = pwl[col].fill_value
typ = pwl[col].dtype
print(col, fv, typ)
# Restore NaNs
if fv == 1e+20:
pwl[col][(pwl[col] == fv)] = np.nan
# if (isinstance(fv, np.float64) and (fv != 1e+20)):
# print(col, fv)
# pwl[col].fill_value = 1e+20
pwl["colour"][(pwl["colour"] == 1e+20)] = np.nan
describe(pwl["colour"])
Explanation: Explore and repair data
End of explanation
pwl["AllWISE_input"][pwl["AllWISE_input"] == ""] = "N/A"
Explanation: Change the AllWISE_input that are '' to 'N/A'. That comes from a previous error with the fill value.
End of explanation
plt.rcParams["figure.figsize"] = (12,10)
from matplotlib import cm
from matplotlib.collections import LineCollection
cm_subsection = linspace(0., 1., 16)
colors = [ cm.viridis(x) for x in cm_subsection ]
low = np.nonzero(centers[1] >= 15)[0][0]
high = np.nonzero(centers[1] >= 22.2)[0][0]
fig, a = plt.subplots()
for i, q_m_k in enumerate(q_m):
#plot(centers[i], q_m_old[i]/n_m_old[i])
a = subplot(4,4,i+1)
if i not in [-1]:
q_m_aux = q_m[i]/np.sum(q_m[i])
lwidths = (q_m_aux/np.max(q_m_aux)*10).astype(float)
#print(lwidths)
y_aux = q_m_k/n_m[i]
factor = np.max(y_aux[low:high])
y = y_aux
#print(y)
x = centers[i]
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
lc = LineCollection(segments, linewidths=lwidths, color=colors[i])
a.add_collection(lc)
#plot(centers[i], x/factor, color=colors[i-1])
xlim([12, 30])
if i == 0:
xlim([10, 23])
ylim([0, 1.2*factor])
subplots_adjust(left=0.125,
bottom=0.1,
right=0.9,
top=0.9,
wspace=0.4,
hspace=0.2)
Explanation: Plots for the paper
q(m) over n(m) depending on the colour (not used)
End of explanation
save = True
save_pdf = True
plt.rcParams["figure.figsize"] = (6.64, 6.64*0.74)
plt.rcParams["figure.dpi"] = 100
plt.rcParams['lines.linewidth'] = 1.75
plt.rcParams['lines.markersize'] = 8.0
plt.rcParams['lines.markeredgewidth'] = 0.75
plt.rcParams['font.size'] = 15.0 ## not 18.0
plt.rcParams['font.family'] = "serif"
plt.rcParams['font.serif'] = "CM"
plt.rcParams['xtick.labelsize'] = 'small'
plt.rcParams['ytick.labelsize'] = 'small'
plt.rcParams['xtick.major.width'] = 1.0
plt.rcParams['xtick.major.size'] = 8
plt.rcParams['ytick.major.width'] = 1.0
plt.rcParams['ytick.major.size'] = 8
plt.rcParams['xtick.minor.width'] = 1.0
plt.rcParams['xtick.minor.size'] = 4
plt.rcParams['ytick.minor.width'] = 1.0
plt.rcParams['ytick.minor.size'] = 4
plt.rcParams['axes.linewidth'] = 1.5
plt.rcParams['legend.fontsize'] = 'small'
plt.rcParams['legend.numpoints'] = 1
plt.rcParams['legend.labelspacing'] = 0.4
plt.rcParams['legend.frameon'] = False
plt.rcParams['text.usetex'] = True
plt.rcParams['savefig.dpi'] = 300
from matplotlib import cm
from matplotlib.collections import LineCollection
from matplotlib.lines import Line2D
cm_subsection = linspace(0., 1., 16)
colors = [ cm.viridis(x) for x in cm_subsection ]
low = np.nonzero(centers[1] >= 15)[0][0]
high = np.nonzero(centers[1] >= 22.2)[0][0]
cm_subsection = linspace(0., 1., 14)
colors = [ cm.jet(x) for x in cm_subsection ]
fig, a = plt.subplots()
lcs = []
proxies = []
def make_proxy(zvalue, scalar_mappable, **kwargs):
color = scalar_mappable.cmap(scalar_mappable.norm(zvalue))
return Line2D([0, 1], [0, 1], color=color, **kwargs)
for i, q_m_k in enumerate(q_m):
#plot(centers[i], q_m_old[i]/n_m_old[i])
q_m_aux = q_m[i]/np.sum(q_m[i])
lwidths = (q_m_aux/np.max(q_m_aux)*10).astype(float)
if save_pdf and (i<6): # Solve problems with the line with in pdfs
lwidths[lwidths < 0.005] = 0
#print(lwidths)
y_aux = q_m_k/n_m[i]
factor = np.max(y_aux[low:high])
y = y_aux
#print(y)
x = centers[i]
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
#lc = LineCollection(segments, linewidths=lwidths, color=colors[i])
if i not in [0]:
if i == 1:
color = "k"
else:
color = colors[i-2]
lcs.append(LineCollection(segments, linewidths=lwidths, color=color))
proxies.append(Line2D([0, 1], [0, 1], color=color, lw=5))
a.add_collection(lcs[-1])
xlim([16, 26])
ylim([0, 700000])
xlabel("$i$ magnitude")
ylabel("$q(m,c)/n(m,c)$")
inset = plt.axes([0.285, 0.6, .2, .25])
q_m_aux = q_m[0]/np.sum(q_m[0])
lwidths = (q_m_aux/np.max(q_m_aux)*10).astype(float)
y_aux = q_m[0]/n_m[0]
factor = np.max(y_aux[low:high])
y = y_aux
x = centers[0]
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
lc = LineCollection(segments, linewidths=lwidths, color="k")
proxy = Line2D([0, 1], [0, 1], color="k", lw=5)
inset.add_collection(lc)
xlim([15, 23])
ylim([0, 10000])
xlabel("$W1$ magnitude")
ylabel("$q(m)/n(m)$")
inset.legend([proxy], ["only $W1$"], fontsize="xx-small")
#ylim([0, 1.2*factor])
#print(lcs)
# a.legend(proxies, ["$i-W1 < 0.0$",
# "$0.0 \leq i-W1 < 0.5$",
# "$0.5 \leq i-W1 < 1.0$",
# "$1.0 \leq i-W1 < 1.25$",
# "$1.25 \leq i-W1 < 1.5$",
# "$1.5 \leq i-W1 < 1.75$",
# "$1.75 \leq i-W1 < 2.0$",
# "$2.0 \leq i-W1 < 2.25$",
# "$2.25 \leq i-W1 < 2.5$",
# "$2.5 \leq i-W1 < 2.75$",
# "$2.75 \leq i-W1 < 3.0$",
# "$3.0 \leq i-W1 < 3.5$",
# "$3.5 \leq i-W1 < 4.0$",
# "$i-W1 \geq 4.0$"])
# a.legend(proxies, ["only $i$",
# "$i-W1 \in (-\infty, 0.0)$",
# "$i-W1 \in [0.0, 0.5)$",
# "$i-W1 \in [0.5, 1.0)$",
# "$i-W1 \in [1.0, 1.25)$",
# "$i-W1 \in [1.25, 1.5)$",
# "$i-W1 \in [1.5, 1.75)$",
# "$i-W1 \in [1.75, 2.0)$",
# "$i-W1 \in [2.0, 2.25)$",
# "$i-W1 \in [2.25, 2.5)$",
# "$i-W1 \in [2.5, 2.75)$",
# "$i-W1 \in [2.75, 3.0)$",
# "$i-W1 \in [3.0, 3.5)$",
# "$i-W1 \in [3.5, 4.0)$",
# "$i-W1 \in [4.0, \infty)$"],
# fontsize="xx-small")
a.legend(proxies, ["only $i$",
"$(-\infty, 0.0)$",
"$[0.0, 0.5)$",
"$[0.5, 1.0)$",
"$[1.0, 1.25)$",
"$[1.25, 1.5)$",
"$[1.5, 1.75)$",
"$[1.75, 2.0)$",
"$[2.0, 2.25)$",
"$[2.25, 2.5)$",
"$[2.5, 2.75)$",
"$[2.75, 3.0)$",
"$[3.0, 3.5)$",
"$[3.5, 4.0)$",
"$[4.0, \infty)$"],
fontsize="xx-small",
title="$i-W1$")
# subplots_adjust(left=0.15,
# bottom=0.1,
# right=0.9,
# top=0.9,
# wspace=0.4,
# hspace=0.2)
subplots_adjust(left=0.15,
bottom=0.15,
right=0.95,
top=0.92,
wspace=0.4,
hspace=0.2)
if save:
plt.savefig("idata/q_n_m.png")
plt.savefig("idata/q_n_m.svg")
plt.savefig("idata/q_n_m_high.png", dpi=800)
if save_pdf:
plt.savefig("idata/q_n_m.pdf")
Explanation: Explanation of the KDE used in the computing of the q(m) and n(m):
* https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/
Figure 3 of the paper
Help from:
* https://stackoverflow.com/questions/19877666/add-legends-to-linecollection-plot
* https://matplotlib.org/users/mathtext.html#mathtext-tutorial
End of explanation
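# Minimal illustration of the kernel-density-estimate idea referenced above; the
# magnitudes and bandwidth are invented, the real q(m) and n(m) come from
# lofar_params.pckl.
from scipy.stats import gaussian_kde
demo_mags = np.random.normal(20.0, 1.5, size=1000)
kde = gaussian_kde(demo_mags, bw_method=0.1)
grid = np.linspace(15, 25, 200)
density = kde(grid)  # smoothed estimate of the magnitude distribution on the grid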
q0 = np.sum(Q_0_colour)
def completeness(lr, threshold, q0):
n = len(lr)
lrt = lr[lr < threshold]
return 1. - np.sum((q0 * lrt)/(q0 * lrt + (1 - q0)))/float(n)/q0
def reliability(lr, threshold, q0):
n = len(lr)
lrt = lr[lr > threshold]
return 1. - np.sum((1. - q0)/(q0 * lrt + (1 - q0)))/float(n)/q0
completeness_v = np.vectorize(completeness, excluded=[0])
reliability_v = np.vectorize(reliability, excluded=[0])
pwl["lrt"] = pwl["lr"]
pwl["lrt"][np.isnan(pwl["lr"])] = 0
n_test = 100
threshold_mean = np.percentile(pwl["lrt"], 100*(1 - q0))
thresholds = np.arange(0., 10., 0.01)
thresholds_fine = np.arange(0.1, 2., 0.001)
completeness_t = completeness_v(pwl["lrt"], thresholds, q0)
reliability_t = reliability_v(pwl["lrt"], thresholds, q0)
average_t = (completeness_t + reliability_t)/2
completeness_t_fine = completeness_v(pwl["lrt"], thresholds_fine, q0)
reliability_t_fine = reliability_v(pwl["lrt"], thresholds_fine, q0)
average_t_fine = (completeness_t_fine + reliability_t_fine)/2
thresholds_fine[np.argmax(average_t_fine)]
thresholds_fine[np.argmin(np.abs(completeness_t_fine-reliability_t_fine))]
np.sum(pwl["lrt"] >= 0.358)
np.sum(pwl["lrt"] >= 0.639)
np.sum(pwl["lrt"] >= 0.639)
len(pwl)
len(pwl[pwl['ID_flag'] == 1])
n0 = np.sum((pwl["lrt"] >= 0.639) & (pwl['ID_flag'] == 1))
print(n0, n0/len(pwl[pwl['ID_flag'] == 1]))
n0 = np.sum((pwl["lrt"] >= 0.81268) & (pwl['ID_flag'] == 1))
print(n0, n0/len(pwl[pwl['ID_flag'] == 1]))
n0 = np.sum((pwl["ML_LR"] >= 0.81268) & (pwl['ID_flag'] == 1))
print(n0, n0/len(pwl[pwl['ID_flag'] == 1]))
np.sum(pwl["lrt"] >= 0.639)/len(pwl)
threshold_sel = 0.639
thresholds_fine = np.arange(0.0, 2., 0.001)
completeness_t_fine = completeness_v(pwl["lrt"], thresholds_fine, q0)
reliability_t_fine = reliability_v(pwl["lrt"], thresholds_fine, q0)
plt.rcParams["figure.figsize"] = (6.64, 6.64*0.74)
plt.rcParams["figure.dpi"] = 100
plt.rcParams['lines.linewidth'] = 1.75
plt.rcParams['lines.markersize'] = 8.0
plt.rcParams['lines.markeredgewidth'] = 0.75
plt.rcParams['font.size'] = 18.0
plt.rcParams['font.family'] = "serif"
plt.rcParams['font.serif'] = "CM"
plt.rcParams['xtick.labelsize'] = 'small'
plt.rcParams['ytick.labelsize'] = 'small'
plt.rcParams['xtick.major.width'] = 1.0
plt.rcParams['xtick.major.size'] = 8
plt.rcParams['ytick.major.width'] = 1.0
plt.rcParams['ytick.major.size'] = 8
plt.rcParams['xtick.minor.width'] = 1.0
plt.rcParams['xtick.minor.size'] = 4
plt.rcParams['ytick.minor.width'] = 1.0
plt.rcParams['ytick.minor.size'] = 4
plt.rcParams['axes.linewidth'] = 1.5
plt.rcParams['legend.fontsize'] = 'small'
plt.rcParams['legend.numpoints'] = 1
plt.rcParams['legend.labelspacing'] = 0.4
plt.rcParams['legend.frameon'] = False
plt.rcParams['text.usetex'] = True
plt.rcParams['savefig.dpi'] = 300
plot(thresholds_fine, completeness_t_fine, "-", label="Completeness")
plot(thresholds_fine, reliability_t_fine, "-", label="Reliability")
text(0.66, 0.971, "0.639")
#plot(thresholds_fine, average_t_fine, "-", label="average")
vlines(threshold_sel, 0.9, 1., "k", linestyles="dashed", label="Threshold\nselected")
#vlines(threshold_mean, 0.9, 1., "y", linestyles="dashed")
ylim([0.97, 1.])
xlim([0.0, 1.5])
legend(loc=4)
xlabel("Threshold")
ylabel("Completeness/Reliability")
# subplots_adjust(left=0.2,
# bottom=0.1,
# right=0.95,
# top=0.9,
# wspace=0.4,
# hspace=0.2)
subplots_adjust(left=0.15,
bottom=0.15,
right=0.95,
top=0.92,
wspace=0.4,
hspace=0.2)
save = True
if save:
plt.savefig("idata/completeness_reliability.png")
plt.savefig("idata/completeness_reliability.svg")
plt.savefig("idata/completeness_reliability.pdf")
plt.savefig("idata/completeness_reliability_high.png", dpi=800)
Explanation: Figure 4 of the paper: completeness and reliability
End of explanation
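# Quick sanity check: evaluate both quantities at the adopted threshold directly,
# reusing the completeness/reliability functions defined earlier in this section.
print(completeness(pwl["lrt"], 0.639, q0), reliability(pwl["lrt"], 0.639, q0))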
cond_mlr = (pwl['ID_flag'] == 1)
len(pwl)
np.sum(np.isnan(pwl["category"]))
np.sum(np.isnan(pwl["category"][cond_mlr]))
n_c, n_c_mlr = [], []
for i in np.unique(pwl["category"][~np.isnan(pwl["category"])]):
n_c.append(np.sum((pwl["category"] == i)))
n_c_mlr.append(np.sum((pwl["category"][cond_mlr] == i)))
print(i, n_c[-1], n_c_mlr[-1])
total_n_c = np.sum(n_c)
total_n_c_mlr = np.sum(n_c_mlr)
print(total_n_c, total_n_c_mlr)
print(len(pwl)-np.sum(np.isnan(pwl["category"])), len(pwl)-np.sum(np.isnan(pwl["category"][cond_mlr])))
for i in range(16):
print("{:2d} {:6d} {:6d} {:6.3f} {:6.3f} {:.2%}".format(i,
n_c[i],
n_c_mlr[i],
n_c[i]/total_n_c*100,
n_c_mlr[i]/total_n_c_mlr*100,
(n_c[i]-n_c_mlr[i])/n_c[i]
))
total_n_c
Explanation: N_LOFAR
End of explanation
field = Field(170.0, 190.0, 46.8, 55.9)
combined = Table.read("pw.fits")
combined = field.filter_catalogue(combined,
colnames=("ra", "dec"))
combined["colour"] = combined["i"] - combined["W1mag"]
colour_limits = [0.0, 0.5, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.5, 4.0]
combined_panstarrs = (~np.isnan(combined["i"]) & np.isnan(combined["W1mag"])) # Sources with only i-band
combined_wise =(np.isnan(combined["i"]) & ~np.isnan(combined["W1mag"])) # Sources with only W1-band
# Start with the W1-only, i-only and "less than lower colour" bins
colour_bin_def = [{"name":"only W1", "condition": combined_wise},
{"name":"only i", "condition": combined_panstarrs},
{"name":"-inf to {}".format(colour_limits[0]),
"condition": (combined["colour"] < colour_limits[0])}]
# Get the colour bins
for i in range(len(colour_limits)-1):
name = "{} to {}".format(colour_limits[i], colour_limits[i+1])
condition = ((combined["colour"] >= colour_limits[i]) &
(combined["colour"] < colour_limits[i+1]))
colour_bin_def.append({"name":name, "condition":condition})
# Add the "more than higher colour" bin
colour_bin_def.append({"name":"{} to inf".format(colour_limits[-1]),
"condition": (combined["colour"] >= colour_limits[-1])})
combined["category"] = np.nan
for i in range(len(colour_bin_def)):
combined["category"][colour_bin_def[i]["condition"]] = i
numbers_combined_bins = np.array([np.sum(a["condition"]) for a in colour_bin_def])
numbers_combined_bins
pwlf = field.filter_catalogue(pwl[cond_mlr],
colnames=("ra", "dec"))
pwlf_all = field.filter_catalogue(pwl,
colnames=("ra", "dec"))
lofar_c, pw_c, lofar_c_all = [], [], []
avg_colour = []
for i in range(16):
lofar_c.append(np.sum(pwlf["category"] == i))
lofar_c_all.append(np.sum(pwlf_all["category"] == i))
pw_c.append(np.sum(combined["category"] == i))
avg_colour.append(np.nanmedian(combined["colour"][combined["category"] == i]))
print("{:2d} {:6.2f} {:6d} {:7d} {:6.3f} +/- {:6.3f}".format(i, avg_colour[-1],
lofar_c_all[-1], pw_c[-1],
lofar_c_all[-1]/pw_c[-1], np.sqrt(lofar_c_all[-1])/pw_c[-1]))
#print("{:2d} {:6d} {:7d} {:5.3f}".format(i, lofar_c[-1], pw_c[-1], lofar_c[-1]/pw_c[-1]))
np.sum(np.array(lofar_c_all)/np.array(pw_c))
np.sum(np.array(lofar_c_all))/np.sum(np.array(pw_c))
len(pwl)/len(combined)
lc = np.array(lofar_c_all)
pc = np.array(pw_c)
plt.rcParams["figure.figsize"] = (6.64, 6.64*0.74)
plt.rcParams["figure.dpi"] = 100
plt.rcParams['lines.linewidth'] = 1.75
plt.rcParams['lines.markersize'] = 8.0
plt.rcParams['lines.markeredgewidth'] = 0.75
plt.rcParams['font.size'] = 18.0
plt.rcParams['font.family'] = "serif"
plt.rcParams['font.serif'] = "CM"
plt.rcParams['xtick.labelsize'] = 'small'
plt.rcParams['ytick.labelsize'] = 'small'
plt.rcParams['xtick.major.width'] = 1.0
plt.rcParams['xtick.major.size'] = 8
plt.rcParams['ytick.major.width'] = 1.0
plt.rcParams['ytick.major.size'] = 8
plt.rcParams['xtick.minor.width'] = 1.0
plt.rcParams['xtick.minor.size'] = 4
plt.rcParams['ytick.minor.width'] = 1.0
plt.rcParams['ytick.minor.size'] = 4
plt.rcParams['axes.linewidth'] = 1.5
plt.rcParams['legend.fontsize'] = 'small'
plt.rcParams['legend.numpoints'] = 1
plt.rcParams['legend.labelspacing'] = 0.4
plt.rcParams['legend.frameon'] = False
plt.rcParams['text.usetex'] = True
plt.rcParams['savefig.dpi'] = 300
cm_subsection = linspace(0., 1., 14)
colors = [ cm.jet(x) for x in cm_subsection ]
# i-only
plot([0,0], [0,lc[1]/pc[1]*100], marker=",", ls="-", color="k")
scatter([0], lc[1]/pc[1]*100, s=lc[1]/10, c="k")
plot([0], lc[1]/pc[1]*100, marker=".", ls="", color="w", ms=2)
# w1-only
plot([15,15], [0,lc[0]/pc[0]*100], marker=",", ls="-", color="k")
scatter([15], lc[0]/pc[0]*100, s=lc[0]/10, c="k")
plot([15], lc[0]/pc[0]*100, marker=".", ls="", color="w", ms=2)
# colours
for i in range(14):
plot([i+1,i+1], [0,lc[i+2]/pc[i+2]*100], marker=",", ls="-", color=colors[i])
scatter([i+1], lc[i+2]/pc[i+2]*100, s=lc[i+2]/10, c=colors[i])
if i == 0:
plot([i+1], lc[i+2]/pc[i+2]*100, marker=".", ls="", color="w", ms=1)
else:
plot([i+1], lc[i+2]/pc[i+2]*100, marker=".", ls="", color="w", ms=2)
#text(0.66, 0.971, "0.639")
ylim([0., 14.])
xlim([-0.5, 15.5])
#legend(fontsize="small")
xlabel("Colour category")
ylabel("Fraction of galaxies detected by LoTSS\n(per cent)")
xt = ["$i$ only",
"$(-\infty, 0.0)$",
"$[0.0, 0.5)$",
"$[0.5, 1.0)$",
"$[1.0, 1.25)$",
"$[1.25, 1.5)$",
"$[1.5, 1.75)$",
"$[1.75, 2.0)$",
"$[2.0, 2.25)$",
"$[2.25, 2.5)$",
"$[2.5, 2.75)$",
"$[2.75, 3.0)$",
"$[3.0, 3.5)$",
"$[3.5, 4.0)$",
"$[4.0, \infty)$",
"$W1$ only"]
xticks(np.arange(16), xt, rotation=90)
subplots_adjust(left=0.2,
bottom=0.3,
right=0.95,
top=0.9,
wspace=0.4,
hspace=0.2)
save = True
if save:
plt.savefig("idata/fractiono.png")
plt.savefig("idata/fractiono.svg")
plt.savefig("idata/fractiono.pdf")
plt.savefig("idata/fractiono_high.png", dpi=800)
plt.rcParams["figure.figsize"] = (6.64, 6.64*0.74)
plt.rcParams["figure.dpi"] = 100
plt.rcParams['lines.linewidth'] = 1.75
plt.rcParams['lines.markersize'] = 8.0
plt.rcParams['lines.markeredgewidth'] = 0.75
plt.rcParams['font.size'] = 18.0
plt.rcParams['font.family'] = "serif"
plt.rcParams['font.serif'] = "CM"
plt.rcParams['xtick.labelsize'] = 'small'
plt.rcParams['ytick.labelsize'] = 'small'
plt.rcParams['xtick.major.width'] = 1.0
plt.rcParams['xtick.major.size'] = 8
plt.rcParams['ytick.major.width'] = 1.0
plt.rcParams['ytick.major.size'] = 8
plt.rcParams['xtick.minor.width'] = 1.0
plt.rcParams['xtick.minor.size'] = 4
plt.rcParams['ytick.minor.width'] = 1.0
plt.rcParams['ytick.minor.size'] = 4
plt.rcParams['axes.linewidth'] = 1.5
plt.rcParams['legend.fontsize'] = 'small'
plt.rcParams['legend.numpoints'] = 1
plt.rcParams['legend.labelspacing'] = 0.4
plt.rcParams['legend.frameon'] = False
plt.rcParams['text.usetex'] = True
plt.rcParams['savefig.dpi'] = 300
cm_subsection = linspace(0., 1., 14)
colors = [ cm.jet(x) for x in cm_subsection ]
# # i-only
# plot([0,0], [0,lc[1]/pc[1]*100], marker=",", ls="-", color="k")
# scatter([0], lc[1]/pc[1]*100, s=lc[1]/10, c="k")
# plot([0], lc[1]/pc[1]*100, marker=".", ls="", color="w", ms=2)
# # w1-only
# plot([15,15], [0,lc[0]/pc[0]*100], marker=",", ls="-", color="k")
# scatter([15], lc[0]/pc[0]*100, s=lc[0]/10, c="k")
# plot([15], lc[0]/pc[0]*100, marker=".", ls="", color="w", ms=2)
# colours
for i in range(14):
#plot([avg_colour[i+2],avg_colour[i+2]], [0,lc[i+2]/pc[i+2]*100], marker=",", ls="-", color=colors[i])
scatter(avg_colour[i+2], lc[i+2]/pc[i+2]*100, s=lc[i+2]/10, c=colors[i])
if i == 0:
plot(avg_colour[i+2], lc[i+2]/pc[i+2]*100, marker=".", ls="", color="w", ms=1)
else:
plot(avg_colour[i+2], lc[i+2]/pc[i+2]*100, marker=".", ls="", color="w", ms=2)
#text(0.66, 0.971, "0.639")
ylim([0., 14.])
#xlim([-0.5, 13.5])
#legend(fontsize="small")
xlabel("$i-W1$ (magnitude)")
ylabel("Fraction of galaxies detected\nby LoTSS (per cent)")
# xt = ["$(-\infty, 0.0)$",
# "$[0.0, 0.5)$",
# "$[0.5, 1.0)$",
# "$[1.0, 1.25)$",
# "$[1.25, 1.5)$",
# "$[1.5, 1.75)$",
# "$[1.75, 2.0)$",
# "$[2.0, 2.25)$",
# "$[2.25, 2.5)$",
# "$[2.5, 2.75)$",
# "$[2.75, 3.0)$",
# "$[3.0, 3.5)$",
# "$[3.5, 4.0)$",
# "$[4.0, \infty)$"]
# xticks(np.arange(14), xt, rotation=90)
# subplots_adjust(left=0.2,
# bottom=0.1,
# right=0.95,
# top=0.9,
# wspace=0.4,
# hspace=0.2)
subplots_adjust(left=0.15,
bottom=0.15,
right=0.95,
top=0.92,
wspace=0.4,
hspace=0.2)
save = True
if save:
plt.savefig("idata/fraction.png")
plt.savefig("idata/fraction.svg")
plt.savefig("idata/fraction.pdf")
plt.savefig("idata/fraction_high.png", dpi=800)
Explanation: Fractions
Load the full catalogue of galaxies and work out the fraction of matched LOFAR. It will be necessary to work in the restricted area with both the pw and the lofar matched catalogue.
The fractions could be corrected if the relative fractions of sources of each type change between the full area and the restricted area for the LOFAR matches.
End of explanation
threshold_sel = 0.639
cond_mlr = (pwl['ID_flag'] == 1) & (pwl['Maj'] < 30.)
pwlaux = pwl[cond_mlr].filled()
Explanation: Create the additional columns for the types of matches
End of explanation
pwlaux_match = pwlaux[~np.isnan(pwlaux['ML_LR'])]
len(pwlaux_match)
cond_match = (
~np.isnan(pwlaux_match['lr']) &
(pwlaux_match['lr'] >= threshold_sel) &
(
(pwlaux_match["AllWISE_input"] != "N/A") |
~np.isnan(pwlaux_match['objID_input'])
) &
(
(
(pwlaux_match["AllWISE"] == pwlaux_match["AllWISE_input"]) &
(pwlaux_match["objID"] == pwlaux_match["objID_input"]) &
~np.isnan(pwlaux_match["objID"]) &
(pwlaux_match["AllWISE"] != "N/A")
) |
(
(pwlaux_match["AllWISE"] == pwlaux_match["AllWISE_input"]) &
np.isnan(pwlaux_match["objID"])
) |
(
(pwlaux_match["AllWISE"] == "N/A") &
(pwlaux_match["objID"] == pwlaux_match["objID_input"])
)
)
)
m_m = np.sum(cond_match)
print(m_m)
cond_diffmatch = (
~np.isnan(pwlaux_match['lr']) &
(pwlaux_match['lr'] >= threshold_sel) &
(
(pwlaux_match["AllWISE_input"] != "N/A") |
~np.isnan(pwlaux_match['objID_input'])
) &
(
(
(pwlaux_match["AllWISE"] != pwlaux_match["AllWISE_input"]) |
(pwlaux_match["objID"] != pwlaux_match["objID_input"])
)
)
)
m_dm = np.sum(cond_diffmatch)
print(m_dm)
cond_nomatch = (
np.isnan(pwlaux_match['lr']) |
(pwlaux_match['lr'] < threshold_sel)
)
m_nm = np.sum(cond_nomatch)
print(m_nm)
m_nm + m_dm + m_m
217070+512+1021
Explanation: Matched sources
End of explanation
pwlaux_nomatch = pwlaux[np.isnan(pwlaux['ML_LR'])]
len(pwlaux_nomatch)
cond2_match = (
~np.isnan(pwlaux_nomatch['lr']) &
(pwlaux_nomatch['lr'] >= threshold_sel)
)
m2_m = np.sum(cond2_match)
print(m2_m)
cond2_nomatch = (
np.isnan(pwlaux_nomatch['lr']) |
(pwlaux_nomatch['lr'] < threshold_sel)
)
m2_nm = np.sum(cond2_nomatch)
print(m2_nm)
m2_nm + m2_m
m2_nm + m2_m + m_nm + m_dm + m_m
Explanation: Non-matched sources
End of explanation
pwl['match_code'] = 0
pwl['match_code'][np.isin(pwl["Source_Name"], pwl[cond_mlr & ~np.isnan(pwl['ML_LR'])][cond_diffmatch]["Source_Name"])] = 2
pwl['match_code'][np.isin(pwl["Source_Name"], pwl[cond_mlr & ~np.isnan(pwl['ML_LR'])][cond_match]["Source_Name"])]=1
pwl['match_code'][np.isin(pwl["Source_Name"], pwl[cond_mlr & ~np.isnan(pwl['ML_LR'])][cond_nomatch]["Source_Name"])] = 3
pwl['match_code'][np.isin(pwl["Source_Name"], pwl[cond_mlr & np.isnan(pwl['ML_LR'])][cond2_match]["Source_Name"])] = 4
pwl['match_code'][np.isin(pwl["Source_Name"], pwl[cond_mlr & np.isnan(pwl['ML_LR'])][cond2_nomatch]["Source_Name"])] = 5
for i in range(6):
print(i, np.sum(pwl['match_code'] == i))
Explanation: Diagnostic columns
End of explanation
pwl['match_code2'] = 0
pwl['match_code2'][np.isin(pwl["Source_Name"], pwl[cond_mlr & ~np.isnan(pwl['ML_LR'])][cond_match]["Source_Name"])]=1
t = pwl[np.isin(pwl["Source_Name"], pwl[cond_mlr & ~np.isnan(pwl['ML_LR'])][cond_diffmatch]["Source_Name"])]
t[t['match_code2'] != 0][['Source_Name', "AllWISE", "AllWISE_input", "objID", "objID_input"]]
pwl[pwl["objID"] == 164861629587942860]
Explanation: Study the 3 repeated sources
3 sources that are in group 1 and 2
End of explanation
most_common(pwl["AllWISE_input"].filled(), n=10)
Explanation: Analyse changes in the matches
End of explanation
np.sum(~np.isnan(pwl["colour"]) & (pwl["match_code"] == 3))
subplot(1,2,1)
val, bins, _ = hist(pwl["colour"][~np.isnan(pwl["colour"]) & (pwl["match_code"] != 0)],
bins=50, normed=True, alpha=0.5, label="Total")
val, bins, _ = hist(pwl["colour"][~np.isnan(pwl["colour"]) & (pwl["match_code"] == 4)],
bins=bins, normed=True, alpha=0.5, label="New matches")
xlabel("Colour")
ylabel("N normed")
legend()
subplot(1,2,2)
val, bins, _ = hist(pwl["colour"][~np.isnan(pwl["colour"]) & (pwl["match_code"] != 0)],
bins=bins, normed=True, alpha=0.5, label="Total")
val, bins, _ = hist(pwl["colour"][~np.isnan(pwl["colour"]) & (pwl["match_code"] == 2)],
bins=bins, normed=True, alpha=0.5, label="Different match")
xlabel("Colour")
ylabel("N normed")
legend()
Explanation: Save data for tests
End of explanation
describe(pwlaux['ML_LR'])
describe(pwlaux['lr'])
len(pwlaux)
np.sum(
(pwlaux["AllWISE"] != pwlaux["AllWISE_input"]) |
(pwlaux["objID"] != pwlaux["objID_input"])
)
np.sum(
(pwlaux["AllWISE"] != pwlaux["AllWISE_input"]) &
(pwlaux["objID"] != pwlaux["objID_input"])
)
np.sum(
(pwl[cond_mlr].filled()["AllWISE"] != pwl[cond_mlr]["AllWISE_input"]) &
(pwl[cond_mlr].filled()["objID"] == pwl[cond_mlr]["objID_input"])
)
np.sum(
(pwl[cond_mlr].filled()["AllWISE"] == pwl[cond_mlr]["AllWISE_input"]) &
(pwl[cond_mlr].filled()["objID"] != pwl[cond_mlr]["objID_input"])
)
for i in ["AllWISE", "AllWISE_input", "objID", "objID_input"]:
print(i)
most_common(pwl[cond_mlr][i].filled(), n=3)
Q_0_colour
np.sum(Q_0_colour)
Explanation: Additional description of the data
End of explanation |
6,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's take a closer look again on our PC versus Mac menu bar example. Here are the measurements for the first experiments (between-subject, randomized) (we first import some stuff to make plots look nicer etc. press shift enter in the next cell to execute it).
Step1: Now let's try to calculate the mean ... you can just use mean()
Step2: hmmm ... there seems to be a difference, but it's not so big. Let's use point plots to check the data.
Step3: Let's use boxplots to explore the windows and mac data we recorded so far.
use the command boxplot to plot them.
You can also combine the 2 datasets to place them in one plot using data = [windows,mac]
Step4: hmm ... doesn't look significant. Yet, just to make sure, let's apply a t-test. Remember t-tests are for comparing only 2 means with each other, NOT more (also, the assumptions are that the samples are independent, normally distributed and the variance is the same!)
Step5: Ok... doesn't look significant. So we should record more data ;)
Step6: Ok ... let's calculate the means and plot the data.
now perform the t-test (two sided is best). What do you think?
what to do if we have more than 2 samples? (assuming we introduce a 3rd experimental setup where the menu bar is at the bottom of the screen) we need to use ANOVA (again assuming between-subject design, normal distributions etc.) Use the function stats.f_oneway
Step7: usually we don't define all data from the command prompt, but we read in files from disk. | Python Code:
%pylab inline
import matplotlib.pyplot as plt
#use a nicer plotting style
plt.style.use(u'fivethirtyeight')
print(plt.style.available)
#change figure size
pylab.rcParams['figure.figsize'] = (10, 6)
Explanation: Let's take a closer look again on our PC versus Mac menu bar example. Here are the measurements for the first experiments (between-subject, randomized) (we first import some stuff to make plots look nicer etc. press shift enter in the next cell to execute it).
End of explanation
windows = [625, 480, 621, 633]
mac = [647, 503, 559, 586]
# the step above asks for the mean; the spread (standard deviation) is useful too
print(mean(windows), mean(mac))
print(std(windows), std(mac))
Explanation: Now let's try to calculate the mean ... you can just use mean()
End of explanation
plot(windows,"*")
plot(mac,"o")
Explanation: hmmm ... there seems to be a difference, but it's not so big. Let's use point plots to check the data.
End of explanation
data = [windows,mac]
boxplot(data)
xticks([1,2],['windows','mac'])
#save the plot to a file
savefig("boxplot.pdf")
Explanation: Let's use boxplots to explore the windows and mac data we recorded so far.
use the command boxplot to plot them.
You can also combine the 2 datasets to place them in one plot using data = [windows,mac]
End of explanation
from scipy.stats import ttest_ind
from scipy.stats import ttest_rel
import scipy.stats as stats
# independent (unpaired) two-sample t-test
ttest_ind(mac,windows)
# paired (related-samples) t-test -- only appropriate for within-subject designs
ttest_rel(mac,windows)
Explanation: hmm ... doesn't look significant. Yet, just to make sure, let's apply a t-test. Remember t-tests are for comparing only 2 means with each other, NOT more (also, the assumptions are that the samples are independent, normally distributed and the variance is the same!)
End of explanation
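# The note above stresses the equal-variance assumption; Welch's t-test relaxes it.
# (Illustrative only -- with four observations per group the test has little power.)
t_stat, p_value = ttest_ind(mac, windows, equal_var=False)
print(t_stat, p_value)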
more_win = [625, 480, 621, 633,694,599,505,527,651,505]
more_mac = [647, 503, 559, 586, 458, 380, 477, 409, 589,472]
Explanation: Ok... doesn't look significant. So we should record more data ;)
End of explanation
more_win = [625, 480, 621, 633,694,599,505,527,651,505]
more_mac = [647, 503, 559, 586, 458, 380, 477, 409, 589,472]
more_bottom = [485,436, 512, 564, 560, 587, 391, 488, 555, 446]
stats.f_oneway(more_win, more_mac, more_bottom)
boxplot([more_win, more_mac, more_bottom])
xticks([1,2,3],['windows','mac', 'bottom'])
Explanation: Ok ... let's calculate the means and plot the data.
now perform the t-test (two sided is best). What do you think?
what to do if we have more than 2 samples? (assuming we introduce a 3rd experimental setup where the menu bar is at the bottom of the screen) we need to use ANOVA (again assuming between-subject design, normal distributions etc.) Use the function stats.f_oneway
End of explanation
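# If the ANOVA were significant, pairwise comparisons would show which conditions
# differ; a small illustrative sketch with a Bonferroni correction:
from itertools import combinations
groups = {'windows': more_win, 'mac': more_mac, 'bottom': more_bottom}
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = ttest_ind(groups[a], groups[b])
    print(a, 'vs', b, 'raw p =', p, 'Bonferroni p =', min(p * len(pairs), 1.0))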
import pandas as pd
menu_data=pd.read_csv("./data/menu_all.csv")
menu_data.describe()
menu_data.boxplot()
Explanation: usually we don't define all data from the command prompt, but we read in files from disk.
End of explanation |
6,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
import fgm tables
Step1: Function libraries
ResBlock
res_block is the backbone of the ResNet structure. The block has multiple branches, a bottleneck layer and a skip connection built in. This modularized design makes it easy to create deep neural networks.
Step2: data_reader
The read_h5_data function reads the table from the hdf5 file.
In the FGM case we chose not to scale the input features, since they all fall between 0 and 1. There is a great variety in the output features. A good example is the source term for the progress variable, which rises from 0 to 1e5. So the output features are first transformed to logarithmic scale and then rescaled between 0 and 1. The outputs are normalised by their variance. This way the output value will be large where the gradient is great, so during training more focus is put on those samples. The same 'focus design' has been applied to the loss function selection as well: mse is selected over mae because the squared error puts more weight on the data samples that show great changes.
Step3: model
load data
Step4: build neural network model
Step5: model training
gpu training
Step6: TPU training
Step7: Training loss plot
Step8: Inference test
prepare frontend for plotting
Step9: prepare data for plotting
TPU data prepare
Step10: GPU data prepare
Step11: interactive plot | Python Code:
!pip install gdown
!mkdir ./data
import gdown
def data_import():
ids = {
"tables_of_fgm.h5":"1XHPF7hUqT-zp__qkGwHg8noRazRnPqb0"
}
url = 'https://drive.google.com/uc?id='
for title, g_id in ids.items():
try:
output_file = open("/content/data/" + title, 'wb')
gdown.download(url + g_id, output_file, quiet=False)
except IOError as e:
print(e)
finally:
output_file.close()
data_import()
Explanation: import fgm tables
End of explanation
import tensorflow as tf
import keras
from keras.layers import Dense, Activation, Input, BatchNormalization, Dropout, concatenate
from keras import layers
def res_branch(bi, conv_name_base, bn_name_base, scale, input_tensor, n_neuron, stage, block, bn=False):
x_1 = Dense(scale * n_neuron, name=conv_name_base + '2a_'+str(bi))(input_tensor)
if bn:
x_1 = BatchNormalization(axis=-1, name=bn_name_base + '2a_'+str(bi))(x_1)
x_1 = Activation('relu')(x_1)
# x_1 = Dropout(0.)(x_1)
return x_1
def res_block(input_tensor,scale, n_neuron, stage, block, bn=False,branches=0):
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# scale = 2
x = Dense(scale * n_neuron, name=conv_name_base + '2a')(input_tensor)
if bn:
x = BatchNormalization(axis=-1, name=bn_name_base + '2a')(x)
x = Activation('relu')(x)
dp1=0.0
if dp1 >0:
        x = Dropout(0.)(x)
branch_list=[x]
for i in range(branches-1):
branch_list.append(res_branch(i,conv_name_base, bn_name_base, scale,input_tensor,n_neuron,stage,block,bn))
if branches-1 > 0:
x = Dense(n_neuron, name=conv_name_base + '2b')(concatenate(branch_list,axis=-1))
# x = Dense(n_neuron, name=conv_name_base + '2b')(layers.add(branch_list))
else:
x = Dense(n_neuron, name=conv_name_base + '2b')(x)
if bn:
x = BatchNormalization(axis=-1, name=bn_name_base + '2b')(x)
x = layers.add([x, input_tensor])
x = Activation('relu')(x)
if dp1 >0:
        x = Dropout(0.)(x)
return x
Explanation: Function libraries
ResBlock
res_block is the backbone of the ResNet structure. The block has multiple branches, a bottleneck layer and a skip connection built in. This modularized design makes it easy to create deep neural networks.
End of explanation
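# Minimal usage sketch of the block defined above; the layer sizes and branch
# count are arbitrary illustrative choices, not the settings used later on.
from keras.models import Model
demo_in = Input(shape=(3,))
h = Dense(10, activation='relu')(demo_in)
h = res_block(h, scale=2, n_neuron=10, stage=1, block='demo', bn=False, branches=2)
demo_model = Model(inputs=demo_in, outputs=Dense(1)(h))
demo_model.summary()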
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler
class data_scaler(object):
def __init__(self):
self.norm = None
self.norm_1 = None
self.std = None
self.case = None
self.scale = 1
self.bias = 1e-20
# self.bias = 1
self.switcher = {
'min_std': 'min_std',
'std2': 'std2',
'std_min':'std_min',
'min': 'min',
'no':'no',
'log': 'log',
'log_min':'log_min',
'log2': 'log2',
'tan': 'tan'
}
def fit_transform(self, input_data, case):
self.case = case
if self.switcher.get(self.case) == 'min_std':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.norm.fit_transform(input_data)
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'std2':
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'min':
self.norm = MinMaxScaler()
out = self.norm.fit_transform(input_data)
if self.switcher.get(self.case) == 'no':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = input_data
if self.switcher.get(self.case) == 'log':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.std = StandardScaler()
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.norm = MinMaxScaler()
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'log2':
self.norm = MinMaxScaler()
self.norm_1 = MinMaxScaler()
out = self.norm.fit_transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.norm_1.fit_transform(out)
if self.switcher.get(self.case) == 'tan':
self.norm = MaxAbsScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.norm.transform(input_data)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.std.transform(input_data)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'log2':
out = self.norm.transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.norm_1.transform(out)
if self.switcher.get(self.case) == 'tan':
out = self.std.transform(input_data)
out = self.norm.transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def inverse_transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.std.inverse_transform(input_data)
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.inverse_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.norm.inverse_transform(input_data)
out = self.std.inverse_transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.inverse_transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log':
out = self.std.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log_min':
out = self.norm.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log2':
out = self.norm_1.inverse_transform(input_data)
out = np.exp(out) - self.bias
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'tan':
out = (2 * np.pi + self.bias) * np.arctan(input_data)
out = self.norm.inverse_transform(out)
out = self.std.inverse_transform(out)
return out
def read_h5_data(fileName, input_features, labels):
df = pd.read_hdf(fileName)
df = df[df['f']<0.45]
input_df=df[input_features]
in_scaler = data_scaler()
input_np = in_scaler.fit_transform(input_df.values,'no')
label_df=df[labels].clip(0)
# if 'PVs' in labels:
# label_df['PVs']=np.log(label_df['PVs']+1)
out_scaler = data_scaler()
label_np = out_scaler.fit_transform(label_df.values,'std2')
return input_np, label_np, df, in_scaler, out_scaler
Explanation: data_reader
The read_h5_data function reads the table from the hdf5 file.
In the FGM case we chose not to scale the input features, since they all fall between 0 and 1. There is a great variety in the output features. A good example is the source term for the progress variable, which rises from 0 to 1e5. So the output features are first transformed to logarithmic scale and then rescaled between 0 and 1. The outputs are normalised by their variance. This way the output value will be large where the gradient is great, so during training more focus is put on those samples. The same 'focus design' has been applied to the loss function selection as well: mse is selected over mae because the squared error puts more weight on the data samples that show great changes.
End of explanation
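# Tiny round-trip demonstration of the data_scaler defined above (values invented):
demo = np.array([[1.0], [10.0], [100.0]])
sc = data_scaler()
scaled = sc.fit_transform(demo, 'std2')   # the same case used in read_h5_data
recovered = sc.inverse_transform(scaled)  # should reproduce demo up to float error
print(np.allclose(demo, recovered))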
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# define the labels
col_labels=['C2H3', 'C2H6', 'CH2', 'H2CN', 'C2H4', 'H2O2', 'C2H', 'CN',
'heatRelease', 'NCO', 'NNH', 'N2', 'AR', 'psi', 'CO', 'CH4', 'HNCO',
'CH2OH', 'HCCO', 'CH2CO', 'CH', 'mu', 'C2H2', 'C2H5', 'H2', 'T', 'PVs',
'O', 'O2', 'N2O', 'C', 'C3H7', 'CH2(S)', 'NH3', 'HO2', 'NO', 'HCO',
'NO2', 'OH', 'HCNO', 'CH3CHO', 'CH3', 'NH', 'alpha', 'CH3O', 'CO2',
'CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN', 'H', 'N', 'H2O',
'HCCOH', 'HCNN']
# labels = ['T','PVs']
labels = ['T','CH4','O2','CO2','CO','H2O','H2','OH','psi']
# labels = ['CH2OH','HNCO','CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN']
# labels = np.random.choice(col_labels,9,replace=False).tolist()
labels.append('PVs')
# labels = col_labels
print(labels)
input_features=['f','pv','zeta']
# read in the data
x_input, y_label, df, in_scaler, out_scaler = read_h5_data('./data/tables_of_fgm.h5',input_features=input_features, labels = labels)
Explanation: model
load data
End of explanation
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.models import Model
from keras.layers import Dense, Input
from keras.callbacks import ModelCheckpoint
# split into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)
n_neuron = 10
scale=3
branches=5
# %%
print('set up ANN')
# ANN parameters
dim_input = x_train.shape[1]
dim_label = y_train.shape[1]
batch_norm = False
# This returns a tensor
inputs = Input(shape=(dim_input,),name='input_1')
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(n_neuron, activation='relu')(inputs)
# less then 2 res_block, there will be variance
x = res_block(x, scale, n_neuron, stage=1, block='a', bn=batch_norm,branches=branches)
x = res_block(x, scale, n_neuron, stage=1, block='b', bn=batch_norm,branches=branches)
# x = res_block(x, scale, n_neuron, stage=1, block='c', bn=batch_norm,branches=branches)
x = Dense(500, activation='relu')(x)
predictions = Dense(dim_label, activation='linear', name='output_1')(x)
model = Model(inputs=inputs, outputs=predictions)
model.summary()
Explanation: build neural network model
End of explanation
import keras.backend as K
def cubic_loss(y_true, y_pred):
return K.mean(K.square(y_true - y_pred)*K.abs(y_true - y_pred), axis=-1)
def coeff_r2(y_true, y_pred):
from keras import backend as K
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
from keras import optimizers
batch_size = 1024*32
epochs = 200
vsplit = 0.1
loss_type='mse'
adam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=True)
model.compile(loss=loss_type, optimizer=adam_op, metrics=[coeff_r2])
# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])
# checkpoint (save the best model based validate loss)
!mkdir ./tmp
filepath = "./tmp/weights.best.cntk.hdf5"
checkpoint = ModelCheckpoint(filepath,
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=20)
callbacks_list = [checkpoint]
# fit the model
history = model.fit(
x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=vsplit,
verbose=2,
callbacks=callbacks_list,
shuffle=True)
model.save('trained_fgm_nn.h5')
Explanation: model training
gpu training
End of explanation
import os
batch_size = 1024*128
epochs = 100
vsplit = 0.2
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
)
)
tpu_model.compile(
optimizer=tf.train.AdamOptimizer(learning_rate=1e-4),
loss=tf.keras.losses.mae,
metrics=['accuracy']
)
tpu_model.fit(
    x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=vsplit
)
Explanation: TPU training
End of explanation
fig = plt.figure()
plt.semilogy(history.history['loss'])
if vsplit:
plt.semilogy(history.history['val_loss'])
plt.title(loss_type)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
Explanation: Training loss plot
End of explanation
#@title import plotly
import plotly.plotly as py
import numpy as np
from plotly.offline import init_notebook_mode, iplot
# from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter
import plotly.graph_objs as go
def configure_plotly_browser_state():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',
},
});
</script>
'''))
Explanation: Inference test
prepare frontend for plotting
End of explanation
cpu_model = tpu_model.sync_to_cpu()
predict_val = cpu_model.predict(x_test)
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test), columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test), columns=labels)
# note: this overwrites the cpu_model predictions with the original keras model's
predict_val = model.predict(x_test)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
test_data = pd.concat([x_test_df, y_test_df], axis=1)
pred_data = pd.concat([x_test_df, predict_df], axis=1)
test_data.to_hdf('sim_check.H5',key='test')
pred_data.to_hdf('sim_check.H5',key='pred')
df_test=pd.read_hdf('sim_check.H5',key='test')
df_pred=pd.read_hdf('sim_check.H5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
Explanation: prepare data for plotting
TPU data prepare
End of explanation
model.load_weights("./tmp/weights.best.cntk.hdf5")
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)
predict_val = model.predict(x_test,batch_size=1024*8)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
test_data=pd.concat([x_test_df,y_test_df],axis=1)
pred_data=pd.concat([x_test_df,predict_df],axis=1)
!rm sim_check.h5
test_data.to_hdf('sim_check.h5',key='test')
pred_data.to_hdf('sim_check.h5',key='pred')
df_test=pd.read_hdf('sim_check.h5',key='test')
df_pred=pd.read_hdf('sim_check.h5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
Explanation: GPU data prepare
End of explanation
#@title Default title text
# species = 'PVs' #@param {type:"string"}
species = np.random.choice(labels)
# configure_plotly_browser_state()
# init_notebook_mode(connected=False)
from sklearn.metrics import r2_score
df_t=df_test.loc[df_test['zeta']==zeta_level[5]].sample(frac=0.5)
# df_p=df_pred.loc[df_pred['zeta']==zeta_level[1]].sample(frac=0.1)
df_p=df_pred.loc[df_t.index]
error=df_p[species]-df_t[species]
r2=round(r2_score(df_p[species],df_t[species]),4)
fig_db = {
'data': [
{'name':'test data from table',
'x': df_t['f'],
'y': df_t['pv'],
'z': df_t[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
}
},
{'name':'prediction from neural networks',
'x': df_p['f'],
'y': df_p['pv'],
'z': df_p[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
},
{'name':'error in difference',
'x': df_p['f'],
'y': df_p['pv'],
'z': error,
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
}
],
'layout': {
'scene':{
'xaxis': {'title':'mixture fraction'},
'yaxis': {'title':'progress variable'},
'zaxis': {'title': species+'_r2:'+str(r2)}
}
}
}
# iplot(fig_db, filename='multiple-scatter')
iplot(fig_db)
print(species,r2)
df_p['HNO']
%run -i k2tf.py --input_model='trained_fgm_nn.h5' --output_model='exported/fgm.pb'
Explanation: interactive plot
End of explanation |
6,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise Introduction
The cameraman who shot our deep learning videos mentioned a problem that we can solve with deep learning.
He offers a service that scans photographs to store them digitally. He uses a machine that quickly scans many photos. But depending on the orientation of the original photo, many images are digitized sideways. He fixes these manually, looking at each photo to determine which ones to rotate.
In this exercise, you will build a model that distinguishes which photos are sideways and which are upright, so an app could automatically rotate each image if necessary.
If you were going to sell this service commercially, you might use a large dataset to train the model. But you'll have great success with even a small dataset. You'll work with a small dataset of dog pictures, half of which are rotated sideways.
Specifying and compiling the model look the same as in the example you've seen. But you'll need to make some changes to fit the model.
Run the following cell to set up automatic feedback.
Step1: 1) Specify the Model
Since this is your first time, we'll provide some starter code for you to modify. You will probably copy and modify code the first few times you work on your own projects.
There are some important parts left blank in the following code.
Fill in the blanks (marked with ____) and run the cell
Step2: 2) Compile the Model
You now compile the model with the following line. Run this cell.
Step3: That ran nearly instantaneously. Deep learning models have a reputation for being computationally demanding. Why did that run so quickly?
After thinking about this, check your answer by uncommenting the cell below.
Step4: 3) Review the Compile Step
You provided three arguments in the compile step.
- optimizer
- loss
- metrics
Which arguments could affect the accuracy of the predictions that come out of the model? After you have your answer, run the cell below to see the solution.
Step5: 4) Fit Model
Your training data is in the directory ../input/dogs-gone-sideways/images/train. The validation data is in ../input/dogs-gone-sideways/images/val. Use that information when setting up train_generator and validation_generator.
You have 220 images of training data and 217 of validation data. For the training generator, we set a batch size of 10. Figure out the appropriate value of steps_per_epoch in your fit_generator call.
Fill in all the blanks (again marked as ____). Then run the cell of code. Watch as your model trains the weights and the accuracy improves. | Python Code:
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning.exercise_4 import *
print("Setup Complete")
Explanation: Exercise Introduction
The cameraman who shot our deep learning videos mentioned a problem that we can solve with deep learning.
He offers a service that scans photographs to store them digitally. He uses a machine that quickly scans many photos. But depending on the orientation of the original photo, many images are digitized sideways. He fixes these manually, looking at each photo to determine which ones to rotate.
In this exercise, you will build a model that distinguishes which photos are sideways and which are upright, so an app could automatically rotate each image if necessary.
If you were going to sell this service commercially, you might use a large dataset to train the model. But you'll have great success with even a small dataset. You'll work with a small dataset of dog pictures, half of which are rotated sideways.
Specifying and compiling the model look the same as in the example you've seen. But you'll need to make some changes to fit the model.
Run the following cell to set up automatic feedback.
End of explanation
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D
num_classes = ____
resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))
my_new_model.add(Dense(num_classes, activation='softmax'))
# Indicate whether the first layer should be trained/changed or not.
my_new_model.layers[0].trainable = ____
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
num_classes = 2
resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))
my_new_model.add(Dense(num_classes, activation='softmax'))
# Indicate whether the first layer should be trained/changed or not.
my_new_model.layers[0].trainable = False
step_1.assert_check_passed()
# step_1.hint()
# step_1.solution()
Explanation: 1) Specify the Model
Since this is your first time, we'll provide some starter code for you to modify. You will probably copy and modify code the first few times you work on your own projects.
There are some important parts left blank in the following code.
Fill in the blanks (marked with ____) and run the cell
End of explanation
my_new_model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: 2) Compile the Model
You now compile the model with the following line. Run this cell.
End of explanation
# Check your answer (Run this code cell to receive credit!)
step_2.solution()
Explanation: That ran nearly instantaneously. Deep learning models have a reputation for being computationally demanding. Why did that run so quickly?
After thinking about this, check your answer by uncommenting the cell below.
End of explanation
# Check your answer (Run this code cell to receive credit!)
step_3.solution()
Explanation: 3) Review the Compile Step
You provided three arguments in the compile step.
- optimizer
- loss
- metrics
Which arguments could affect the accuracy of the predictions that come out of the model? After you have your answer, run the cell below to see the solution.
End of explanation
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
image_size = 224
data_generator = ImageDataGenerator(preprocess_input)
train_generator = data_generator.flow_from_directory(
directory=____,
target_size=(image_size, image_size),
batch_size=10,
class_mode='categorical')
validation_generator = data_generator.flow_from_directory(
directory=____,
target_size=(image_size, image_size),
class_mode='categorical')
# fit_stats below saves some statistics describing how model fitting went
# the key role of the following line is how it changes my_new_model by fitting to data
fit_stats = my_new_model.fit_generator(train_generator,
steps_per_epoch=____,
validation_data=____,
validation_steps=1)
# Check your answer
step_4.check()
# step_4.solution()
#%%RM_IF(PROD)%%
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
image_size = 224
data_generator = ImageDataGenerator(preprocess_input)
train_generator = data_generator.flow_from_directory(
directory='../input/dogs-gone-sideways/images/train',
target_size=(image_size, image_size),
batch_size=10,
class_mode='categorical')
validation_generator = data_generator.flow_from_directory(
directory='../input/dogs-gone-sideways/images/val',
target_size=(image_size, image_size),
class_mode='categorical')
# fit_stats below saves some statistics describing how model fitting went
# the key role of the following line is how it changes my_new_model by fitting to data
fit_stats = my_new_model.fit_generator(train_generator,
steps_per_epoch=22,
validation_data=validation_generator,
validation_steps=1)
step_4.assert_check_passed()
Explanation: 4) Fit Model
Your training data is in the directory ../input/dogs-gone-sideways/images/train. The validation data is in ../input/dogs-gone-sideways/images/val. Use that information when setting up train_generator and validation_generator.
You have 220 images of training data and 217 of validation data. For the training generator, we set a batch size of 10. Figure out the appropriate value of steps_per_epoch in your fit_generator call.
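A quick cross-check of that arithmetic (a small sketch added here, not part of the original exercise): with 220 training images read in batches of 10, one epoch needs 220 / 10 = 22 batches, which is the steps_per_epoch value used in the solution cell.
import math
train_images = 220    # stated above
batch_size = 10       # matches batch_size in train_generator
print(math.ceil(train_images / batch_size))   # -> 22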
Fill in all the blanks (again marked as ____). Then run the cell of code. Watch as your model trains the weights and the accuracy improves.
End of explanation |
6,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What Happens If I Make a Mistake?
This notebook contains an interactive introduction to the OFTR language.
ZOF Codec
For the first step, we are going to show how to use zof.codec. This is a tool for translating OpenFlow messages from YAML to binary and back again.
First, import zof.codec.
Step1: Let's test zof.codec to make sure it is working.
Step2: The output shows a binary OpenFlow version 1.3 (0x04) message. We can decode this using decode. | Python Code:
import zof.codec
Explanation: What Happens If I Make a Mistake?
This notebook contains an interactive introduction to the OFTR language.
ZOF Codec
For the first step, we are going to show how to use zof.codec. This is a tool for translating OpenFlow messages from YAML to binary and back again.
First, import zof.codec.
End of explanation
'type: FEATURES_REQUEST'.encode('openflow')
Explanation: Let's test zof.codec to make sure it is working.
End of explanation
print(b'\x04\x05\x00\x08\x00\x00\x00\x00'.decode('openflow'))
import zof.codec
def dump(s):
try:
print(s.encode('openflow').decode('openflow'))
except Exception as ex:
print(ex)
dump('''
type: HELLO
version: 1
''')
dump('''
type: FLOW_MOD
msg:
command: ADD
table_id: 0
buffer_id: 7
match:
- field: ETH_DST
value: 00:00:00:00:00:01
''')
dump('''
type: ROLE_request
msg:
role: ROLE_MASTER
generation_id: 0x10
''')
dump('''
type: ROLE_REQUEST
msg:
role: ROLE_MASTER
generation: 0x10
''')
dump('''
type: ROLE_REQUEST
msg:
role: ROLE_MASTER
generation_id: 0x10
extra: 1
''')
dump('''
type: ROLE_REQUEST
msg:
role: 1000
generation_id: 0x10
extra: 1
''')
dump('''
{
"type": "ROLE_REQUEST",
"msg": {
role: ROLE_MASTER,
generation_id: "0x10"
}
}
''')
Explanation: The output shows a binary OpenFlow version 1.3 (0x04) message. We can decode this using decode.
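As a small, assumption-laden aside (not from the original notebook): if the canonical text produced by decode is stable, re-encoding it should reproduce the same bytes.
wire = 'type: FEATURES_REQUEST'.encode('openflow')
text = wire.decode('openflow')
print(wire == text.encode('openflow'))   # expected True if the canonical form round-trips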
End of explanation |
6,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing attention on self driving car
So far we have seen many examples of attention and activation maximization on Dense layers that outputs a probability distribution. What if we have a regression model instead?
In this example, we will use a pretrained self driving car model that predicts the steering angle output. This model is borrowed from https
Step1: Looks good. The negative value is indicative of left steering.
Attention
By default, visualize_saliency and visualize_cam use positive gradients which shows what parts of the image increase the output value. This makes sense for categorical outputs. For regression cases, it is more interesting to see what parts of the image cause the output to
Step2: That was anti-climactic. Let's try grad-CAM. We know that vanilla saliency can be noisy.
Step3: This makes sense. In order to turn right, the left part of the lane contributes the most towards it. I am guessing it is attending to the fact that it curves left and so changing it to curve right would make the network increase the steering angle.
The maintain_steering visualization shows that its current decision is mostly due to the object in the right corner. This is an undesirable behavior which visualizations like these can help you uncover.
The left steering case is intuitive as well. Interestingly, the objects in the room in the far right also provide it a cue to turn left. This means that, even without the lane marker, the network will probably turn away from obstacles. Let's put this hypothesis to the test. Using my awesome photo editing skills, I will remove the lane marker.
Let's see the predicted output first.
Step4: As predicted, it has a left steering output. Lets find out if those objects have anything to do with it. | Python Code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
from model import build_model, FRAME_W, FRAME_H
from keras.preprocessing.image import img_to_array
from vis.utils import utils
model = build_model()
model.load_weights('weights.hdf5')
img = utils.load_img('images/left.png', target_size=(FRAME_H, FRAME_W))
plt.imshow(img)
# Convert to BGR, create input with batch_size: 1.
bgr_img = utils.bgr2rgb(img)
img_input = np.expand_dims(img_to_array(bgr_img), axis=0)
pred = model.predict(img_input)[0][0]
print('Predicted {}'.format(pred))
Explanation: Visualizing attention on self driving car
So far we have seen many examples of attention and activation maximization on Dense layers that outputs a probability distribution. What if we have a regression model instead?
In this example, we will use a pretrained self driving car model that predicts the steering angle output. This model is borrowed from https://github.com/experiencor/self-driving-toy-car. Here is the model in action.
<a href="https://www.youtube.com/watch?v=-v6q2dNZTU8" rel="some text"><p align="center"></p></a>
Let's load the model, weights, etc. and make a prediction.
End of explanation
import matplotlib.cm as cm
from vis.visualization import visualize_saliency, overlay
titles = ['right steering', 'left steering', 'maintain steering']
modifiers = [None, 'negate', 'small_values']
for i, modifier in enumerate(modifiers):
heatmap = visualize_saliency(model, layer_idx=-1, filter_indices=0,
seed_input=bgr_img, grad_modifier=modifier)
plt.figure()
plt.title(titles[i])
# Overlay is used to alpha blend heatmap onto img.
jet_heatmap = np.uint8(cm.jet(heatmap)[..., :3] * 255)
plt.imshow(overlay(img, jet_heatmap, alpha=0.7))
Explanation: Looks good. The negative value is indicative of left steering.
Attention
By default, visualize_saliency and visualize_cam use positive gradients, which show what parts of the image increase the output value. This makes sense for categorical outputs. For regression cases, it is more interesting to see what parts of the image cause the output to:
Increase
Decrease
Maintain
the current predicted value. This is where grad_modifiers shine.
To visualize decrease, we need to consider negative gradients that indicate the decrease. To treat them as positive values (as used by visualization), we need to negate the gradients. This is easily done by using grad_modifier='negate'.
To visualize what is responsible for current output, we need to highlight small gradients (either positive or negative). This can be done by using a grad modifier that performs grads = np.abs(1. / grads) to magnify small positive or negative values. Alternatively, we can use grad_modifier='small_values' which does the same thing.
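As an illustration of that last point, here is a sketch of what such a modifier could look like as a plain function, assuming keras-vis also accepts a callable for grad_modifier (the notebook itself just passes the built-in 'small_values' string; the function name is illustrative and the epsilon is an added assumption to avoid division by zero):
def highlight_small_gradients(grads):
    # Same idea as grad_modifier='small_values': magnify gradients near zero.
    return np.abs(1. / (grads + 1e-7))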
Let's use this knowledge to visualize the parts of the image that cause the car to increase, decrease, or maintain the predicted steering.
End of explanation
from vis.visualization import visualize_cam
for i, modifier in enumerate(modifiers):
heatmap = visualize_cam(model, layer_idx=-1, filter_indices=0,
seed_input=bgr_img, grad_modifier=modifier)
plt.figure()
plt.title(titles[i])
# Overlay is used to alpha blend heatmap onto img.
jet_heatmap = np.uint8(cm.jet(heatmap)[..., :3] * 255)
plt.imshow(overlay(img, jet_heatmap, alpha=0.7))
Explanation: That was anti-climactic. Let's try grad-CAM. We know that vanilla saliency can be noisy.
End of explanation
img = utils.load_img('images/blank.png', target_size=(FRAME_H, FRAME_W))
plt.imshow(img)
# Convert to BGR, create input with batch_size: 1.
bgr_img = utils.bgr2rgb(img)
img_input = np.expand_dims(img_to_array(bgr_img), axis=0)
img_input.shape
pred = model.predict(img_input)[0][0]
print('Predicted {}'.format(pred))
Explanation: This makes sense. In order to turn right, the left part of the lane contributes the most towards it. I am guessing it is attending to the fact that it curves left and so changing it to curve right would make the network increase the steering angle.
The maintain_steering visualization shows that its current decision is mostly due to the object in the right corner. This is an undesirable behavior which visualizations like these can help you uncover.
The left steering case is intuitive as well. Interestingly, the objects in the room in the far right also provide it a cue to turn left. This means that, even without the lane marker, the network will probably turn away from obstacles. Let's put this hypothesis to the test. Using my awesome photo editing skills, I will remove the lane marker.
Let's see the predicted output first.
End of explanation
# We want to use grad_modifier='small_values' to see what is responsible for maintaining current prediction.
heatmap = visualize_cam(model, layer_idx=-1, filter_indices=0,
seed_input=bgr_img, grad_modifier='small_values')
jet_heatmap = np.uint8(cm.jet(heatmap)[..., :3] * 255)
plt.imshow(overlay(img, jet_heatmap, alpha=0.7))
Explanation: As predicted, it has a left steering output. Let's find out if those objects have anything to do with it.
End of explanation |
6,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
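A quick check of that claim (a small sketch, not part of the original project):
# casual + registered should add up to cnt for every row
print((rides['casual'] + rides['registered'] == rides['cnt']).all())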
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
data.head()
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
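For reference, undoing the standardization is just the reverse transformation (a sketch; the same inversion is used later when plotting the test predictions):
mean, std = scaled_features['cnt']
print(data['cnt'].iloc[0] * std + mean)   # first scaled value back in raw ride counts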
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = self.sigmoid
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs # signals from final output layer --> f(x) = x
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1 - hidden_outputs)
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors , hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
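A note on the hint above: since the output activation is $f(x) = x$, its derivative is simply $f'(x) = 1$, so the output-layer error term can be used as-is when updating the hidden-to-output weights.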
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 2000
learning_rate = 0.1
hidden_nodes = 30
output_nodes = 15
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
step = int(epochs / 3)
print("Learning rate: {0}",network.lr)
for e in range(epochs):
if (e % step == 0 and e != 0):
network.lr /= 5
print("\nNew learning rate: {0}",network.lr)
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
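One small way to use that losses dictionary when comparing settings (a sketch, not part of the original notebook):
best = int(np.argmin(losses['validation']))
print(best, losses['validation'][best])   # epoch with the lowest validation loss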
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
6,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple example to compute derivatives of any Fortran routine with DNAD
<hr>
James C. Orr$^1$, Jean-Marie Epitalon$^2$, and James Kermode$^3$
$^1$Laboratoire des Sciences du Climat et de l'Environnement/IPSL, CEA-CNRS-UVSQ, Gif-sur-Yvette, France<br>
$^2$Geoscientific Programming Services, Fanjeax, France<br>
$^3$ Warwick Centre for Predictive Modelling, University of Warwick, UK
21 September 2015<br>
<hr>
Do you want to compute derivatives from an existing fortran code simply and accurately? If so, read on.
There is a simple and accurate way to compute first derivatives from an existing Fortran subroutine with minimal code modification. The method, called Dual Number Automatic Differentiation (DNAD), is described by
<a href="./Yu_Blair_2013_CompPhysComm.pdf">Yu and Blair (2013)</a>. The DNAD approach yields derivatives $\partial y_j / \partial x_i$, where $x_i$ are all input variables ($i = 1,n$) and $y_j$ are all output variables ($j = 1,m$).
To compute the derivatives, one simply needs to change the TYPE definition of the variables (and import the DNAD module). Results are as accurate as the analytical solution, i.e., to machine precision.
For this little demo, we first wrapped the fortran routines in f90wrap to access them in python. To do that, just download the files in the same directory where you found this Jupyter notebook file, and then type make. Then just execute the cells below, where we show how to run the wrapped code in python.
Specify working directory
Step1: Use print() from Python3 instead of print from Python2
Step2: Import numpy and the fortran routines (including the cylinder demo and the DNAD module)
Step3: Specify definitions to use later
Step4: Some documentation
Step5: Initialize d1 and d2 variables, each of the type dual_num (defined in the DNAD module)
Step6: Run subroutine "cylinder" to compute volume $v$, $dv/dr$, and $dv/dh$ | Python Code:
# Working directory (change as needed)
#cylynder_dnad_dir = "/home/my-user-name/etc/"
cylynder_dnad_dir = "."
import sys
sys.path.append(cylynder_dnad_dir)
Explanation: Simple example to compute derivatives of any Fortran routine with DNAD
<hr>
James C. Orr$^1$, Jean-Marie Epitalon$^2$, and James Kermode$^3$
$^1$Laboratoire des Sciences du Climat et de l'Environnement/IPSL, CEA-CNRS-UVSQ, Gif-sur-Yvette, France<br>
$^2$Geoscientific Programming Services, Fanjeax, France<br>
$^3$ Warwick Centre for Predictive Modelling, University of Warwick, UK
21 September 2015<br>
<hr>
Do you want to compute derivatives from an existing fortran code simply and accurately? If so, read on.
There is a simple and accurate way to compute first derivatives from an existing Fortran subroutine with minimal code modification. The method, called Dual Number Automatic Differentiation (DNAD), is described by
<a href="./Yu_Blair_2013_CompPhysComm.pdf">Yu and Blair (2013)</a>. The DNAD approach yields derivatives $\partial y_j / \partial x_i$, where $x_i$ are all input variables ($i = 1,n$) and $y_j$ are all output variables ($j = 1,m$).
To compute the derivatives, one simply needs to change the TYPE definition of the variables (and import the DNAD module). Results are as accurate as the analytical solution, i.e., to machine precision.
For this little demo, we first wrapped the fortran routines in f90wrap to access them in python. To do that, just download the files in the same directory where you found this Jupyter notebook file, and then type make. Then just execute the cells below, where we show how to run the wrapped code in python.
Specify working directory
End of explanation
from __future__ import print_function
Explanation: Use print() from Python3 instead of print from Python2
End of explanation
import numpy as np
import Example # or Example_pkg, as you prefer
Explanation: Import numpy and the fortran routines (including the cylinder demo and the DNAD module)
End of explanation
# Some definitions for later use (thanks to James Kermode)
d1 = Example.Dual_Num_Auto_Diff.Dual_Num()
d2 = Example.Dual_Num_Auto_Diff.Dual_Num()
d3 = Example.Mcyldnad.cyldnad(d1, d2)
Explanation: Specify definitions to use later
End of explanation
cylin=Example.Mcyldnad.cyldnad
print(cylin.__doc__)
Explanation: Some documentation
End of explanation
# Specify radius (r)
d1.x_ad_ = 3
# Specify that we want dv/dr, where v is cylinder volume and r is cylinder radius
d1.xp_ad_ =np.array((1.,0.))
print("d1:", d1)
# Specify height (h)
d2.x_ad_ = 5
# Specify that we want dv/dh, where h is cylinder height
d2.xp_ad_ = np.array((0,1.))
print("d2:", d2)
Explanation: Initialize d1 and d2 variables, each of the type dual_num (defined in the DNAD module):
End of explanation
d3 = Example.Mcyldnad.cyldnad(d1, d2)
# Print computed v, dv/dr, dv/dh (thanks to dual numbers)
print("result:", d3)
Explanation: Run subroutine "cylinder" to compute volume $v$, $dv/dr$, and $dv/dh$
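The dual-number result can be cross-checked against the analytical formulas for a cylinder, using the r = 3, h = 5 values set above (a small added sketch):
r, h = 3.0, 5.0
print("V     =", np.pi * r**2 * h)    # volume
print("dV/dr =", 2 * np.pi * r * h)   # derivative with respect to radius
print("dV/dh =", np.pi * r**2)        # derivative with respect to height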
End of explanation |
6,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example data from the Death Implicit Association Test
Nock, M.K., Park, J.M., Finn, C.T., Deliberto, T.L., Dour, H.J., & Banaji, M.R. (2010). Measuring the suicidal mind
Step1: <blockquote>
Blocks 0,1 & 4 - which contain conditions 'Death,Life', 'Not Me,Me', 'Life,Death' are practice blocks, meaning they do not contain relevant data because they do not contrast the different categories.
</blockquote>
<blockquote>
Therefore, we will enter blocks 2,3,5,6 and conditions 'Life/Not Me,Death/Me', 'Death/Not Me,Life/Me' into analyze_iat.
</blockquote>
<blockquote>
We are entering the "correct" column, which contains 1 for correct and 0 for errors. We could enter the "errors" column and then just set the error_or_correct argument to 'error.'
</blockquote>
<blockquote>
Finally, we have the option to return the total number and percentage of trials that are removed because they are either too fast (default
Step2: output
First 14 columns contain the number of trials - overall, for each condition and for each block - both before and after excluding fast\slow trials
Step3: Next 7 columns contain the number of error trials - overall, within each condition and within each block
Error rates are calculated prior to excluding fast\slow trials but there is an option - errors_after_fastslow_rmvd - that if set to True will remove fast/slow trials prior to calculating error rates
Step4: Next 7 columns contain pct of too fast trials - overall, within each condition and within each block
Step5: Next 7 columns contain pct of too slow trials - overall, within each condition and within each block
Step6: Column 35 contains the number of blocks
Step7: Next 22 columns contain whether a poor performance criterion\cutoff was flagged - across error rates, too fast rates, too slow rates, and number of blocks
Step8: Column 58 contains a 1 if subject passed any poor performance criterion\cutoff
Step9: Columns 59-62 contain D scores for early and late trials and a final overall weighted D score
Step10: Compare D scores with R package "iat"
https
Step11: In the pyiat command above, we entered an argument to return fast-slow stats
This returns the total percentage of too fast and too slow trials across all subjects and across only the unflagged, presumably included, subjects
Step12: Other options
D scores for each stimulus (i.e. each word)
Requires each_stim=True and name of the column containing the stimuli in the stimulus column
Step13: D scores for each word as well as all error and fast\slow trial output
Step14: Unweighted D scores
Step15: This produces less output as it does not report any information on a block basis
Step16: Unweighted D scores for each stimulus
Step17: There are a few more options, including (1) setting the too fast\too slow threshold, (2) setting the cutoffs for flags, (3) reporting errors and too fast\slow trial counts instead of percentages (4) printing the output to an excel spreadsheet.
Step18: biatd1=analyze_iat(biatd,subject='session_id',rt='trial_latency',condition='block_pairing_definition',\
correct='trial_error',error_or_correct='error'\
,cond1='(unnamed)/Death,Me/Life',cond2='(unnamed)/Life,Me/Death'\
,block='block_number',blocks=[0, 1, 2, 3,4,5],biat=True) | Python Code:
d=pd.read_csv('iat_data.csv',index_col=0)
d.head()
#Number of trials per subject
#Note that Subject 1 has too few trials
d.groupby('subjnum').subjnum.count().head()
#Number of subjects in this data set
d.subjnum.unique()
#Conditions
d.condition.unique()
#Blocks
d.block.unique()
#Correct coded as 1, errors coded as 0 in correct column
d.correct.unique()
Explanation: Example data from the Death Implicit Association Test
Nock, M.K., Park, J.M., Finn, C.T., Deliberto, T.L., Dour, H.J., & Banaji, M.R. (2010). Measuring the suicidal mind: Implicit cognition predicts suicidal behavior. Psychological Science, 21(4), 511–517. https://doi.org/10.1177/0956797610364762
pyiat will work with any IAT data.
import data
End of explanation
d1,fs1=analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,block='block',blocks=[2,3,5,6],fastslow_stats=True)
Explanation: <blockquote>
Blocks 0,1 & 4 - which contain conditions 'Death,Life', 'Not Me,Me', 'Life,Death' are practice blocks, meaning they do not contain relevant data because they do not contrast the different categories.
</blockquote>
<blockquote>
Therefore, we will enter blocks 2,3,5,6 and conditions 'Life/Not Me,Death/Me', 'Death/Not Me,Life/Me' into analyze_iat.
</blockquote>
<blockquote>
We are entering the "correct" column, which contains 1 for correct and 0 for errors. We could enter the "errors" column and then just set the error_or_correct argument to 'error.'
</blockquote>
<blockquote>
Finally, we have the option to return the total number and percentage of trials that are removed because they are either too fast (default : 400ms) or too slow (default : 10000ms). This will return the number and percentage across all subjects and across just subjects that do not receive a flag indicating they had poor performance on some metric.
</blockquote>
pyiat
Return weighted d-scores. It will also return all error and too fast/too slow trial information and flags indicating poor performance, as well as the number of blocks.
End of explanation
d1.iloc[:,0:14].head()
Explanation: output
First 14 columns contain the number of trials - overall, for each condition and for each block - both before and after excluding fast\slow trials
End of explanation
d1.iloc[:,14:21].head()
Explanation: Next 7 columns contain the number of error trials - overall, within each condition and within each block
Error rates are calculated prior to excluding fast\slow trials but there is an option - errors_after_fastslow_rmvd - that if set to True will remove fast/slow trials prior to calculating error rates
End of explanation
d1.iloc[:,21:28].head()
Explanation: Next 7 columns contain pct of too fast trials - overall, within each condition and within each block
End of explanation
d1.iloc[:,28:35].head()
Explanation: Next 7 columns contain pct of too slow trials - overall, within each condition and within each block
End of explanation
d1.iloc[:,35].to_frame().head()
Explanation: Column 35 contains the number of blocks
End of explanation
d1.iloc[:,36:58].head()
Explanation: Next 22 columns contain whether a poor performance criterion\cutoff was flagged - across error rates, too fast rates, too slow rates, and number of blocks
End of explanation
d1.iloc[:,58].to_frame().head()
Explanation: Column 58 contains a 1 if subject passed any poor performance criterion\cutoff
End of explanation
d1.iloc[:,59:62].head()
Explanation: Columns 59-62 contain D scores for early and late trials and a final overall weighted D score
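For orientation only, a rough, unweighted sketch of the idea behind a D score for a single subject. This is not pyiat's algorithm (no latency cutoffs, error handling or block weighting is applied), and the sign convention here is arbitrary:
sub = d[d.subjnum == d.subjnum.unique()[0]]
a = sub[sub.condition == 'Life/Not Me,Death/Me'].latency
b = sub[sub.condition == 'Death/Not Me,Life/Me'].latency
print((a.mean() - b.mean()) / pd.concat([a, b]).std())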
End of explanation
#Prepare data to enter into r package - need to have blocks be a string and need to divide data into 2 separate
#dataframes for people that received "Death,Me" first and for those that received "Life,Me" first.
d['block_str']=d.block.astype(str)
d1_r_subn=d[(d.condition=='Death/Not Me,Life/Me')&(d.block>4)].subjnum.unique()
d1_r=d[d.subjnum.isin(d1_r_subn)]
d2_r_subn=d[(d.condition=='Life/Not Me,Death/Me')&(d.block>4)].subjnum.unique()
d2_r=d[d.subjnum.isin(d2_r_subn)]
%R -i d1_r
%R -i d2_r
%%R
dscore_first <- cleanIAT(my_data = d1_r,
block_name = "block_str",
trial_blocks = c("2","3", "5", "6"),
session_id = "subjnum",
trial_latency = "latency",
trial_error = "errors",
v_error = 1, v_extreme = 2, v_std = 1)
dscore_second <- cleanIAT(my_data = d2_r,
block_name = "block_str",
trial_blocks = c("2","3", "5", "6"),
session_id = "subjnum",
trial_latency = "latency",
trial_error = "errors",
v_error = 1, v_extreme = 2, v_std = 1)
r_dsc <- rbind(dscore_first, dscore_second)
%R -o dscore_first
%R -o dscore_second
#Then we need to combine the separate dataframes
#One of these the scores are flipped so need to flip back
dscore_second.IAT=dscore_second.IAT*-1
iat_r_dsc=pd.concat([dscore_first,dscore_second])
iat_r_dsc.index=iat_r_dsc.subjnum
iat_r_dsc=iat_r_dsc.sort_index()
py_r_iat=pd.concat([d1.dscore,iat_r_dsc.IAT],axis=1)
py_r_iat.head()
#Correlation between pyiat (dscore) and R package (IAT) = 1
py_r_iat.corr()
Explanation: Compare D scores with R package "iat"
https://cran.r-project.org/web/packages/IAT/
End of explanation
fs1
Explanation: In the pyiat command above, we entered an argument to return fast-slow stats
This returns the total percentage of too fast and too slow trials across all subjects and across only the unflagged, presumably included, subjects
End of explanation
d2,fs2=analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,block='block',blocks=[2,3,5,6],fastslow_stats=True,each_stim=True,stimulus='trial_word')
Explanation: Other options
D scores for each stimulus (i.e. each word)
Requires each_stim=True and name of the column containing the stimuli in the stimulus column
End of explanation
d2.iloc[:,59:].head()
Explanation: D scores for each word as well as all error and fast\slow trial output
End of explanation
d3,fs3=analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,block='block',blocks=[2,3,5,6],fastslow_stats=True,weighted=False)
Explanation: Unweighted D scores
End of explanation
d3.iloc[:,24:].head()
Explanation: This produces less output as it does not report any information on a block basis
End of explanation
d4,fs4=analyze_iat(d,subject='subjnum',rt='latency',condition='condition',correct='correct'\
,cond1='Death/Not Me,Life/Me',cond2='Life/Not Me,Death/Me'\
,block='block',blocks=[2,3,5,6],fastslow_stats=True,each_stim=True,stimulus='trial_word',weighted=False)
d4.iloc[:,26:].head()
Explanation: Unweighted D scores for each stimulus
End of explanation
dbt=pd.read_csv('_orig_biat.csv',index_col=0)
dbt.head()
db=pd.read_csv('_orig_biat_scored.csv',index_col=0)
Explanation: There are a few more options, including (1) setting the too fast\too slow threshold, (2) setting the cutoffs for flags, (3) reporting errors and too fast\slow trial counts instead of percentages (4) printing the output to an excel spreadsheet.
End of explanation
biatd1=analyze_iat(dbt,subject='session_id',rt='trial_latency',condition='block_pairing_definition',\
correct='trial_error',error_or_correct='error'\
,cond2='(unnamed)/Death,Me/Life',cond1='(unnamed)/Life,Me/Death'\
,block='block_number',blocks=[0, 1, 2, 3,4,5],biat=True,rmv_1st_4trls=True,trl_num='trial_number',\
each_stim=True,stimulus='trial_name')
biatd1
s=['session_id']
s.extend(list(db.filter(like='DS').columns.values))
db[s].head(10)
dscore2=biatd1[biatd1.dscore2.notnull()].index
incld=db[db.session_id.isin(dscore2)][db[db.session_id.isin(dscore2)].DScore1.notnull()].session_id
biatd1[biatd1.index.isin(incld)].dscore2.corr(db[db.session_id.isin(incld)].DScore1)
db.index=db.session_id
t=pd.concat([biatd1.dscore2.apply(lambda x: np.round(x,4)),db.DScore1.apply(lambda x: np.round(x,4))],axis=1)
t.corr()
t[((t.dscore2.notnull())&(t.DScore1.notnull()))].corr()
t.head()
df=dbt[dbt.session_id==2618881218].copy(deep=True)
rt='trial_latency'
df.loc[df[rt]>2000,rt]=2000
df[df[rt]>2000]
df[df[rt]<400]
t[(t.dscore2-t.DScore1)<0]
t[(t.dscore2-t.DScore1)>0]
biatd1.filter(like='dsc')
biatd=pd.read_csv('biat.csv',index_col=0)
biatd1
overall_err_cut=.3
cond_err_cut=.4
block_err_cut=.4
cutoffs=[overall_err_cut,cond_err_cut,cond_err_cut]
cutoffs.extend(list(np.repeat(block_err_cut,len(blocks))))
cutoffs
biatd1
biat
df=biatd
correct='trial_error'
subject='session_id'
condition='block_pairing_definition'
block='block_number'
cond1='(unnamed)/Death,Me/Life'
cond2='(unnamed)/Life,Me/Death'
blocks=[0, 1, 2, 3]
include_blocks=True
flag_outformat='pct'
rt='trial_latency'
fast_rt=400
slow_rt=10000
error_or_correct='error'
weighted=True
errors_after_fastslow_rmvd=False
df_fastslow_rts_rmvd=False
biat=True
var='trial_error'
idx=pd.IndexSlice
outcms=get_error_fastslow_rates(df,correct,subject,condition,block,cond1,cond2,blocks,flag_outformat,include_blocks,\
rt,fast_rt,slow_rt,error_or_correct,weighted,errors_after_fastslow_rmvd,df_fastslow_rts_rmvd,biat)
if flag_outformat=='pct':
all_df=df.groupby(subject)[var].mean()
##By condition
cond1_df=df[(df[condition]==cond1)].groupby(subject)[var].mean()
cond2_df=df[(df[condition]==cond2)].groupby(subject)[var].mean()
##By condition and block
if include_blocks == True:
blcnd=df.groupby([subject,condition,block])[var].mean()
elif flag_outformat=='sum':
all_df=df.groupby(subject)[var].sum()
##By condition
cond1_df=df[(df[condition]==cond1)].groupby(subject)[var].sum()
cond2_df=df[(df[condition]==cond2)].groupby(subject)[var].sum()
##By condition and block
if include_blocks == True:
blcnd=df.groupby([subject,condition,block])[var].sum()
elif flag_outformat=='count':
all_df=df.groupby(subject)[var].count()
##By condition
cond1_df=df[(df[condition]==cond1)].groupby(subject)[var].count()
cond2_df=df[(df[condition]==cond2)].groupby(subject)[var].count()
##By condition and block
if include_blocks == True:
blcnd=df.groupby([subject,condition,block])[var].count()
if (include_blocks == True) and (biat==False):
cond1_bl1=blcnd.loc[idx[:,cond1,[blocks[0],blocks[2]]]]
cond1_bl2=blcnd.loc[idx[:,cond1,[blocks[1],blocks[3]]]]
cond2_bl1=blcnd.loc[idx[:,cond2,[blocks[0],blocks[2]]]]
cond2_bl2=blcnd.loc[idx[:,cond2,[blocks[1],blocks[3]]]]
#Drop block and condidition levels to subtract means
for df_tmp in [cond1_bl1,cond1_bl2,cond2_bl1,cond2_bl2]:
df_tmp.index=df_tmp.index.droplevel([1,2])
out=pd.concat([all_df,cond1_df,cond2_df,cond1_bl1,cond1_bl2,cond2_bl1,cond2_bl2],axis=1)
elif (include_blocks == True) and (biat==True):
if len(blocks)>=2:
cond1_bl1=blcnd.loc[idx[:,cond1,[blocks[0],blocks[1]]]]
cond2_bl1=blcnd.loc[idx[:,cond2,[blocks[0],blocks[1]]]]
for df_tmp in [cond1_bl1,cond2_bl1]:
df_tmp.index=df_tmp.index.droplevel([1,2])
out=pd.concat([all_df,cond1_df,cond2_df,cond1_bl1,cond2_bl1],axis=1)
if len(blocks)>=4:
cond1_bl2=blcnd.loc[idx[:,cond1,[blocks[2],blocks[3]]]]
cond2_bl2=blcnd.loc[idx[:,cond2,[blocks[2],blocks[3]]]]
for df_tmp in [cond1_bl2,cond2_bl2]:
df_tmp.index=df_tmp.index.droplevel([1,2])
out=pd.concat([out,cond1_bl2,cond2_bl2],axis=1)
if len(blocks)==6:
cond1_bl3=blcnd.loc[idx[:,cond1,[blocks[4],blocks[5]]]]
cond2_bl3=blcnd.loc[idx[:,cond2,[blocks[4],blocks[5]]]]
for df_tmp in [cond1_bl3,cond2_bl3]:
df_tmp.index=df_tmp.index.droplevel([1,2])
out=pd.concat([out,cond1_bl3,cond2_bl3],axis=1)
elif include_blocks == False:
out=pd.concat([all_df,cond1_df,cond2_df],axis=1)
out
pd.concat(outcms,axis=1)
biat['correct']=np.abs(1-biat.trial_error)
blcnd=biat.groupby(['session_id','block_pairing_definition','block_number'])['trial_error'].count()
blocks=[0, 1, 2, 3]
cond1='(unnamed)/Death,Me/Life'
cond2='(unnamed)/Life,Me/Death'
cond1_bl1=blcnd.loc[idx[:,cond1,[blocks[0],blocks[2]]]]
cond1_bl2=blcnd.loc[idx[:,cond1,[blocks[1],blocks[3]]]]
cond2_bl1=blcnd.loc[idx[:,cond2,[blocks[0],blocks[2]]]]
cond2_bl2=blcnd.loc[idx[:,cond2,[blocks[1],blocks[3]]]]
cond2_bl2
for df_tmp in [cond1_bl1,cond1_bl2,cond2_bl1,cond2_bl2]:
df_tmp.index=df_tmp.index.droplevel([1,2])
all_df=biat.groupby('session_id')['trial_error'].mean()
Explanation: biatd1=analyze_iat(biatd,subject='session_id',rt='trial_latency',condition='block_pairing_definition',\
correct='trial_error',error_or_correct='error'\
,cond1='(unnamed)/Death,Me/Life',cond2='(unnamed)/Life,Me/Death'\
,block='block_number',blocks=[0, 1, 2, 3,4,5],biat=True)
End of explanation |
6,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: BNU
Source ID: BNU-ESM-1-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
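For instance, a completed call would look like the line below (the name and email are placeholders for illustration only, not actual document authors):
DOC.set_author("Jane Doe", "jane.doe@example.org")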
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
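As an illustration of how an ENUM property is completed (the choice below is hypothetical, not a statement about this model), one of the valid choices listed in the cell above would be recorded with:
DOC.set_value("OASIS3-MCT")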
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
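As an illustration only (the value below is hypothetical, not actual model metadata), a BOOLEAN property such as this one is filled in with one of the two listed literals:
DOC.set_value(False)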
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
6,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intrinsic dispersion
Likelihood minimization of Gaussian Distribution
The data
Step6: Formula to adjust
The Probability to measure x with an error dx
The Theory
The probability p to observe a point i with a value x given its Gaussian error dx and assuming the system has an intrinsic dispersion sigma is | Python Code:
sigma_int = 0.10
mu = -0.5
error = 0.12
error_noise = 0.03 # This means that the errors will be 0.12 +/- 0,03
npoints = 1000
errors = np.random.normal(loc=error, scale=error_noise, size=npoints)
data = np.random.normal(loc=mu, scale=sigma_int, size=npoints) + np.random.normal(loc=0,scale=errors)
fig = mpl.figure(figsize=[13,5])
ax = fig.add_subplot(111)
ax.errorbar(np.arange(npoints), data, yerr=errors, marker="o", ls="None",
mfc=mpl.cm.Blues(0.6,0.6), mec=mpl.cm.binary(0.8,1), ecolor="0.7",
ms=13, mew=2)
Explanation: Intrinsic dispersion
Likelihood minimization of Gaussian Distribution
The data
End of explanation
from scipy.optimize import minimize
from scipy import stats
class WeightedMean( object ):
    """ Class that allows us to fit for a weighted mean including intrinsic dispersion """
    PARAMETERS = [mu, sigma_int]

    def __init__(self, data, errors):
        """ Initialize the class

        Parameters
        ----------
        data: [array]
            measured value
        errors: [array]
            measured errors

        Return
        ------
        Void
        """
        # ---------------
        # Test the input
        if len(data) != len(errors):
            raise ValueError("data and errors must have the same size (given %d vs. %d)"%(len(data),len(errors)))
        self.data = np.asarray(data)
        self.errors = np.asarray(errors)
        self.npoints = len(self.data)

    def set_guesses(self,guesses):
        """ Set the 2 guesses for the fit: mu and sigma_intrinsic

        Return
        ------
        Void
        """
        if len(guesses) != len(self.PARAMETERS):
            raise ValueError("guess must have %d entries (%d given)"%(len(self.PARAMETERS), len(guesses)))
        self.guesses = np.asarray(guesses)

    def fit(self, guesses):
        """ fit the parameters to the data
        This uses scipy.optimize.minimize
        """
        self.set_guesses(guesses)
        self.fitout = minimize(self.minus_loglikelihood, self.guesses)
        print self.fitout

    def minus_loglikelihood(self, parameters):
        """ The sum of the minus loglikelihood used for the fit

        Parameters
        ----------
        parameters: [array]
            list of the values for the free parameters of the model

        Return
        ------
        float (- sum loglikelihood)
        """
        mu, sigma_int = parameters
        return - np.sum( np.log( stats.norm.pdf(self.data, loc=mu, scale=np.sqrt(sigma_int**2 + self.errors**2)) ))
wmean = WeightedMean(data,errors)
wmean.fit([np.mean(data), np.std(data)])  # fit() requires starting guesses for [mu, sigma_int]; these are illustrative
np.sqrt( 2.76810002e-05)
Explanation: Formula to adjust
The Probability to measure x with an error dx
The Theory
The probability p to observe a point i with a value x given its Gaussian error dx and assuming the system has an intrinsic dispersion sigma is:
$$
p = G(x_i\ |\ \mu,\ \sqrt{dx_i^2 + \sigma^2})
$$
where $G$ is the gaussian probability distribution function (pdf).
The Code
In Python you can measure $p$ using the scipy.stats norm class:
```python
from scipy.stats import norm
import numpy as np
p = norm.pdf(x, loc=mu, scale=np.sqrt(dx**2 + sigma**2))
```
The Likelihood of your sample
The likelihood to observe your sample given your model (here $\mu$ and $\sigma$) is simply the product of the
probabilities to observe each point of your sample. The best model will then be the one maximizing the likelihood $\mathcal{L}$:
$$
\mathcal{L} = \prod p_i
$$
In practice we work with the log of the likelihood, so that the formula is based on the sum of the logs of the individual probabilities:
$$
\log\mathcal{L} = \sum \log(p_i)
$$
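As a quick sanity check of these formulas (a sketch assuming the data, errors, mu and sigma_int arrays defined earlier in this notebook), the summed log-likelihood can be evaluated directly at the true parameters:
```python
from scipy import stats
logL = np.sum(np.log(stats.norm.pdf(data, loc=mu, scale=np.sqrt(sigma_int**2 + errors**2))))
print logL  # the fit searches for the (mu, sigma_int) pair that maximizes this quantity
```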
End of explanation |
6,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Master Telefónica Big Data & Analytics
Evaluation Test for Topic 4
Step1: To avoid memory overload or processing-time problems, you can reduce the size of the corpus by modifying the value of the variable n_docs below.
Step2: Next we will load the data into an RDD
Step3: 1. Exercises
Exercise 1 | Python Code:
import nltk
#nltk.download()
mycorpus = nltk.corpus.reuters
Explanation: Master Telefónica Big Data & Analytics
Evaluation Test for Topic 4:
Topic Modelling.
Date: 2016/04/10
To take this test you need to have the virtual machine updated with the most recent version of MLlib.
To update it, follow the steps indicated below:
Steps to update MLlib:
Log into the vm as root:
vagrant ssh
sudo bash
Go to /usr/local/bin
Download the latest version of spark from inside the vm with
wget http://www-eu.apache.org/dist/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz
Unpack it:
tar xvf spark-1.6.1-bin-hadoop2.6.tgz (and delete the tgz)
What follows is a patch, but it is enough to make things work:
Keep a copy of spark-1.3: mv spark-1.3.1-bin-hadoop2.6/ spark-1.3.1-bin-hadoop2.6_old
Create a link to spark-1.6: ln -s spark-1.6.1-bin-hadoop2.6/ spark-1.3.1-bin-hadoop2.6
Libraries
You can use this space to import all the libraries you need for the exam.
0. Acquiring a corpus.
Download the content of the reuters corpus from nltk.
import nltk
nltk.download()
Select the reuters identifier.
End of explanation
n_docs = 500000
filenames = mycorpus.fileids()
fn_train = [f for f in filenames if f[0:5]=='train']
corpus_text = [mycorpus.raw(f) for f in fn_train]
# Reduced dataset:
n_docs = min(n_docs, len(corpus_text))
corpus_text = [corpus_text[n] for n in range(n_docs)]
print 'Loaded {0} files'.format(len(corpus_text))
Explanation: To avoid memory overload or processing-time problems, you can reduce the size of the corpus by modifying the value of the variable n_docs below.
End of explanation
corpusRDD = sc.parallelize(corpus_text, 4)
print "\nRDD created with {0} elements".format(corpusRDD.count())
Explanation: Next we will load the data into an RDD
End of explanation
# Compute RDD replacing tokens by token_ids
corpus_sparseRDD = corpus_wcRDD2.map(lambda x: [(invD[t[0]], t[1]) for t in x])
# Convert list of tuples into Vectors.sparse object.
corpus_sparseRDD = corpus_sparseRDD.map(lambda x: Vectors.sparse(n_tokens, x))
corpus4lda = corpus_sparseRDD.zipWithIndex().map(lambda x: [x[1], x[0]]).cache()
Explanation: 1. Exercises
Exercise 1: Data preprocessing.
Prepare the data to apply a topic modelling algorithm in pyspark. To do so, apply the following steps (an illustrative, non-graded sketch is given after this exercise list):
Tokenization: convert each text to utf-8, and transform the string into a list of tokens.
Homogenization: convert all words to lowercase and remove all non-alphanumeric tokens.
Cleaning: remove all stopwords using the stopword file available in NLTK for the English language.
Save the result in the variable corpus_tokensRDD
Exercise 2: Stemming
Apply a stemming procedure to the corpus, using the SnowballStemmer from NLTK. Save the result in corpus_stemRDD.
Exercise 3: Vectorization
At this point each document in the corpus is a list of tokens.
Compute a new RDD containing, for each document, a list of tuples. The key of each entry will be a token and its value the number of occurrences of that token in the document.
Print a sample of 20 tuples from one of the documents in the corpus.
Exercise 4: Computing the token dictionary
Build, from corpus_wcRDD, a new dictionary with all the tokens in the corpus. The result will be a python dictionary named wcDict, whose keys are the tokens and whose values are the number of occurrences of each token in the whole corpus.
wcDict = {token1: value1, token2: value2, ...}
Print the number of occurrences of the token interpret
Exercise 5: Number of tokens.
Determine the total number of tokens in the dictionary. Print the result.
Exercise 6: Overly frequent terms:
Determine the 5 most frequent tokens in the corpus. Print the result.
Exercise 7: Number of documents of the most frequent token.
Determine in what percentage of documents the most frequent token appears.
Exercise 8: Term filtering.
Remove the two most frequent terms from the corpus. Save the result in a new RDD named corpus_wcRDD2, with the same structure as corpus_wcRDD (that is, each document is a list of tuples).
Exercise 9: Token list and inverse dictionary.
Determine the list of tokens of the whole corpus, and build an inverse dictionary, invD, whose keys are the consecutive numbers from 0 to the total number of tokens, and whose values are each of the tokens, that is
invD = {0: token0, 1: token1, 2: token2, ...}
Exercise 10: LDA algorithm.
To apply the LDA algorithm, the (token, value) tuples of wcRDD must be replaced by tuples of the form (token_id, value), substituting each token with an integer identifier.
The following code takes care of completing this process:
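Separately from the LDA preparation code referenced above, here is one possible sketch of the preprocessing pipeline asked for in Exercise 1 (illustrative only, not the graded solution; it assumes corpusRDD from above and that the NLTK data, including the English stopword list, is available to the Spark workers):
from nltk.corpus import stopwords
stopwords_en = stopwords.words('english')
corpus_tokensRDD = (corpusRDD
    .map(lambda doc: nltk.word_tokenize(doc))                                            # tokenization (utf-8 handling omitted)
    .map(lambda tokens: [t.lower() for t in tokens])                                     # homogenization
    .map(lambda tokens: [t for t in tokens if t.isalnum() and t not in stopwords_en]))   # cleaning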
End of explanation |
6,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use this to keep track of useful code bits as I learn Python
Krista, August 19, 2015
| Shortcut | Action |
|---|---|
| Shift-Enter | run cell |
| Ctrl-Enter | run cell in-place |
| Alt-Enter | run cell, insert below |
Ctrl / (Ctrl and then the slash)...will comment out any selected text within a block of code
Step1: /...this is where I learned to not use pip install with scikit-learn...
To upgrade scikit-learn
Step2: OK...can I get a simple scatter plot?
Step3: Write a function to match RI number and cNumbers | Python Code:
#First up...list the files in a directory
import os,sys
os.listdir(os.getcwd())
#read the CSV file into a data frame and use the pandas head tool to show me the first five rows.
#note that this doesn't seem to work: pd.head(CO_RawData)
CO_RawData=pd.read_csv(mtabFile, index_col='RInumber')
CO_RawData.head(n=5)
#insert an image...the gif file here would be in the folder
from IPython.display import Image
Image(url="R02485.gif")
for x in range(0, 3):
print("hello")
fig.suptitle(CO + ' working') #use the plus sign to concatenate strings for the title
from IPython.core.debugger import Tracer #used this to step into the function and debug it, also need line with Tracer()()
for i, CO in enumerate(CO_withKO):
#if i==2:
#break
kos=CO_withKO[CO]['Related KO']
cos=CO_withKO[CO]['Related CO']
for k in kos:
if k in KO_RawData.index:
kData=KO_RawData.loc[kos].dropna()
kData=(kData.T/kData.sum(axis=1)).T
cData=CO_RawData.loc[cos].dropna()
cData=(cData.T/cData.sum(axis=1)).T
fig, ax=plt.subplots(1)
kData.T.plot(color='r', ax=ax)
cData.T.plot(color='k', ax=ax)
Tracer()()
getKmeans = CcoClust.loc['C01909']['kmeans']
makeStringLabel = CO + '_kmeansCluster_' + str(getKmeans)
#fig.suptitle(CO)
fig.suptitle(makeStringLabel)
#fig.savefig(CO+'.png') #stop saving all the images for now...
break
#here, tData is a pandas data frame that I want to plot into a bar graph
#tData.plot(kind = "bar") ##this would be the code to run if tData existed...
#instead I am reading in the file saved and present in my working directory using this:
from IPython.display import Image
Image(filename="SampleBarGraph.png")
#indexing in Python is a bit bizarre, or at least takes some getting used to.
# df.ix[0,'cNumber'] #this will allow me to mix index from integers with index by label
#other way apparently uses iloc and loc, to use integers and labels respectively
# this would be df.iloc[0].loc['cNumber] {can't get that to work in the if statement}
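# A possible modern equivalent (df.ix is deprecated in newer pandas versions):
# df.iloc[0].loc['cNumber']        # first row by position, then the column by label
# df.loc[df.index[0], 'cNumber']   # same lookup using .loc only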
#ways to subset data...
CcoClust.loc['C05356']['kmeans']
tData = CcoClust.loc['C05356']
type(tData)
#want to select only the first group in the kmeans clusters
#(baby steps, eventually do this for each cluster)
CcoClust[CcoClust.kmeans==1]
Explanation: Use this to keep track of useful code bits as I learn Python
Krista, August 19, 2015
Shortcut Action
Shift-Enter run cell
Ctrl-Enter run cell in-place
Alt-Enter run cell, insert below
Ctrl / (Ctrl and then the slash)...will comment out any selected text within a block of code
End of explanation
import sklearn.cluster
#from sklearn.cluster import KMeans
silAverage = [0.4227, 0.33299, 0.354, 0.3768, 0.3362, 0.3014, 0.3041, 0.307, 0.313, 0.325,
0.3109, 0.2999, 0.293, 0.289, 0.2938, 0.29, 0.288, 0.3, 0.287]
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: /...this is where I learned to not use pip install with scikit-learn...
To upgrade scikit-learn:
conda update scikit-learn
End of explanation
plt.scatter(range(0,len(silAverage)), silAverage)
plt.grid() #put on a grid
plt.xlim(-1,20)
#get list of column names in pandas data frame
list(my_dataframe.columns.values)
for i in range(0,len(ut)):
if i == 10:
break
p = ut.iloc[i,:]
n = p.name
if n[0] == 'R':
#do the plotting,
#print 'yes'
CO = p.KEGG
kos = CO_withKO[CO]['Related KO']
cos = CO_withKO[CO]['Related CO']
#Tracer()()
for k in kos:
if k in KO_RawData.index:
kData=KO_RawData.loc[kos].dropna()
kData=(kData.T/kData.sum(axis=1)).T
#? why RawData, the output from the K-means will have the normalized data, use that for CO
#bc easier since that is the file I am working with right now.
#cData=CO_RawData.loc[cos].dropna()
#cData=(cData.T/cData.sum(axis=1)).T
cData = pd.DataFrame(p[dayList]).T
#go back and check, but I think this next step is already done
#cData=(cData.T/cData.sum(axis=1)).T
fig, ax=plt.subplots(1)
kData.T.plot(color='r', ax=ax)
cData.T.plot(color='k', ax=ax)
else:
#skip over the KO plotting, so effectively doing nothing
#print 'no'
pass  # an else branch with only comments is a SyntaxError; pass keeps the block valid
Explanation: OK...can I get a simple scatter plot?
End of explanation
def findRInumber(dataIn,KEGGin):
#find possible RI numbers for a given KEGG number.
for i,KEGG in enumerate(dataIn['KEGG']):
if KEGG == KEGGin:
t = dataIn.index[i]
print t
#For example: this will give back one row, C18028 will be multiple
m = findRInumber(forRelatedness,'C00031')
m
#to copy a matrix I would think this works: NOPE
#forRelatedness = CcoClust# this is NOT making a new copy...
#instead it makes a new pointing to an existing data frame. So you now have two ways to
#reference the same data frame. Make a change with one term and you can see the same change
#using the other name. Odd. No idea why you would want that.
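# A hedged illustration of the point above: .copy() makes an independent DataFrame,
# whereas plain assignment only adds a second name for the same object.
# forRelatedness = CcoClust.copy()   # changes to one no longer show up in the other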
##this is the test that finally let me understand enumerate
# for index, KEGG in enumerate(useSmall['KEGG']):
# print index,KEGG
# Windows
import webbrowser  # needed for webbrowser.get below
chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
url = "http://www.genome.jp/dbget-bin/www_bget?cpd:C00019"
webbrowser.get(chrome_path).open_new(url)
#while a nice idea, this stays open until you close the web browser window.
from IPython.display import HTML
tList = ['C02265','C00001']
for i in tList:
ml = '<iframe src = http://www.genome.jp/dbget-bin/www_bget?cpd:' + i + ' width=700 height=350></iframe>'
print ml
from IPython.display import HTML
CO='C02265'
HTML('<iframe src = http://www.genome.jp/dbget-bin/www_bget?cpd:' + CO + ' width=700 height=350></iframe>')
Explanation: Write a function to match RI number and cNumbers
End of explanation |
6,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying classifiers to Shalek2013
We're going to use the classifier knowledge that we've learned so far and apply it to the shalek2013 and macaulay2016 datasets.
For the GO analysis, we'll need a few other packages
Step3: Utility functions for gene ontology and SVM decision boundary plotting
Step4: Read in the Shalek2013 data
Step5: Side note
Step6: Assign the variable lps_response_genes based on the gene ids pulled out from this subset
Step7: For this analysis We want to compare the difference between the "mature" and "immature" cells in the Shalek2013 data.
Step8: Use only the genes that are substantially expressed in single cells
Step9: Now because computers only understand numbers, we'll convert the category labels "mature" and "immature" into integers using a LabelEncoder. Let's look at that column again, only for mature cells
Step10: Run the classifier!!
Yay so now we can run a classifier!
Step11: We'll use PCA or ICA to reduce our data for visualizing the SVM decision boundary. Stick to 32 or fewer components because the next steps will die if you use more than 32. Also, this n_components variable will be used later so pay attention
Step12: Let's add the group identifier here for plotting
Step13: And plot our components in
Step14: Now we'll make a dataframe of 20 equally spaced intervals to show the full range of the data
Step15: Add zero to the top (the reason for this will make sense later)
Step16: You'll notice that the top (head()) has the minimum values and the bottom (tail()) has the maximum values
Step17: Just to convince ourselves that this actually shows the range of all values, let's plot the smushed intervals and the smushed data in the same spot
Step18: Now we'll make a 2-dimensional grid of the whole space so we can plot the decision boundary
Step19: Let's plot the grid so we can see it!
Step20: Convert the smushed area into unsmushed high dimensional space
Step21: Get the surface of the decision function
Step22: Evaluating classifiers through Gene Ontology (GO) Enrichment
Gene ontology is a tree (aka directed acyclic graph or "dag") of gene annotations. The topmost node is the most general, and the bottommost nodes are the most specific. Here is an example GO graph.
Three GO Domains
Step23: GOEA Step 2
Step24: GOEA Step 3
Step25: GOEA Step 4
Step26: GOEA Step 5
Step27: Exercise 1
Try the same analysis, but use ICA instead of PCA.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Why does the reduction algorithm affect the visualization of the classification?
Could you use MDS or t-SNE for plotting of the classifier boundary? Why or why not?
Try the same analysis, but use the "LPS Response" genes and a dimensionality reduction algorithm of your choice. (... how do you subset only certain columns out of the dataframe?)
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
For (1) and (2) above, also try using a radial basis kernel (kernel="rbf") for SVC.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Decision trees
Step28: Macaulay2016 | Python Code:
# Alphabetical order is standard
# We're doing "import superlongname as abbrev" for our laziness - this way we don't have to type out the whole thing each time.
# From python standard library
import collections
# Python plotting library
import matplotlib.pyplot as plt
# Numerical python library (pronounced "num-pie")
import numpy as np
# Dataframes in Python
import pandas as pd
# Statistical plotting library we'll use
import seaborn as sns
sns.set(style='whitegrid')
# Label processing
from sklearn import preprocessing
# Matrix decomposition
from sklearn.decomposition import PCA, FastICA
# Matrix decomposition
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
# Manifold learning
from sklearn.manifold import MDS, TSNE
# Gene ontology
import goatools
import mygene
# This is necessary to show the plotted figures inside the notebook -- "inline" with the notebook cells
%matplotlib inline
Explanation: Applying classifiers to Shalek2013
We're going to use the classifier knowledge that we've learned so far and apply it to the shalek2013 and macaulay2016 datasets.
For the GO analysis, we'll need a few other packages:
mygene for looking up the gene ontology categories of genes
goatools for performing gene ontology enrichment analysis
fishers_exact_test for goatools
Use the following commands at your terminal to install the packages. Some of them are on Github so it's important to get the whole command right.
$ pip install mygene
$ pip install git+git://github.com/olgabot/goatools.git
$ pip install git+https://github.com/brentp/fishers_exact_test.git
End of explanation
def plot_svc_decision_function(clf, ax=None):
Plot the decision function for a 2D SVC
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([[xi, yj]])
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
GO_KEYS = 'go.BP', 'go.MF', 'go.CC'
def parse_mygene_output(mygene_output):
Convert mygene.querymany output to a gene id to go term mapping (dictionary)
Parameters
----------
mygene_output : dict or list
Dictionary (returnall=True) or list (returnall=False) of
output from mygene.querymany
Output
------
gene_name_to_go : dict
Mapping of gene name to a set of GO ids
# if "returnall=True" was specified, need to get just the "out" key
if isinstance(mygene_output, dict):
mygene_output = mygene_output['out']
gene_name_to_go = collections.defaultdict(set)
for line in mygene_output:
gene_name = line['query']
for go_key in GO_KEYS:
try:
go_terms = line[go_key]
except KeyError:
continue
if isinstance(go_terms, dict):
go_ids = set([go_terms['id']])
else:
go_ids = set(x['id'] for x in go_terms)
gene_name_to_go[gene_name] |= go_ids
return gene_name_to_go
Explanation: Utility functions for gene ontology and SVM decision boundary plotting
End of explanation
metadata = pd.read_csv('../data/shalek2013/metadata.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
expression = pd.read_csv('../data/shalek2013/expression.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
expression_feature = pd.read_csv('../data/shalek2013/expression_feature.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
# creating new column indicating color
metadata['color'] = metadata['maturity'].map(
lambda x: 'MediumTurquoise' if x == 'immature' else 'Teal')
metadata.loc[metadata['pooled'], 'color'] = 'black'
# Create a column indicating both maturity and pooled for coloring with seaborn, e.g. sns.pairplot
metadata['group'] = metadata['maturity']
metadata.loc[metadata['pooled'], 'group'] = 'pooled'
# Create a palette and ordering for using with sns.pairplot
palette = ['MediumTurquoise', 'Teal', 'black']
order = ['immature', 'mature', 'pooled']
metadata
Explanation: Read in the Shalek2013 data
End of explanation
subset = expression_feature.query('gene_category == "LPS Response"')
subset.head()
Explanation: Side note: getting LPS response genes using query
Get the "LPS response genes" using a query:
End of explanation
lps_response_genes = subset.index
lps_response_genes
Explanation: Assign the variable lps_response_genes based on the gene ids pulled out from this subset:
End of explanation
singles_ids = [x for x in expression.index if x.startswith('S')]
singles = expression.loc[singles_ids]
singles.shape
Explanation: For this analysis We want to compare the difference between the "mature" and "immature" cells in the Shalek2013 data.
End of explanation
singles = singles.loc[:, (singles > 1).sum() >= 3]
singles.shape
Explanation: Use only the genes that are substantially expressed in single cells
End of explanation
singles_maturity = metadata.loc[singles.index, 'maturity']
singles_maturity
# Instantiate the encoder
encoder = preprocessing.LabelEncoder()
# Get number of categories and transform "mature"/"immature" to numbers
target = encoder.fit_transform(singles_maturity)
target
Explanation: Now because computers only understand numbers, we'll convert the category labels "mature" and "immature" into integers using a LabelEncoder. Let's look at that column again, only for mature cells:
End of explanation
from sklearn.svm import SVC
classifier = SVC(kernel='linear')
classifier.fit(singles, target)
Explanation: Run the classifier!!
Yay so now we can run a classifier!
End of explanation
n_components = 3
smusher = PCA(n_components=n_components)
smushed = pd.DataFrame(smusher.fit_transform(singles), index=singles.index)
print(smushed.shape)
smushed.head()
singles.head()
Explanation: We'll use PCA or ICA to reduce our data for visualizing the SVM decision boundary. Stick to 32 or fewer components because the next steps will die if you use more than 32. Also, this n_components variable will be used later so pay attention :)
End of explanation
smushed_with_group = smushed.join(metadata['group'])
smushed_with_group
Explanation: Let's add the group identifier here for plotting:
End of explanation
sns.pairplot(smushed_with_group, hue='group', palette=palette,
hue_order=order, plot_kws=dict(s=100, edgecolor='white', linewidth=2))
Explanation: And plot our components in
End of explanation
n_intervals = 10
smushed_intervals = pd.DataFrame(smushed).apply(lambda x: pd.Series(np.linspace(x.min(), x.max(), n_intervals)))
print(smushed_intervals.shape)
smushed_intervals.head()
Explanation: Now we'll make a dataframe of 20 equally spaced intervals to show the full range of the data:
End of explanation
smushed_intervals = pd.concat([pd.DataFrame([[0, 0, 0]]), smushed_intervals], ignore_index=True)
print(smushed_intervals.shape)
smushed_intervals.head()
Explanation: Add zero to the top (the reason for this will make sense later)
End of explanation
smushed_intervals.tail()
Explanation: You'll notice that the top (head()) has the minimum values and the bottom (tail()) has the maximum values:
End of explanation
fig, ax = plt.subplots()
ax.scatter(smushed_intervals[0], smushed_intervals[1], color='pink')
ax.scatter(smushed[0], smushed[1], color=metadata['color'])
Explanation: Just to convince ourselves that this actually shows the range of all values, let's plot the smushed intervals and the smushed data in the same spot:
End of explanation
low_d_grid = np.meshgrid(*[smushed_intervals[col] for col in smushed_intervals])
print(len(low_d_grid))
print([x.shape for x in low_d_grid])
Explanation: Now we'll make a 2-dimensional grid of the whole space so we can plot the decision boundary
End of explanation
fig, ax = plt.subplots()
ax.scatter(low_d_grid[0], low_d_grid[1], color='pink')
ax.scatter(smushed[0], smushed[1], color=metadata['color'])
new_nrows = smushed_intervals.shape[0]**n_components
new_ncols = n_components
low_dimensional_vectors = pd.DataFrame(
np.concatenate([x.flatten() for x in low_d_grid]).reshape(new_nrows, new_ncols, order='F'))
print(low_dimensional_vectors.shape)
low_dimensional_vectors.head()
Explanation: Let's plot the grid so we can see it!
End of explanation
smusher = PCA(n_components=n_components).fit(singles)  # assumed completion: the original cell was left blank; this refits the PCA used above
smusher.components_.shape
high_dimensional_vectors = smusher.inverse_transform(low_dimensional_vectors)
high_dimensional_vectors.shape
Explanation: Convert the smushed area into unsmushed high dimensional space
End of explanation
low_d_grid[0].shape
decision_surface = classifier.decision_function(high_dimensional_vectors)
print(decision_surface.shape)
decision_surface = decision_surface.reshape(low_d_grid[0].shape, order='F')
decision_surface.shape
.shape
import itertools
low_d_grid[0].shape
pairgrid = sns.pairplot(smushed_with_group, hue='group')
for i, j in itertools.permutations(range(n_components), 2):
ax = pairgrid.axes[i, j]
# Commands to get decision surface
z_coords = [':' if x in (i, j) else -1 for x in range(n_components)]
z_command = 'Z[{}]'.format(', '.join(map(str, z_coords)))
Z_subset = eval(z_command)
print('Z_subset.shape', Z_subset.shape)
# Z_smushed = pd.DataFrame(smusher.fit_transform(Z_subset))
# print('Z_smushed.shape', Z_smushed.shape)
# print('Z_smushed.head()', Z_smushed.head())
ax.contour(smushed_intervals[i], smushed_intervals[j], Z_subset, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
fig, ax = plt.subplots()
ax.scatter(reduced_data[:, 0], reduced_data[:, 1], c=target, cmap='Dark2')
ax.contour(X, Y, Z, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
np.reshape?
Explanation: Get the surface of the decision function
End of explanation
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
# Show the filename
obo_fname
Explanation: Evaluating classifiers through Gene Ontology (GO) Enrichment
Gene ontology is a tree (aka directed acyclic graph or "dag") of gene annotations. The topmost node is the most general, and the bottommost nodes are the most specific. Here is an example GO graph.
Three GO Domains:
Cellular Component (CC)
Molecular Function (MF)
Biological Process (BP)
Perform GO enrichment analysis (GOEA)
GOEA Step 1: Download GO graph file of "obo" type (same for all species)
This will download the file "go-basic.obo" if it doesn't already exist. This only needs to be done once.
End of explanation
obo_dag = goatools.obo_parser.GODag(obo_file='go-basic.obo')
Explanation: GOEA Step 2: Create the GO graph (same for all species)
End of explanation
# Initialize the "mygene.info" (http://mygene.info/) interface
mg = mygene.MyGeneInfo()
mygene_output = mg.querymany(expression.columns,
scopes='symbol', fields=['go.BP', 'go.MF', 'go.CC'], species='mouse',
returnall=True)
gene_name_to_go = parse_mygene_output(mygene_output)
Explanation: GOEA Step 3: Get gene ID to GO id mapping (species-specific and experiment-specific)
Here we are establishing the background for our GOEA. Defining your background is very important because, for example, tehre are lots of neural genes so if you use all human genes as background in your study of which genes are upregulated in Neuron Type X vs Neuron Type Y, you'll get a bunch of neuron genes (which is true) but not the smaller differences between X and Y. Typicall, you use all expressed genes as the background.
For our data, we can access all expressed genes very simply by getting the column names in the dataframe: expression.columns.
End of explanation
go_enricher = goatools.GOEnrichmentStudy(expression.columns, gene_name_to_go, obo_dag)
Explanation: GOEA Step 4: Create a GO enrichment calculator object go_enricher (species- and experiment-specific)
In this step, we are using the two objects we've created (obo_dag from Step 2 and gene_name_to_go from Step 3) plus the gene ids to create a go_enricher object
End of explanation
genes_of_interest = list(expression.columns[:5])  # placeholder choice -- the original cell left this blank
results = go_enricher.run_study(genes_of_interest)  # the original 'go.run_study(genes[:5])' referred to undefined names
go_enrichment = pd.DataFrame([r.__dict__ for r in results])
go_enrichment.head()
import pandas.util.testing as pdt
pdt.assert_numpy_array_equal(two_d_space_v1, two_d_space_v2)
two_d_space.shape
plt.scatter(two_d_space[:, 0], two_d_space[:, 1], color='black')
expression.index[:10]
clf = ExtraTreesClassifier(n_estimators=100000, n_jobs=-1, verbose=1)
expression.index.duplicated()
expression.drop_duplicates()
# assoc = pd.read_table('danio-rerio-gene-ontology.txt').dropna()
# assoc_df = assoc.groupby('Ensembl Gene ID').agg(lambda s: ';'.join(s))
# assoc_s = assoc_df['GO Term Accession'].apply(lambda s: set(s.split(';')))
# assoc_dict = assoc_s.to_dict()
import goatools
# cl = gene_annotation.sort(col, ascending=False)[gene_annotation[col] > 5e-4].index
g = goatools.GOEnrichmentStudy(list(gene_annotation.index), assoc_dict, obo_dag, study=list(cl))
for r in g.results[:25]:
print r.goterm.id, '{:.2}'.format(r.p_bonferroni), r.ratio_in_study, r.goterm.name, r.goterm.namespace
unsmushed = smusher.inverse_transform(two_d_space)
Z = classifier.decision_function(unsmushed)
Z = Z.reshape(xx.shape)
fig, ax = plt.subplots()
ax.scatter(reduced_data[:, 0], reduced_data[:, 1], c=target, cmap='Dark2')
ax.contour(X, Y, Z, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
Explanation: GOEA Step 5: Calculate go enrichment!!! (species- and experiment-specific)
Now we are ready to run go enrichment!! Let's take our enriched genes of interest and run them through the enrichment study.
End of explanation
def visualize_tree(estimator, X, y, smusher, boundaries=True,
xlim=None, ylim=None):
estimator.fit(X, y)
smushed = smusher.fit_transform(X)
if xlim is None:
xlim = (smushed[:, 0].min() - 0.1, smushed[:, 0].max() + 0.1)
if ylim is None:
ylim = (smushed[:, 1].min() - 0.1, smushed[:, 1].max() + 0.1)
x_min, x_max = xlim
y_min, y_max = ylim
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
two_d_space = np.c_[xx.ravel(), yy.ravel()]
unsmushed = smusher.inverse_transform(two_d_space)
Z = estimator.predict(unsmushed)
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, alpha=0.2, cmap='Paired')
plt.clim(y.min(), y.max())
# Plot also the training points
plt.scatter(smushed[:, 0], smushed[:, 1], c=y, s=50, cmap='Paired')
plt.axis('off')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.clim(y.min(), y.max())
# Plot the decision boundaries
def plot_boundaries(i, xlim, ylim):
if i < 0:
return
tree = estimator.tree_
if tree.feature[i] == 0:
plt.plot([tree.threshold[i], tree.threshold[i]], ylim, '-k')
plot_boundaries(tree.children_left[i],
[xlim[0], tree.threshold[i]], ylim)
plot_boundaries(tree.children_right[i],
[tree.threshold[i], xlim[1]], ylim)
elif tree.feature[i] == 1:
plt.plot(xlim, [tree.threshold[i], tree.threshold[i]], '-k')
plot_boundaries(tree.children_left[i], xlim,
[ylim[0], tree.threshold[i]])
plot_boundaries(tree.children_right[i], xlim,
[tree.threshold[i], ylim[1]])
if boundaries:
plot_boundaries(0, plt.xlim(), plt.ylim())
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
from sklearn.decomposition import PCA, FastICA
from sklearn.manifold import TSNE, MDS
smusher = PCA(n_components=2)
# reduced_data = smusher.fit_transform(singles+1)
visualize_tree(classifier, singles, np.array(target), smusher)
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
classifier = RandomForestClassifier()
smusher = PCA(n_components=2)
# reduced_data = smusher.fit_transform(singles+1)
visualize_tree(classifier, singles, np.array(target), smusher, boundaries=False)
classifier = ExtraTreesClassifier()
smusher = PCA(n_components=2)
# reduced_data = smusher.fit_transform(singles+1)
visualize_tree(classifier, singles, np.array(target), smusher, boundaries=False)
Explanation: Exercise 1
Try the same analysis, but use ICA instead of PCA.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Why does the reduction algorithm affect the visualization of the classification?
Could you use MDS or t-SNE for plotting of the classifier boundary? Why or why not?
Try the same analysis, but use the "LPS Response" genes and a dimensionality reduction algorithm of your choice. (... how do you subset only certain columns out of the dataframe?)
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
For (1) and (2) above, also try using a radial basis kernel (kernel="rbf") for SVC.
How does that change the classification?
How does it change the enriched genes?
Are the cells closer or farther from the decision boundary?
Is that a "better" or "worse" classification? Why?
Decision trees
End of explanation
pd.options.display.max_columns = 50
macaulay2016_metadata = pd.read_csv('../4._Case_Study/macaulay2016/sample_info_qc.csv', index_col=0)
macaulay2016_metadata.head()
macaulay2016_cluster_names = tuple(sorted(macaulay2016_metadata['cluster'].unique()))
macaulay2016_cluster_names
macaulay2016_target = macaulay2016_metadata['cluster'].map(lambda x: macaulay2016_cluster_names.index(x))
macaulay2016_target
macaulay2016_expression = pd.read_csv('../4._Case_Study/macaulay2016/gene_expression_s.csv', index_col=0).T
macaulay2016_expression.head()
macaulay2016_expression_filtered = macaulay2016_expression[[x for x in macaulay2016_expression if x.startswith("ENS")]]
macaulay2016_expression_filtered.shape
macaulay2016_expression_filtered = macaulay2016_expression_filtered.loc[macaulay2016_metadata.index]
macaulay2016_expression_filtered = 1e6*macaulay2016_expression_filtered.divide(macaulay2016_expression_filtered.sum(axis=1), axis=0)
macaulay2016_expression_filtered.head()
macaulay2016_expression_filtered = np.log10(macaulay2016_expression_filtered+1)
macaulay2016_expression_filtered.head()
macaulay2016_expression_filtered = macaulay2016_expression_filtered.loc[:, (macaulay2016_expression_filtered > 1).sum() >=3]
macaulay2016_expression_filtered.shape
# classifier = SVC(kernel='linear')
# classifier = DecisionTreeClassifier(max_depth=10)
classifier = ExtraTreesClassifier(n_estimators=1000)
classifier.fit(macaulay2016_expression_filtered, macaulay2016_target)
smusher = FastICA(n_components=2, random_state=0)
smushed_data = smusher.fit_transform(macaulay2016_expression_filtered)
x_min, x_max = smushed_data[:, 0].min(), smushed_data[:, 0].max()
y_min, y_max = smushed_data[:, 1].min(), smushed_data[:, 1].max()
delta_x = 0.05 * abs(x_max - x_min)
delta_y = 0.05 * abs(x_max - x_min)
x_min -= delta_x
x_max += delta_x
y_min -= delta_y
y_max += delta_y
X = np.linspace(x_min, x_max, 100)
Y = np.linspace(y_min, y_max, 100)
xx, yy = np.meshgrid(X, Y)
two_d_space = np.c_[xx.ravel(), yy.ravel()]
two_d_space
high_dimensional_space = smusher.inverse_transform(two_d_space)
# Get the class boundaries
Z = classifier.predict(high_dimensional_space)
import matplotlib as mpl
macaulay2016_metadata['cluster_color_hex'] = macaulay2016_metadata['cluster_color'].map(lambda x: mpl.colors.rgb2hex(eval(x)))
int_to_cluster_name = dict(zip(range(len(macaulay2016_cluster_names)), macaulay2016_cluster_names))
int_to_cluster_name
cluster_name_to_color = dict(zip(macaulay2016_metadata['cluster'], macaulay2016_metadata['cluster_color_hex']))
cluster_name_to_color
macaulay2016_palette = [mpl.colors.hex2color(cluster_name_to_color[int_to_cluster_name[i]])
for i in range(len(macaulay2016_cluster_names))]
macaulay2016_palette
cmap = mpl.colors.ListedColormap(macaulay2016_palette)
cmap
x_min, x_max
y = macaulay2016_target
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, alpha=0.2, cmap=cmap)
plt.clim(y.min(), y.max())
# Plot also the training points
plt.scatter(smushed_data[:, 0], smushed_data[:, 1], s=50, color=macaulay2016_metadata['cluster_color_hex'],
edgecolor='k') #c=macaulay2016_target, s=50, cmap='Set2')
plt.axis('off')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.clim(y.min(), y.max())
smusher = FastICA(n_components=4, random_state=354)
smushed_data = pd.DataFrame(smusher.fit_transform(macaulay2016_expression_filtered))
# x_min, x_max = smushed_data[:, 0].min(), smushed_data[:, 0].max()
# y_min, y_max = smushed_data[:, 1].min(), smushed_data[:, 1].max()
# delta_x = 0.05 * abs(x_max - x_min)
# delta_y = 0.05 * abs(x_max - x_min)
# x_min -= delta_x
# x_max += delta_x
# y_min -= delta_y
# y_max += delta_y
# X = np.linspace(x_min, x_max, 100)
# Y = np.linspace(y_min, y_max, 100)
# xx, yy = np.meshgrid(X, Y)
# low_dimensional_space = np.c_[xx.ravel(), yy.ravel()]
# low_dimensional_space
smushed_data.max() - smushed_data.min()
grid = smushed_data.apply(lambda x: pd.Series(np.linspace(x.min(), x.max(), 50)))
grid.head()
# grid = [x.ravel() for x in grid]
# grid
# low_dimensional_space = np.concatenate(grid, axis=0)
# low_dimensional_space.shape
# # low_dimensional_space = low_dimensional_space.reshape(shape)
x1, x2, x3, x4 = np.meshgrid(*[grid[col] for col in grid])
low_dimensional_space = np.c_[x1.ravel(), x2.ravel(), x3.ravel(), x4.ravel()]
high_dimensional_space = smusher.inverse_transform(low_dimensional_space)
smushed_data['hue'] = macaulay2016_target  # the original line was truncated ('macau'); assuming the cluster labels were intended
sns.pairplot(smushed_data)
Explanation: Macaulay2016
End of explanation |
6,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intorduction to PyMC2
Balint Szoke
Installation
Step1: Probabilistic model
Suppose you have a sample ${y_t}_{t=0}^{T}$ and want to characeterize it by the following probabilistic model; for $t\geq 0$
$$ y_{t+1} = \rho y_t + \sigma_x \varepsilon_{t+1}, \quad \varepsilon_{t+1}\stackrel{iid}{\sim}\cal{N}(0,1) $$
with the initial value $y_0 \sim {\cal N}\left(0, \frac{\sigma_x^2}{1-\rho^2}\right)$ and suppose the following (independent) prior beliefs for the parameters $\theta \equiv (\rho, \sigma_x)$
- $\rho \sim \text{U}(-1, 1)$
- $\sigma_x \sim \text{IG}(a, b)$
Aim
Step2: Probabilistic models in pymc
Model instance $\approx$ collection of random variables linked together according to some rules
Linkages (hierarchical structure)
Step3: 2) Determinsitic variable
Step4: (b) Conditional mean of $y_t$, $\mu_y$, is a deterministic function of $\rho$ and $y_{t-1}$
Step5: Let's see the parents of y0_stdev...
Step6: Notice that this is a dictionary, so for example...
Step7: ... and as we alter the parent's value, the child's value changes accordingly
Step8: and similarly for mu_y
Step9: How to tell pymc what you 'know' about the data?
We define the data as a stochastic variable with fixed values and set the observed flag equal to True
For the sample $y^T$, depending on the question at hand, we might want to define
- either $T + 1$ scalar random variables
- or a scalar $y_0$ and a $T$-vector valued $Y$
In the current setup, as we fix the value of $y$ (observed), it doesn't really matter (approach A is easier). However, if we have an array-valued stochastic variable with mutable value, the restriction that we cannot update the values of stochastic variables' in-place becomes onerous in the sampling step (where the step method should propose array-valued variable). Straight from the pymc documentation
Step10: Notice that the value of this variable is fixed (even if the parent's value changes)
Step11: (B) $T+1$ scalar random variables
Define an array with dtype=object, fill it with scalar variables (use loops) and define it as a pymc.Container (this latter step is not necessary, but based on my experience Container types work much more smoothly in the blocking step when we are sampling).
Step12: Currently, this is just a numpy array of pymc.Deterministic functions. We can make it a pymc object by using the pymc.Container type.
Step13: and the pymc methods are applied element-wise.
Create a pymc.Model instance
Remember that it is just a collection of random variables (Stochastic and Deterministic), hence
Step14: This object have very limited awareness of the structure of the probabilistic model that it describes and does not itslef possess methods for updating the values in the sampling methods.
Fitting the model to the data (MCMC algorithm)
MCMC algorithms
The joint prior distribution is sitting on an $N$-dimensional space, where $N$ is the number of parameters we are about to make inference on (see the figure below). Looking at the data through the probabilistic model deform the prior surface into the posterior surface, that we need to explore. In principle, we could naively search this space by picking random points in $\mathbb{R}^N$ and calculate the corresponding posterior value (Monte Carlo methods), but a more efficient (especially in higher dimensions) way is to do Markov Chain Monte Carlo (MCMC), which is basically an intelligent way of discovering the posterior surface.
MCMC is an iterative procedure
Step15: Notice that the step_methods are not assigned yet
Step16: You can specify them now, or if you call the sample method, pymc will assign the step_methods automatically according to some rule
Step17: ... and you can check what kind of step methods have been assigned (the default in most cases is the Metropolis step method for non-observed stochastic variables, while in case of observed stochastics, we simply draw from the prior)
Step18: The sample can be reached by the trace method (use the names you used at the initialization not the python name -- useful if the two coincide)
Step19: Then this is just a numpy array, so you can do different sort of things with it. For example plot
Step20: Acutally, you don't have to waste your time on construction different subplots. pymc's built-in plotting functionality creates pretty informative plots for you (baed on matplotlib). On the figure below
- Upper left subplot
Step21: For a non-graphical summary of the posterior use the stats() method | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import pymc as pm
import seaborn as sb
import matplotlib.pyplot as plt
Explanation: Intorduction to PyMC2
Balint Szoke
Installation:
>> conda install pymc
End of explanation
def sample_path(rho, sigma, T, y0=None):
'''
Simulates the sample path for y of length T+1 starting from a specified initial value OR if y0
is None, it initializes the path with a draw from the stationary distribution of y.
Arguments
-----------------
rho (Float) : AR coefficient
sigma (Float) : standard deviation of the error
T (Int) : length of the sample path without x0
y0 (Float) : initial value of X
Return:
-----------------
y_path (Numpy Array) : simulated path
'''
if y0 == None:
stdev_erg = sigma / np.sqrt(1 - rho**2)
y0 = np.random.normal(0, stdev_erg)
y_path = np.empty(T+1)
y_path[0] = y0
eps_path = np.random.normal(0, 1, T)
for t in range(T):
y_path[t + 1] = rho * y_path[t] + sigma * eps_path[t]
return y_path
#-------------------------------------------------------
# Pick true values:
rho_true, sigma_x_true, T = 0.5, 1.0, 20
#np.random.seed(1453534)
sample = sample_path(rho_true, sigma_x_true, T)
Explanation: Probabilistic model
Suppose you have a sample ${y_t}_{t=0}^{T}$ and want to characeterize it by the following probabilistic model; for $t\geq 0$
$$ y_{t+1} = \rho y_t + \sigma_x \varepsilon_{t+1}, \quad \varepsilon_{t+1}\stackrel{iid}{\sim}\cal{N}(0,1) $$
with the initial value $y_0 \sim {\cal N}\left(0, \frac{\sigma_x^2}{1-\rho^2}\right)$ and suppose the following (independent) prior beliefs for the parameters $\theta \equiv (\rho, \sigma_x)$
- $\rho \sim \text{U}(-1, 1)$
- $\sigma_x \sim \text{IG}(a, b)$
Aim: given the statistical model and the prior $\pi(\theta)$ we want to ''compute'' the posterior distribution $p\left( \theta \hspace{1mm} | \hspace{1mm} y^T \right)$ associated with the sample $y^T$.
How: if no conjugate form available, sample from $p\left( \theta \hspace{1mm} | \hspace{1mm} y^T \right)$ and learn about the posterior's properties from that sample
Remark: We go from the prior $\pi$ to the posterior $p$ by using Bayes rule:
\begin{equation}
p\left( \theta \hspace{1mm} | \hspace{1mm} y^T \right) = \frac{f( y^T \hspace{1mm}| \hspace{1mm}\theta) \pi(\theta) }{f( y^T)}
\end{equation}
The first-order autoregression implies that the likelihood function of $y^T$ can be factored as follows:
$$ f(y^T \hspace{1mm}|\hspace{1mm} \theta) = f(y_T| y_{T-1}; \theta)\cdot f(y_{T-1}| y_{T-2}; \theta) \cdots f(y_1 | y_0;\theta )\cdot f(y_0 |\theta) $$
where for all $t\geq 1$
$$ f(y_t | y_{t-1}; \theta) = {\mathcal N}(\rho y_{t-1}, \sigma_x^2) = {\mathcal N}(\mu_t, \sigma_x^2)$$
Generate a sample with $T=100$ for known parameter values:
$$\rho = 0.5\quad \sigma_x = 1.0$$
End of explanation
# Priors:
rho = pm.Uniform('rho', lower = -1, upper = 1) # note the capitalized distribution name (rule for pymc distributions)
sigma_x = pm.InverseGamma('sigma_x', alpha = 3, beta = 1)
# random() method
print('Initialization:')
print("Current value of rho = {: f}".format(rho.value.reshape(1,)[0]))
print("Current logprob of rho = {: f}".format(rho.logp))
rho.random()
print('\nAfter redrawing:')
print("Current value of rho = {: f}".format(rho.value.reshape(1,)[0]))
print("Current logprob of rho = {: f}".format(rho.logp))
Explanation: Probabilistic models in pymc
Model instance $\approx$ collection of random variables linked together according to some rules
Linkages (hierarchical structure):
parent: variables that influence another variable
e.g. $\rho$ and $\sigma_x$ are parents of $y_0$, $a$ and $b$ are parents of $sigma_x$
child: variables that are affected by other variables (subjects of parent variables)
e.g. $y_t$ is a child of $y_{t-1}$, $\rho$ and $\sigma_x$
Why are they useful?
child variable's current value automatically changes whenever its parents' values change
Random variables:
have a value attribute producing the current internal value (given the values of the parents)
computed on-demand and cached for efficiency.
other important attributes: parents (gives dictionary), children (gives a set)
Two main classes of random variables in pymc:
1) Stochastic variable:
variable whose value is not completely determined by its parents
Examples:
parameters with a given distribution
observable variables (data) = particular realizations of a random variable (see below)
treated by the back end as random number generators (see built-in random() method)
logp attribute: evaluate the logprob (mass or density) at the current value; for vector-valued variables it returns the sum of the (joint) logprob
Initialization:
define the distribution (built-in or your own) with name + params of the distribution (can be pymc variable)
optional flags:
value: for a default initial value; if not specified, initialized by a draw from the given distribution
size: for multivariate array of independent stochastic variables. (Alternatively: use array as a distribution parameter)
Initialize stochastic variables
End of explanation
@pm.deterministic(trace = False)
def y0_stdev(rho = rho, sigma = sigma_x):
return sigma / np.sqrt(1 - rho**2)
# Alternatively:
#y0_stdev = pm.Lambda('y0_stdev', lambda r = rho, s = sigma_x: s / np.sqrt(1 - r**2) )
Explanation: 2) Determinsitic variable:
variable that is entirely determined by its parents
''exact functions'' of stochastic variables, however, we can treat them as a variable and not a Python function.
Examples:
model implied restrictions on how the parameters and the observable variables are related
$\text{var}(y_0)$ is a function of $\rho$ and $\sigma_x$
$\mu_{t}$ is an exact function of $\rho$ and $y_{t-1}$
sample statistics, i.e. deterministic functions of the sample
Initialization:
decorator form:
Python function of stochastic variables AND default values + the decorator pm.deterministic
elementary operations (no need to write a function or decorate): $+$, $-$, $*$, $/$
pymc.Lambda
Initialize deterministic variables:
(a) Standard deviation of $y_0$ is a deterministic function of $\rho$ and $\sigma$
End of explanation
# For elementary operators simply write
mu_y = rho * sample[:-1]
print(type(mu_y))
# You could also write, to generate a list of Determinisitc functions
#MU_y = [rho * sample[j] for j in range(T)]
#print(type(MU_y))
#print(type(MU_y[1]))
#MU_y = pm.Container(MU_y)
#print(type(MU_y))
Explanation: (b) Conditional mean of $y_t$, $\mu_y$, is a deterministic function of $\rho$ and $y_{t-1}$
End of explanation
y0_stdev.parents
Explanation: Let's see the parents of y0_stdev...
End of explanation
y0_stdev.parents['rho'].value
rho.random()
y0_stdev.parents['rho'].value # if the parent is a pymc variable, the current value will be always 'updated'
Explanation: Notice that this is a dictionary, so for example...
End of explanation
print("Current value of y0_stdev = {: f}".format(y0_stdev.value))
rho.random()
print('\nAfter redrawing rho:')
print("Current value of y0_stdev = {: f}".format(y0_stdev.value))
Explanation: ... and as we alter the parent's value, the child's value changes accordingly
End of explanation
print("Current value of mu_y:")
print(mu_y.value[:4])
rho.random()
print('\nAfter redrawing rho:')
print("Current value of mu_y:")
print(mu_y.value[:4])
Explanation: and similarly for mu_y
End of explanation
y0 = pm.Normal('y0', mu = 0.0, tau = 1 / y0_stdev, observed = True, value = sample[0])
Y = pm.Normal('Y', mu = mu_y, tau = 1 / sigma_x, observed=True, value = sample[1:])
Y.value
Explanation: How to tell pymc what you 'know' about the data?
We define the data as a stochastic variable with fixed values and set the observed flag equal to True
For the sample $y^T$, depending on the question at hand, we might want to define
- either $T + 1$ scalar random variables
- or a scalar $y_0$ and a $T$-vector valued $Y$
In the current setup, as we fix the value of $y$ (observed), it doesn't really matter (approach A is easier). However, if we have an array-valued stochastic variable with mutable value, the restriction that we cannot update the values of stochastic variables' in-place becomes onerous in the sampling step (where the step method should propose array-valued variable). Straight from the pymc documentation:
''In this case, it may be preferable to partition the variable into several scalar-valued variables stored in an array or list.''
(A) $y_0$ as a scalar and $Y$ as a vector valued random variable
End of explanation
Y.parents['tau'].value
sigma_x.random()
print(Y.parents['tau'].value)
Y.value
Explanation: Notice that the value of this variable is fixed (even if the parent's value changes)
End of explanation
Y_alt = np.empty(T + 1, dtype = object)
Y_alt[0] = y0 # definition of y0 is the same as above
for i in range(1, T + 1):
Y_alt[i] = pm.Normal('y_{:d}'.format(i), mu = mu_y[i-1], tau = 1 / sigma_x)
print(type(Y_alt))
Y_alt
Explanation: (B) $T+1$ scalar random variables
Define an array with dtype=object, fill it with scalar variables (use loops) and define it as a pymc.Container (this latter step is not necessary, but based on my experience Container types work much more smoothly in the blocking step when we are sampling).
End of explanation
Y_alt = pm.Container(Y_alt)
type(Y_alt)
Explanation: Currently, this is just a numpy array of pymc.Deterministic functions. We can make it a pymc object by using the pymc.Container type.
End of explanation
ar1_model = pm.Model([rho, sigma_x, y0, Y, y0_stdev, mu_y])
ar1_model.stochastics # notice that this is an unordered set (!)
ar1_model.deterministics
Explanation: and the pymc methods are applied element-wise.
Create a pymc.Model instance
Remember that it is just a collection of random variables (Stochastic and Deterministic), hence
End of explanation
M = pm.MCMC(ar1_model)
Explanation: This object has very limited awareness of the structure of the probabilistic model that it describes and does not itself possess methods for updating the values in the sampling step.
Fitting the model to the data (MCMC algorithm)
MCMC algorithms
The joint prior distribution is sitting on an $N$-dimensional space, where $N$ is the number of parameters we are about to make inference on (see the figure below). Looking at the data through the probabilistic model deform the prior surface into the posterior surface, that we need to explore. In principle, we could naively search this space by picking random points in $\mathbb{R}^N$ and calculate the corresponding posterior value (Monte Carlo methods), but a more efficient (especially in higher dimensions) way is to do Markov Chain Monte Carlo (MCMC), which is basically an intelligent way of discovering the posterior surface.
MCMC is an iterative procedure: at every iteration, it proposes a nearby point in the space, then ask 'how likely that this point is close to the maximizer of the posterior surface?', it accepts the proposed point if the likelihood exceeds a particular level and rejects it otherwise (by going back to the old position). The key feature of MCMC is that it produces proposals by simulating a Markov chain for which the posterior is the unique, invariant limiting distribution. In other words, after a possible 'trasition period' (i.e. post converegence), it starts producing draws from the posterior.
MCMC algorithm in pymc
By default it uses the Metropolis-within-Gibbs algorithm (in my opinion), which is based on two simple principles:
1. Blocking and conditioning:
- Divide the $N$ variables of $\theta$ into $K\leq N$ blocks and update every block by sampling from the conditional density, i.e. from the distribuition of the block parameters conditioned on all parameters in the other $K-1$ blocks being at their current values.
* At scan $t$, cycle through the $K$ blocks
$$\theta^{(t)} = [\theta^{(t)}_1, \theta^{(t)}_2, \theta^{(t)}_3, \dots, \theta^{(t)}_K] $$
* Sample from the conditionals
\begin{align}
\theta_1^{(t+1)} &\sim f(\theta_1\hspace{1mm} | \hspace{1mm} \theta^{(t)}_2, \theta^{(t)}_3, \dots, \theta^{(t)}_K; \text{data}) \\
\theta_2^{(t+1)} &\sim f(\theta_2\hspace{1mm} | \hspace{1mm} \theta^{(t+1)}_1, \theta^{(t)}_3, \dots, \theta^{(t)}_K; \text{data}) \\
\theta_3^{(t+1)} &\sim f(\theta_3\hspace{1mm} | \hspace{1mm} \theta^{(t+1)}_1, \theta^{(t+1)}_2, \dots, \theta^{(t)}_K; \text{data}) \\
&\vdots \\
\theta_K^{(t+1)} &\sim f(\theta_K\hspace{1mm} | \hspace{1mm} \theta^{(t+1)}_1, \theta^{(t+1)}_2, \dots, \theta^{(t+1)}_{K-1}; \text{data})
\end{align}
Sampling (choose/construct pymc.StepMethod): if for a given block the conditional density $f$ can be expressed in (semi-)analytic form, use it; if not, use Metropolis-Hastings
Semi-closed form example: Forward-backward sampler (Carter and Kohn, 1994):
Metropolis(-Hastings) algorithm:
Start at $\theta$
Propose a new point in the parameter space according to some proposal density $J(\theta' | \theta)$ (e.g. random walk)
Accept the proposed point with probability
$$\alpha = \min\left( 1, \frac{p(\theta'\hspace{1mm} |\hspace{1mm} \text{data})\hspace{1mm} J(\theta \hspace{1mm}|\hspace{1mm} \theta')}{ p(\theta\hspace{1mm} |\hspace{1mm} \text{data})\hspace{1mm} J(\theta' \hspace{1mm}| \hspace{1mm}\theta)} \right) $$
If accept: Move to the proposed point $\theta'$ and return to Step 1.
If reject: Don't move, keep the point $\theta$ and return to Step 1.
After a large number of iterations (once the Markov chain has converged), return all accepted $\theta$ as a sample from the posterior
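To make the recipe concrete, here is a plain-Python random-walk Metropolis sketch (an illustration only, not pymc's internal implementation; it assumes you supply a log_posterior function):
def metropolis_sample(log_posterior, theta0, n_iter=10000, step=0.1):
    # random-walk proposal J(theta'|theta) = N(theta, step**2); symmetric, so J cancels in alpha
    theta = np.asarray(theta0, dtype=float)
    draws = []
    for _ in range(n_iter):
        proposal = theta + step * np.random.randn(*theta.shape)
        log_alpha = log_posterior(proposal) - log_posterior(theta)
        if np.log(np.random.rand()) < log_alpha:   # accept with probability min(1, alpha)
            theta = proposal
        draws.append(theta.copy())                 # on rejection the current value is repeated
    return np.array(draws)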
Again, a pymc.Model instance is not much more than a collection, for example, the model variables (blocks) are not matched with step methods determining how to update values in the sampling step. In order to do that, first we need to construct an MCMC instance, which is then ready to be sampled from.
MCMC‘s primary job is to create and coordinate a collection of step methods, each of which is responsible for updating one or more variables (blocks) at each step of the MCMC algorithm. By default, step methods are automatically assigned to variables by PyMC (after we call the sample method).
Main built-in pymc.StepMethods
* Metropolis
* AdaptiveMetropolis
* Slicer
* Gibbs
you can assign step methods manually by calling the method use_step_method(method, *args, **kwargs):
End of explanation
M.step_method_dict
Explanation: Notice that the step_methods are not assigned yet
End of explanation
# draw a sample of size 20,000, drop the first 1,000 and keep only every 5th draw
M.sample(iter = 50000, burn = 1000, thin = 5)
Explanation: You can specify them now, or if you call the sample method, pymc will assign the step_methods automatically according to some rule
End of explanation
M.step_method_dict
Explanation: ... and you can check what kind of step methods have been assigned (the default in most cases is the Metropolis step method for non-observed stochastic variables, while in case of observed stochastics, we simply draw from the prior)
End of explanation
M.trace('rho')[:20]
M.trace('sigma_x')[:].shape
Explanation: The sample can be reached by the trace method (use the names you used at the initialization not the python name -- useful if the two coincide)
End of explanation
sigma_sample = M.trace('sigma_x')[:]
rho_sample = M.trace('rho')[:]
fig, ax = plt. subplots(1, 2, figsize = (15, 5))
ax[0].plot(sigma_sample)
ax[1].hist(sigma_sample)
Explanation: Then this is just a numpy array, so you can do different sort of things with it. For example plot
End of explanation
from pymc.Matplot import plot as fancy_plot
fancy_plot(M.trace('rho'))
Explanation: Actually, you don't have to waste your time on constructing different subplots. pymc's built-in plotting functionality creates pretty informative plots for you (based on matplotlib). On the figure below
- Upper left subplot: trace,
- Lower left subplot: autocorrelation (try to resample the model with thin=1),
- Right subplot: histogram with the mean
End of explanation
M.stats('rho')
# Try also:
#M.summary()
N = len(rho_sample)
rho_pr = [rho.random() for i in range(N)]
sigma_pr = [sigma_x.random() for i in range(N)]
Prior = np.vstack([rho_pr, sigma_pr]).T
Posterior = np.vstack([rho_sample, sigma_sample]).T
fig, bx = plt.subplots(1, 2, figsize = (17, 10), sharey = True)
sb.kdeplot(Prior, shade = True, cmap = 'PuBu', ax = bx[0])
bx[0].patch.set_facecolor('white')
bx[0].collections[0].set_alpha(0)
bx[0].axhline(y = sigma_x_true, color = 'DarkRed', lw =2)
bx[0].axvline(x = rho_true, color = 'DarkRed', lw =2)
bx[0].set_xlabel(r'$\rho$', fontsize = 18)
bx[0].set_ylabel(r'$\sigma_x$', fontsize = 18)
bx[0].set_title('Prior', fontsize = 20)
sb.kdeplot(Posterior, shade = True, cmap = 'PuBu', ax = bx[1])
bx[1].patch.set_facecolor('white')
bx[1].collections[0].set_alpha(0)
bx[1].axhline(y = sigma_x_true, color = 'DarkRed', lw =2)
bx[1].axvline(x = rho_true, color = 'DarkRed', lw =2)
bx[1].set_xlabel(r'$\rho$', fontsize = 18)
bx[1].set_ylabel(r'$\sigma_x$', fontsize = 18)
bx[1].set_title('Posterior', fontsize = 20)
plt.xlim(-1, 1)
plt.ylim(0, 1.5)
plt.tight_layout()
plt.savefig('beamer/prior_post.pdf')
rho_grid = np.linspace(-1, 1, 100)
sigmay_grid = np.linspace(0, 1.5, 100)
U = sp.stats.uniform(-1, 2)
IG = sp.stats.invgamma(3)
fig2, cx = plt.subplots(2, 2, figsize = (17, 12), sharey = True)
cx[0, 0].plot(rho_grid, U.pdf(rho_grid), 'r-', lw = 3, alpha = 0.6, label = r'$\rho$ prior')
cx[0, 0].set_title(r"Marginal prior for $\rho$", fontsize = 18)
cx[0, 0].axvline(x = rho_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\rho$')
cx[0, 0].legend(loc='best', fontsize = 16)
cx[0, 0].set_xlim(-1, 1)
sb.distplot(rho_sample, ax = cx[0,1], kde_kws={"color": "r", "lw": 3, "label": r"$\rho$ posterior"})
cx[0, 1].set_title(r"Marginal posterior for $\rho$", fontsize = 18)
cx[0, 1].axvline(x = rho_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\rho$')
cx[0, 1].legend(loc='best', fontsize = 16)
cx[0, 1].set_xlim(-1, 1)
cx[1, 0].plot(sigmay_grid, IG.pdf(sigmay_grid), 'r-', lw=3, alpha=0.6, label=r'$\sigma_y$ prior')
cx[1, 0].set_title(r"Marginal prior for $\sigma_y$", fontsize = 18)
cx[1, 0].axvline(x = sigma_x_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\sigma_y$')
cx[1, 0].legend(loc = 'best', fontsize = 16)
cx[1, 0].set_xlim(0, 3)
sb.distplot(sigma_sample, ax = cx[1,1], kde_kws={"color": "r", "lw": 3, "label": r"$\sigma_y$ posterior"})
cx[1, 1].set_title(r"Marginal posterior for $\sigma_y$", fontsize = 18)
cx[1, 1].axvline(x = sigma_x_true, color = 'DarkRed', lw = 2, linestyle = '--', label = r'True $\sigma_y$')
cx[1, 1].legend(loc = 'best', fontsize = 16)
cx[1, 1].set_xlim(0, 3)
plt.tight_layout()
plt.savefig('beamer/marginal_prior_post.pdf')
Explanation: For a non-graphical summary of the posterior use the stats() method
End of explanation |
6,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 6
Step1: Observa-se no gráfico uma relação clara entre o número de reinvidicações e o pagamento total.
Step2: Como esperado, o modelo de regressão linear explica bem os dados, tendo um valor RSME menor que a predição pelo valor médio de pagamento. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
# Define uma função para carregar os dados
def load_csv(path):
df = pd.read_csv(path,names=['num_reinv','pag_total'])
return df
insdf = load_csv('insurance.csv')
insdf.head()
plt.scatter(insdf.num_reinv,insdf.pag_total)
plt.xlabel('Número de Reinvidicações')
plt.ylabel('Pagamento Total')
plt.grid(True)
Explanation: Homework 6: Regressão Linear Simples
Isac do Nascimento Lira, 371890
Estudo de caso: Seguro de automóvel sueco
Agora, sabemos como implementar um modelo de regressão linear simples. Vamos aplicá-lo ao conjunto de dados do seguro de automóveis sueco. Esta seção assume que você baixou o conjunto de dados para o arquivo insurance.csv, o qual está disponível no notebook respectivo.
O conjunto de dados envolve a previsão do pagamento total de todas as reclamações em milhares de Kronor sueco, dado o número total de reclamações. É um dataset composto por 63 observações com 1 variável de entrada e 1 variável de saída. Os nomes das variáveis são os seguintes:
Número de reivindicações.
Pagamento total para todas as reclamações em milhares de Kronor sueco.
Voce deve adicionar algumas funções acessórias à regressão linear simples. Especificamente, uma função para carregar o arquivo CSV chamado load_csv (), uma função para converter um conjunto de dados carregado para números chamado str_column_to_float (), uma função para avaliar um algoritmo usando um conjunto de treino e teste chamado split_train_split (), a função para calcular RMSE chamado rmse_metric () e uma função para avaliar um algoritmo chamado evaluate_algorithm().
Utilize um conjunto de dados de treinamento de 60% dos dados para preparar o modelo. As previsões devem ser feitas nos restantes 40%.
Compare a performabce do seu algoritmo com o algoritmo baseline, o qual utiliza a média dos pagamentos realizados para realizar a predição ( a média é 72,251 mil Kronor).
End of explanation
#Funções úteis
# Calculate the mean value of a list of numbers
def mean(values):
return sum(values) / float(len(values))
# Calculate the variance of a list of numbers
def variance(values, mean):
return sum([(x-mean)**2 for x in values])
def covariance(x, mean_x, y, mean_y):
covar = 0.0
for i in range(len(x)):
covar += (x[i] - mean_x) * (y[i] - mean_y)
return covar
# Implementa a classe do modelo de regressão linear simples
class simple_linear_regression(object):
def _init_(self):
self.train = None
self.coef = None
self.intercept = None
def fit(self,train):
self.train = train
self.get_coefficients()
def get_coefficients(self):
x = [row[0] for row in self.train]
y = [row[1] for row in self.train]
x_mean, y_mean = mean(x), mean(y)
self.coef = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
self.intercept = y_mean - self.coef * x_mean
def predict(self,data):
predictions = []
for row in data:
ypred = self.intercept + self.coef * row[0]
predictions.append(ypred)
return predictions
# Avalia o modelo
from math import sqrt
# Realiza o cálculo da metrica RMSE
def rmse_metric(actual, predicted):
sum_error = 0.0
for i in range(len(actual)):
prediction_error = predicted[i] - actual[i]
sum_error += (prediction_error ** 2)
mean_error = sum_error / float(len(actual))
return sqrt(mean_error)
# Split the data into train and test sets
def split_train_split(data,splitRatio):
msk = np.random.rand(len(data)) < splitRatio
trainSet = data[msk]#.values
testSet = data[~msk]#.values
return trainSet,testSet
# Evaluate the model against the mean-prediction baseline
def evaluate_algorithm(data,algorithm):
train,test = split_train_split(data,0.6)
lm = algorithm()
lm.fit(train)
predicted = lm.predict(test)
actual = test[:,1]
pred_rmse= rmse_metric(actual, predicted)
predictedMean = np.zeros_like(actual)
predictedMean[:] = np.mean(actual)
mean_rmse = rmse_metric(actual, predictedMean)
return pred_rmse,mean_rmse,lm
pred_rmse,mean_rmse,lm = evaluate_algorithm(insdf.values,simple_linear_regression)
print('')
print("RMSE - Regressão Linear", pred_rmse)
print("RMSE - Predição pelo valor média", mean_rmse)
print('')
Explanation: The scatter plot shows a clear relationship between the number of claims and the total payment.
End of explanation
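One quick way to quantify that relationship (an illustrative addition, not part of the original homework):
corr = np.corrcoef(insdf.num_reinv, insdf.pag_total)[0, 1]
print('Pearson correlation:', corr)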
# Plot the predictions against the actual data and show the prediction errors
x = insdf['num_reinv'].values
y = insdf['pag_total'].values
y_hat = lm.intercept + x.dot(lm.coef)
fig,axs = plt.subplots(1,2,figsize=[15,5])
axs[0].plot(x,y_hat,'g-',label='Regression Line',alpha=0.6)
axs[0].scatter(x,y,alpha=0.5,label='Actual Values')
axs[0].set_xlabel('Number of Claims')
axs[0].set_ylabel('Total Payment')
axs[0].legend(loc='best')
error = (y_hat-y)
axs[1].scatter(x,error,alpha=0.5,color='b')
axs[1].axhline(0,color='k')
axs[1].set_xlabel('Number of Claims')
axs[1].set_ylabel('Prediction error')
Explanation: As expected, the linear regression model explains the data well, achieving a lower RMSE than simply predicting the mean payment.
End of explanation |
6,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3-Way Merge Sort
Step1: The function mergeSort is called with 4 arguments.
- The first parameter L is the list that is to be sorted.
However, the task of mergeSort is not to sort the entire list L but only
the part of L that is given as L[start:end].
Step2: The function merge3 takes six arguments.
- L is a list,
- start is an integer such that $\texttt{start} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- left is an integer such that $\texttt{left} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- right is an integer such that $\texttt{right} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- end is an integer such that $\texttt{end} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- A is a list of the same length as L.
Furthermore, the indices start, left, right, and end have to satisfy $0 \leq \texttt{start} \leq \texttt{left} \leq \texttt{right} \leq \texttt{end} \leq \texttt{len}(L)$.
Step3: Testing
We import the module random in order to be able to create lists of random numbers that are then sorted.
Step4: The function isOrdered(L) checks that the list L is sorted in ascending order.
Step5: The function sameElements(L, S) checks that the lists L and S contain the same elements and, furthermore, that each
element $x$ occurring in L occurs in S the same number of times it occurs in L.
Step6: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input. | Python Code:
def sort(L):
A = L[:]
mergeSort(L, 0, len(L), A)
Explanation: 3-Way Merge Sort: An Array-Based Implementation
The function $\texttt{sort}(L)$ sorts the list $L$ in place using merge sort.
It takes advantage of the fact that, in Python, lists are stored internally as arrays.
The function sort is a wrapper for the function mergeSort. Its sole purpose is to allocate the auxiliary array A, which has the same size as the array holding L.
End of explanation
def mergeSort(L, start, end, A):
if end - start < 2:
return
left = start + (end - start) // 3
right = start + 2 * (end - start) // 3
mergeSort(L, start, left , A)
mergeSort(L, left, right, A)
mergeSort(L, right, end , A)
merge3(L, start, left, right, end, A)
Explanation: The function mergeSort is called with 4 arguments.
- The first parameter L is the list that is to be sorted.
However, the task of mergeSort is not to sort the entire list L but only
the part of L that is given as L[start:end].
- Hence, the parameters start and end are indices specifying the
subarray that needs to be sorted.
- The final parameter A is used as an auxiliary array. This array is needed
as temporary storage and is required to have the same size as the list L.
End of explanation
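As a side note (our addition, not from the original notebook): each call splits its range into three roughly equal parts and the merge step below is linear in the range length, so the running time satisfies $T(n) = 3\,T(n/3) + \Theta(n)$, which resolves to $\Theta(n \log n)$, the same asymptotic cost as ordinary two-way merge sort.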
def merge3(L, start, left, right, end, A):
A[start:end] = L[start:end]
idx1 = start
idx2 = left
idx3 = right
i = start
while idx1 < left and idx2 < right and idx3 < end:
if A[idx1] <= A[idx2]:
if A[idx1] <= A[idx3]:
L[i] = A[idx1]
idx1 += 1
else:
L[i] = A[idx3]
idx3 +=1
elif A[idx2] <= A[idx3]:
L[i] = A[idx2]
idx2 += 1
else:
L[i] = A[idx3]
idx3 += 1
i += 1
if idx1 == left: # first list empty, merge second list and third list
while idx2 < right and idx3 < end:
if A[idx2] <= A[idx3]:
L[i] = A[idx2]
idx2 += 1
else:
L[i] = A[idx3]
idx3 += 1
i += 1
elif idx2 == right: # second list empty, merge first list and third list
while idx1 < left and idx3 < end:
if A[idx1] <= A[idx3]:
L[i] = A[idx1]
idx1 += 1
else:
L[i] = A[idx3]
idx3 += 1
i += 1
elif idx3 == end: # third list empty, merge first list and second list
while idx1 < left and idx2 < right:
if A[idx1] <= A[idx2]:
L[i] = A[idx1]
idx1 += 1
else:
L[i] = A[idx2]
idx2 += 1
i += 1
if idx1 < left: # second list and third list are empty
L[i:end] = A[idx1:left]
if idx2 < right: # first list and third list are empty
L[i:end] = A[idx2:right]
if idx3 < end: # first list and second list are empty
L[i:end] = A[idx3:end]
L = [7, 8, 11, 12, 2, 5, 3, 7, 9, 3, 2]
sort(L)
L
Explanation: The function merge3 takes six arguments.
- L is a list,
- start is an integer such that $\texttt{start} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- left is an integer such that $\texttt{left} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- right is an integer such that $\texttt{right} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- end is an integer such that $\texttt{end} \in \{0, \cdots, \texttt{len}(L)-1\}$,
- A is a list of the same length as L.
Furthermore, the indices start, left, right, and end have to satisfy the following:
$$ 0 \leq \texttt{start} \leq \texttt{left} \leq \texttt{right} \leq \texttt{end} \leq \texttt{len}(L) $$
The function assumes that the sublists L[start:left], L[left:right], and L[right:end] are already
sorted. The function merges these sublists so that when the call returns the sublist L[start:end]
is sorted. The last argument A is used as auxiliary memory.
End of explanation
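A tiny, hand-checkable illustration of merge3 on its own (our example, not part of the original notebook): the three slices L[0:2], L[2:4], and L[4:6] are each sorted, so merging them in place must leave L[0:6] sorted.
L = [2, 9, 4, 7, 1, 8]
A = L[:]
merge3(L, 0, 2, 4, 6, A)
assert L == [1, 2, 4, 7, 8, 9]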
import random as rnd
from collections import Counter
Explanation: Testing
We import the module random in order to be able to create lists of random numbers that are then sorted.
End of explanation
def isOrdered(L):
for i in range(len(L) - 1):
assert L[i] <= L[i+1], f'{L} not sorted at index {i}'
Explanation: The function isOrdered(L) checks that the list L is sorted in ascending order.
End of explanation
def sameElements(L, S):
assert Counter(L) == Counter(S), f'{Counter(L)} != {Counter(S)}'
Explanation: The function sameElements(L, S) checks that the lists L and S contain the same elements and, furthermore, that each
element $x$ occurring in L occurs in S the same number of times it occurs in L.
End of explanation
def testSort(n, k):
for i in range(n):
L = [ rnd.randrange(2*k) for x in range(k) ]
oldL = L[:]
sort(L)
isOrdered(L)
sameElements(oldL, L)
print('.', end='')
print()
print("All tests successful!")
%%time
testSort(100, 20000)
Explanation: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
End of explanation |
6,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network Data Access - USGS NWIS Service-based Data Access
Karl Benedict
Director, Earth Data Analysis Center
Associate Professor, University Libraries
University of New Mexico
[email protected]
An Analysis
This analysis demonstrates searching for datasets that meet a set of specified conditions, accessing via advertised services, processing and plotting the data from the service.
Service Documentation: http://waterservices.usgs.gov/rest/IV-Service.html
Step1: Set some initial variables
Step2: Options | Python Code:
import urllib
import zipfile
import StringIO
import string
import pandas
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import HTML
import json
Explanation: Network Data Access - USGS NWIS Service-based Data Access
Karl Benedict
Director, Earth Data Analysis Center
Associate Professor, University Libraries
University of New Mexico
[email protected]
An Analysis
This analysis demonstrates searching for datasets that meet a set of specified conditions, accessing via advertised services, processing and plotting the data from the service.
Service Documentation: http://waterservices.usgs.gov/rest/IV-Service.html
Enable the needed python libraries
End of explanation
county_name = ""
start_date = "20140101"
end_date = "20150101"
diag = False
Explanation: Set some initial variables
End of explanation
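For orientation (an added note; the coordinates below are placeholders): the instantaneous-values requests built below ask for the last seven days of data (period=P7D) for parameter code 00060, the USGS code for discharge, returned as JSON and clipped to a bounding box, so a filled-in request URL looks like this:
example_url = ("http://waterservices.usgs.gov/nwis/iv/"
               "?bBox=-107.0,33.0,-106.0,34.0&format=json&period=P7D&parameterCd=00060")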
## Retrieve the bounding box of the specified county - if no county is specified, the bounding boxes for all NM counties will be requested
countyBBOXlink = "http://gstore.unm.edu/apps/epscor/search/nm_counties.json?limit=100&query=" + county_name ## define the request URL
print countyBBOXlink ## print the request URL for verification
print
bboxFile = urllib.urlopen(countyBBOXlink) ## request the bounding box information from the server
bboxData = json.load(bboxFile)
# print bboxData
# Get data for BBOX defined by specified county(ies)
myCounties = []
for countyBBOX in bboxData["results"]:
minx,miny,maxx,maxy = countyBBOX[u'box']
myDownloadLink = "http://waterservices.usgs.gov/nwis/iv/?bBox=%f,%f,%f,%f&format=json&period=P7D¶meterCd=00060" % (minx,miny,maxx,maxy) # retrieve data for the specified BBOX for the last 7 days as JSON
print myDownloadLink
myCounty = {u'name':countyBBOX[u'text'],u'minx':minx,u'miny':miny,u'maxx':maxx,u'maxy':maxy,u'downloadLink':myDownloadLink}
myCounties.append(myCounty)
#countySubset = [myCounties[0]]
#print countySubset
valueList = []
for county in myCounties:
print "processing: %s" % county["downloadLink"]
try:
datafile = urllib.urlopen(county["downloadLink"])
data = json.load(datafile)
values = data["value"]["timeSeries"][0]["values"]
for item in values:
for valueItem in item["value"]:
#print json.dumps(item["value"], sort_keys=True, indent=4)
myValue = {"dateTime":valueItem["dateTime"].replace("T"," ").replace(".000-06:00",""),"value":valueItem["value"], "county":county["name"]}
#print myValue
valueList.append(myValue)
#print valueList
except:
print "\tfailed for this one ..."
#print json.dumps(values, sort_keys=True, indent=4)
df = pandas.DataFrame(valueList)
df['dateTime'] = pandas.to_datetime(df["dateTime"])
df['value'] = df['value'].astype(float).fillna(-1)
print df.shape
print df.dtypes
print "column names"
print "------------"
for colName in df.columns:
print colName
print
print df.head()
%matplotlib inline
fig,ax = plt.subplots(figsize=(10,8))
ax.width = 1
ax.height = .5
plt.xkcd()
#plt.ylim(-25,30)
ax.plot_date(df['dateTime'], df['value'], '.', label="Discharge (cf/sec)", color="0.2")
fig.autofmt_xdate()
plt.legend(loc=2, bbox_to_anchor=(1.0,1))
plt.title("15-minute Discharge - cubic feet per second")
plt.ylabel("Discharge")
plt.xlabel("Date")
plt.show()
Explanation: Options
End of explanation |
6,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example for kNearestNeighbor using the Iris Data
First we need the standard import
Step1: Load the Data
Step2: Look at the data
it's a good idea to look at the data a little bit, know the shapes, etc...
Step3: since you can't plot 4 dimensions, try plotting some 2D subsets
I don't like the automatic placement of the legend, so lets set it manually
Step4: I don't want to do the classification on this subset, so make sure to use the entire data set.
Classification
First, we choose a classifier
Step5: Split the data into test and train subsets...
Step6: ...and then train... | Python Code:
%pylab inline
from classy import *
Explanation: Example for kNearestNeighbor using the Iris Data
First we need the standard import
End of explanation
data=load_excel('data/iris.xls',verbose=True)
Explanation: Load the Data
End of explanation
print(data.vectors.shape)
print(data.targets)
print(data.target_names)
print(data.feature_names)
Explanation: Look at the data
it's a good idea to look at the data a little bit, know the shapes, etc...
End of explanation
subset=extract_features(data,[0,2])
plot2D(subset,legend_location='upper left')
Explanation: since you can't plot 4 dimensions, try plotting some 2D subsets
I don't like the automatic placement of the legend, so let's set it manually
End of explanation
C=kNearestNeighbor()
Explanation: I don't want to do the classification on this subset, so make sure to use the entire data set.
Classification
First, we choose a classifier
End of explanation
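For intuition, here is a minimal sketch of the nearest-neighbour idea itself (plain NumPy; this is an illustration, not how the classy package is implemented):
import numpy as np

def one_nearest_neighbour(train_vectors, train_targets, query):
    # predict the label of the single closest training vector (Euclidean distance)
    distances = np.linalg.norm(train_vectors - query, axis=1)
    return train_targets[np.argmin(distances)]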
data_train,data_test=split(data,test_size=0.2)
Explanation: Split the data into test and train subsets...
End of explanation
timeit(reset=True)
C.fit(data_train.vectors,data_train.targets)
print("Training time: ",timeit())
print("On Training Set:",C.percent_correct(data_train.vectors,data_train.targets))
print("On Test Set:",C.percent_correct(data_test.vectors,data_test.targets))
Explanation: ...and then train...
End of explanation |
6,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process Latent Variable Model
The Gaussian Process Latent Variable Model (GPLVM) is a dimensionality reduction method that uses a Gaussian process to learn a low-dimensional representation of (potentially) high-dimensional data. In the typical setting of Gaussian process regression, where we are given inputs $X$ and outputs $y$, we choose a kernel and learn hyperparameters that best describe the mapping from $X$ to $y$. In the GPLVM, we are not given $X$
Step1: Dataset
The data we are going to use consists of single-cell qPCR data for 48 genes obtained from mice (Guo et al., [1]). This data is available at the Open Data Science repository. The data contains 48 columns, with each column corresponding to (normalized) measurements of each gene. Cells differentiate during their development and these data were obtained at various stages of development. The various stages are labelled from the 1-cell stage to the 64-cell stage. For the 32-cell stage, the data is further differentiated into 'trophectoderm' (TE) and 'inner cell mass' (ICM). ICM further differentiates into 'epiblast' (EPI) and 'primitive endoderm' (PE) at the 64-cell stage. Each of the rows in the dataset is labelled with one of these stages.
Step2: Modelling
First, we need to define the output tensor $y$. To predict values for all $48$ genes, we need $48$ Gaussian processes. So the required shape for $y$ is num_GPs x num_data = 48 x 437.
Step3: Now comes the most interesting part. We know that the observed data $y$ has latent structure
Step4: We will use a sparse version of Gaussian process inference to make training faster. Remember that we also need to define $X$ as a Parameter so that we can set a prior and guide (variational distribution) for it.
Step5: We will use the autoguide() method from the Parameterized class to set an auto Normal guide for $X$.
Step6: Inference
As mentioned in the Gaussian Processes tutorial, we can use the helper function gp.util.train to train a Pyro GP module. By default, this helper function uses the Adam optimizer with a learning rate of 0.01.
Step7: After inference, the mean and standard deviation of the approximated posterior $q(X) \sim p(X | y)$ will be stored in the parameters X_loc and X_scale. To get a sample from $q(X)$, we need to set the mode of gplvm to "guide".
Step8: Visualizing the result
Let’s see what we got by applying GPLVM to our dataset. | Python Code:
import os
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torch.nn import Parameter
import pyro
import pyro.contrib.gp as gp
import pyro.distributions as dist
import pyro.ops.stats as stats
smoke_test = ('CI' in os.environ) # ignore; used to check code integrity in the Pyro repo
assert pyro.__version__.startswith('1.7.0')
pyro.set_rng_seed(1)
Explanation: Gaussian Process Latent Variable Model
The Gaussian Process Latent Variable Model (GPLVM) is a dimensionality reduction method that uses a Gaussian process to learn a low-dimensional representation of (potentially) high-dimensional data. In the typical setting of Gaussian process regression, where we are given inputs $X$ and outputs $y$, we choose a kernel and learn hyperparameters that best describe the mapping from $X$ to $y$. In the GPLVM, we are not given $X$: we are only given $y$. So we need to learn $X$ along with the kernel hyperparameters.
We do not do maximum likelihood inference on $X$. Instead, we set a Gaussian prior for $X$ and learn the mean and variance of the approximate (gaussian) posterior $q(X|y)$. In this notebook, we show how this can be done using the pyro.contrib.gp module. In particular we reproduce a result described in [2].
End of explanation
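Schematically (our notation, summarizing the setup rather than quoting the tutorial):
$$ X \sim \mathcal{N}(\mu_0, I), \qquad y_d \mid X \sim \mathcal{N}\big(0,\, K_\theta(X, X) + \sigma^2 I\big) \text{ for each output dimension } d, \qquad q(X) = \mathcal{N}(m, s^2), $$
and training maximizes the ELBO jointly over $m$, $s$, and the kernel hyperparameters $\theta$.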
# license: Copyright (c) 2014, the Open Data Science Initiative
# license: https://www.elsevier.com/legal/elsevier-website-terms-and-conditions
URL = "https://raw.githubusercontent.com/sods/ods/master/datasets/guo_qpcr.csv"
df = pd.read_csv(URL, index_col=0)
print("Data shape: {}\n{}\n".format(df.shape, "-" * 21))
print("Data labels: {}\n{}\n".format(df.index.unique().tolist(), "-" * 86))
print("Show a small subset of the data:")
df.head()
Explanation: Dataset
The data we are going to use consists of single-cell qPCR data for 48 genes obtained from mice (Guo et al., [1]). This data is available at the Open Data Science repository. The data contains 48 columns, with each column corresponding to (normalized) measurements of each gene. Cells differentiate during their development and these data were obtained at various stages of development. The various stages are labelled from the 1-cell stage to the 64-cell stage. For the 32-cell stage, the data is further differentiated into 'trophectoderm' (TE) and 'inner cell mass' (ICM). ICM further differentiates into 'epiblast' (EPI) and 'primitive endoderm' (PE) at the 64-cell stage. Each of the rows in the dataset is labelled with one of these stages.
End of explanation
data = torch.tensor(df.values, dtype=torch.get_default_dtype())
# we need to transpose data to correct its shape
y = data.t()
Explanation: Modelling
First, we need to define the output tensor $y$. To predict values for all $48$ genes, we need $48$ Gaussian processes. So the required shape for $y$ is num_GPs x num_data = 48 x 437.
End of explanation
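A quick sanity check of that shape (added here for illustration):
assert y.shape == (48, 437), y.shape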
capture_time = y.new_tensor([int(cell_name.split(" ")[0]) for cell_name in df.index.values])
# we scale the time into the interval [0, 1]
time = capture_time.log2() / 6
# we setup the mean of our prior over X
X_prior_mean = torch.zeros(y.size(1), 2) # shape: 437 x 2
X_prior_mean[:, 0] = time
Explanation: Now comes the most interesting part. We know that the observed data $y$ has latent structure: in particular different datapoints correspond to different cell stages. We would like our GPLVM to learn this structure in an unsupervised manner. In principle, if we do a good job of inference then we should be able to discover this structure---at least if we choose reasonable priors. First, we have to choose the dimension of our latent space $X$. We choose $dim(X)=2$, since we would like our model to disentangle 'capture time' ($1$, $2$, $4$, $8$, $16$, $32$, and $64$) from cell branching types (TE, ICM, PE, EPI). Next, when we set the mean of our prior over $X$, we set the first dimension to be equal to the observed capture time. This will help the GPLVM discover the structure we are interested in and will make it more likely that that structure will be axis-aligned in a way that is easier for us to interpret.
End of explanation
kernel = gp.kernels.RBF(input_dim=2, lengthscale=torch.ones(2))
# we clone here so that we don't change our prior during the course of training
X = Parameter(X_prior_mean.clone())
# we will use SparseGPRegression model with num_inducing=32;
# initial values for Xu are sampled randomly from X_prior_mean
Xu = stats.resample(X_prior_mean.clone(), 32)
gplvm = gp.models.SparseGPRegression(X, y, kernel, Xu, noise=torch.tensor(0.01), jitter=1e-5)
Explanation: We will use a sparse version of Gaussian process inference to make training faster. Remember that we also need to define $X$ as a Parameter so that we can set a prior and guide (variational distribution) for it.
End of explanation
# we use `.to_event()` to tell Pyro that the prior distribution for X has no batch_shape
gplvm.X = pyro.nn.PyroSample(dist.Normal(X_prior_mean, 0.1).to_event())
gplvm.autoguide("X", dist.Normal)
Explanation: We will use the autoguide() method from the Parameterized class to set an auto Normal guide for $X$.
End of explanation
# note that training is expected to take a minute or so
losses = gp.util.train(gplvm, num_steps=4000)
# let's plot the loss curve after 4000 steps of training
plt.plot(losses)
plt.show()
Explanation: Inference
As mentioned in the Gaussian Processes tutorial, we can use the helper function gp.util.train to train a Pyro GP module. By default, this helper function uses the Adam optimizer with a learning rate of 0.01.
End of explanation
gplvm.mode = "guide"
X = gplvm.X # draw a sample from the guide of the variable X
Explanation: After inference, the mean and standard deviation of the approximated posterior $q(X) \sim p(X | y)$ will be stored in the parameters X_loc and X_scale. To get a sample from $q(X)$, we need to set the mode of gplvm to "guide".
End of explanation
plt.figure(figsize=(8, 6))
colors = plt.get_cmap("tab10").colors[::-1]
labels = df.index.unique()
X = gplvm.X_loc.detach().numpy()
for i, label in enumerate(labels):
X_i = X[df.index == label]
plt.scatter(X_i[:, 0], X_i[:, 1], c=[colors[i]], label=label)
plt.legend()
plt.xlabel("pseudotime", fontsize=14)
plt.ylabel("branching", fontsize=14)
plt.title("GPLVM on Single-Cell qPCR data", fontsize=16)
plt.show()
Explanation: Visualizing the result
Let’s see what we got by applying GPLVM to our dataset.
End of explanation |
6,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering Sequence Elements
Problem
You have data inside of a sequence, and need to extract values or reduce the sequence using some criteria.
Solution
The easiest way to filter sequence data is often to use a list comprehension.
Step1: You can use generator expressions to produce the filtered values iteratively.
Step2: Discussion
Filtering
Sometimes the filtering criteria cannot be easily expressed in a list comprehension or generator expression. For example, suppose that the filtering process involves exception handling or some other complicated detail. In that case, put the filtering logic into its own function and use the built-in filter() function.
Step3: Using itertools
Another notable filtering tool is itertools.compress(), which takes an iterable and an accompanying Boolean selector sequence as input. As output, it gives you all of the items in the iterable where the corresponding element in the selector is True. This can be useful if you’re trying to apply the results of filtering one sequence to another related sequence. | Python Code:
mylist = [1, 4, -5, 10, -7, 2, 3, -1]
# All positive values
pos = [n for n in mylist if n > 0]
print(pos)
# All negative values
neg = [n for n in mylist if n < 0]
print(neg)
# Negative values clipped to 0
neg_clip = [n if n > 0 else 0 for n in mylist]
print(neg_clip)
# Positive values clipped to 0
pos_clip = [n if n < 0 else 0 for n in mylist]
print(pos_clip)
Explanation: Filtering Sequence Elements
Problem
You have data inside of a sequence, and need to extract values or reduce the sequence using some criteria.
Solution
The easiest way to filter sequence data is often to use a list comprehension.
End of explanation
pos = (n for n in mylist if n > 0)
pos
for x in pos:
print(x)
Explanation: You can use generator expressions to produce the filtered values iteratively.
End of explanation
values = ['1', '2', '-3', '-', '4', 'N/A', '5']
def is_int(val):
try:
x = int(val)
return True
except ValueError:
return False
ivals = list(filter(is_int, values))
print(ivals)
# Outputs ['1', '2', '-3', '4', '5']
Explanation: Discussion
Filtering
Sometimes the filtering criteria cannot be easily expressed in a list comprehension or generator expression. For example, suppose that the filtering process involves exception handling or some other complicated detail. In that case, put the filtering logic into its own function and use the built-in filter() function.
End of explanation
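As an aside (not part of the original recipe), the standard library also provides itertools.filterfalse() for selecting the items that fail the test:
from itertools import filterfalse

bad_vals = list(filterfalse(is_int, values))
print(bad_vals)
# Outputs ['-', 'N/A']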
addresses = [
'5412 N CLARK',
'5148 N CLARK',
'5800 E 58TH',
'2122 N CLARK',
'5645 N RAVENSWOOD',
'1060 W ADDISON',
'4801 N BROADWAY',
'1039 W GRANVILLE',
]
counts = [ 0, 3, 10, 4, 1, 7, 6, 1]
from itertools import compress
more5 = [ n > 5 for n in counts ]
a = list(compress(addresses, more5))
print(a)
Explanation: Using itertools
Another notable filtering tool is itertools.compress(), which takes an iterable and an accompanying Boolean selector sequence as input. As output, it gives you all of the items in the iterable where the corresponding element in the selector is True. This can be useful if you’re trying to apply the results of filtering one sequence to another related sequence.
End of explanation |
6,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing Data with Simple Example using TensorFlow Transform
Learning objectives
Create a preprocessing function.
Use the resulting transform_fn directory.
Export the model.
In this notebook, you learn a very simple example of how <a target='_blank' href='https://www.tensorflow.org/tfx/transform/get_started/'>TensorFlow Transform</a> (<code>tf.Transform</code>) can be used to preprocess data using exactly the same code for both training a model and serving inferences in production.
Step1: Install TensorFlow Transform
Step2: Restart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart).
Step3: Imports
Step4: Data
Step6: Transform
Step7: Syntax
You're almost ready to put everything together and use <a target='_blank' href='https://beam.apache.org/'>Apache Beam</a> to run it.
Step8: Is this the right answer?
Previously, you used tf.Transform to do this
Step9: The transform_fn/ directory contains a tf.saved_model implementing with all the constants tensorflow-transform analysis results built into the graph.
It is possible to load this directly with tf.saved_model.load, but this is not easy to use
Step10: A better approach is to load it using tft.TFTransformOutput. The TFTransformOutput.transform_features_layer method returns a tft.TransformFeaturesLayer object that can be used to apply the transformation
Step11: This tft.TransformFeaturesLayer expects a dictionary of batched features. So create a Dict[str, tf.Tensor] from the List[Dict[str, Any]] in raw_data
Step12: You can use the tft.TransformFeaturesLayer on it's own
Step13: Export
A more typical use case would use tf.Transform to apply the transformation to the training and evaluation datasets (see the next tutorial for an example). Then, after training, before exporting the model attach the tft.TransformFeaturesLayer as the first layer so that you can export it as part of your tf.saved_model. For a concrete example, keep reading.
An example training model
Below is a model that
Step14: Imagine you trained the model.
trained_model.compile(loss=..., optimizer='adam')
trained_model.fit(...)
This model runs on the transformed inputs
Step15: An example export wrapper
Imagine you've trained the above model and want to export it.
You'll want to include the transform function in the exported model
Step16: This combined model works on the raw data, and produces exactly the same results as calling the trained model directly
Step17: This export_model includes the tft.TransformFeaturesLayer and is entierly self-contained. You can save it and restore it in another environment and still get exactly the same result | Python Code:
!pip install --upgrade pip
Explanation: Preprocessing Data with Simple Example using TensorFlow Transform
Learning objectives
Create a preprocessing function.
Use the resulting transform_fn directory.
Export the model.
In this notebook, you learn a very simple example of how <a target='_blank' href='https://www.tensorflow.org/tfx/transform/get_started/'>TensorFlow Transform</a> (<code>tf.Transform</code>) can be used to preprocess data using exactly the same code for both training a model and serving inferences in production.
TensorFlow Transform is a library for preprocessing input data for TensorFlow, including creating features that require a full pass over the training dataset. For example, using TensorFlow Transform you could:
Normalize an input value by using the mean and standard deviation.
Convert strings to integers by generating a vocabulary over all of the input values.
Convert floats to integers by assigning them to buckets, based on the observed data distribution.
TensorFlow has built-in support for manipulations on a single example or a batch of examples. tf.Transform extends these capabilities to support full passes over the entire training dataset.
The output of tf.Transform is exported as a TensorFlow graph which you can use for both training and serving. Using the same graph for both training and serving can prevent skew, since the same transformations are applied in both stages.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Upgrade Pip
End of explanation
!pip install -q -U tensorflow_transform
Explanation: Install TensorFlow Transform
End of explanation
# This cell is only necessary because packages were installed while python was running.
import pkg_resources
import importlib
importlib.reload(pkg_resources)
Explanation: Restart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart).
End of explanation
import pathlib
import pprint
import tempfile
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import schema_utils
Explanation: Imports
End of explanation
raw_data = [
{'x': 1, 'y': 1, 's': 'hello'},
{'x': 2, 'y': 2, 's': 'world'},
{'x': 3, 'y': 3, 's': 'hello'}
]
raw_data_metadata = dataset_metadata.DatasetMetadata(
schema_utils.schema_from_feature_spec({
'y': tf.io.FixedLenFeature([], tf.float32),
'x': tf.io.FixedLenFeature([], tf.float32),
's': tf.io.FixedLenFeature([], tf.string),
}))
Explanation: Data: Create some dummy data
You'll create some simple dummy data for your simple example:
raw_data is the initial raw data that you're going to preprocess
raw_data_metadata contains the schema that tells us the types of each of the columns in raw_data. In this case, it's very simple.
End of explanation
def preprocessing_fn(inputs):
Preprocess input columns into transformed columns.
x = inputs['x']
y = inputs['y']
s = inputs['s']
x_centered = x - tft.mean(x)
y_normalized = tft.scale_to_0_1(y)
s_integerized = tft.compute_and_apply_vocabulary(s)
x_centered_times_y_normalized = (x_centered * y_normalized)
return {
'x_centered': x_centered,
'y_normalized': y_normalized,
's_integerized': s_integerized,
'x_centered_times_y_normalized': x_centered_times_y_normalized,
}
Explanation: Transform: Create a preprocessing function
The preprocessing function is the most important concept of tf.Transform. A preprocessing function is where the transformation of the dataset really happens. It accepts and returns a dictionary of tensors, where a tensor means a <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Tensor'><code>Tensor</code></a> or <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/SparseTensor'><code>SparseTensor</code></a>. There are two main groups of API calls that typically form the heart of a preprocessing function:
TensorFlow Ops: Any function that accepts and returns tensors, which usually means TensorFlow ops. These add TensorFlow operations to the graph that transforms raw data into transformed data one feature vector at a time. These will run for every example, during both training and serving.
Tensorflow Transform Analyzers/Mappers: Any of the analyzers/mappers provided by tf.Transform. These also accept and return tensors, and typically contain a combination of Tensorflow ops and Beam computation, but unlike TensorFlow ops they only run in the Beam pipeline during analysis requiring a full pass over the entire training dataset. The Beam computation runs only once, (prior to training, during analysis), and typically makes a full pass over the entire training dataset. They create tf.constant tensors, which are added to your graph. For example, tft.min computes the minimum of a tensor over the training dataset.
Caution: When you apply your preprocessing function to serving inferences, the constants that were created by analyzers during training do not change. If your data has trend or seasonality components, plan accordingly.
Note: The preprocessing_fn is not directly callable. This means that
calling preprocessing_fn(raw_data) will not work. Instead, it must
be passed to the Transform Beam API as shown in the following cells.
End of explanation
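As an illustrative aside (our sketch, not used elsewhere in this notebook): a full-pass analyzer and a per-example op often come packaged together. For instance, z-scoring a column can be written from the low-level pieces or, roughly equivalently, with the built-in mapper:
def zscore_preprocessing_fn(inputs):
    x = inputs['x']
    return {
        # analyzers (mean and variance over the full dataset) combined with per-example ops
        'x_by_hand': (x - tft.mean(x)) / tf.sqrt(tft.var(x)),
        # built-in mapper that performs the same kind of scaling
        'x_built_in': tft.scale_to_z_score(x),
    }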
def main(output_dir):
# Ignore the warnings
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
transformed_dataset, transform_fn = ( # pylint: disable=unused-variable
(raw_data, raw_data_metadata) | tft_beam.AnalyzeAndTransformDataset(
preprocessing_fn))
transformed_data, transformed_metadata = transformed_dataset # pylint: disable=unused-variable
# TODO 1: Save the transform_fn to the output_dir
_ = (
transform_fn
| 'WriteTransformFn' >> tft_beam.WriteTransformFn(output_dir))
return transformed_data, transformed_metadata
output_dir = pathlib.Path(tempfile.mkdtemp())
transformed_data, transformed_metadata = main(str(output_dir))
print('\nRaw data:\n{}\n'.format(pprint.pformat(raw_data)))
print('Transformed data:\n{}'.format(pprint.pformat(transformed_data)))
Explanation: Syntax
You're almost ready to put everything together and use <a target='_blank' href='https://beam.apache.org/'>Apache Beam</a> to run it.
Apache Beam uses a <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/#applying-transforms'>special syntax to define and invoke transforms</a>. For example, in this line:
result = pass_this | 'name this step' >> to_this_call
The method to_this_call is being invoked and passed the object called pass_this, and <a target='_blank' href='https://stackoverflow.com/questions/50519662/what-does-the-redirection-mean-in-apache-beam-python'>this operation will be referred to as name this step in a stack trace</a>. The result of the call to to_this_call is returned in result. You will often see stages of a pipeline chained together like this:
result = apache_beam.Pipeline() | 'first step' >> do_this_first() | 'second step' >> do_this_last()
and since that started with a new pipeline, you can continue like this:
next_result = result | 'doing more stuff' >> another_function()
Putting it all together
Now you're ready to transform your data. You'll use Apache Beam with a direct runner, and supply three inputs:
raw_data - The raw input data that you created above
raw_data_metadata - The schema for the raw data
preprocessing_fn - The function that you created to do your transformation
End of explanation
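A tiny, standalone illustration of that pipe syntax (an added sketch, unrelated to the preprocessing pipeline above; it only assumes apache_beam is importable, which it is wherever tf.Transform runs):
import apache_beam as beam

with beam.Pipeline() as p:
    _ = (
        p
        | 'create numbers' >> beam.Create([1, 2, 3])
        | 'square them' >> beam.Map(lambda x: x * x)
        | 'print each' >> beam.Map(print))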
!ls -l {output_dir}
Explanation: Is this the right answer?
Previously, you used tf.Transform to do this:
x_centered = x - tft.mean(x)
y_normalized = tft.scale_to_0_1(y)
s_integerized = tft.compute_and_apply_vocabulary(s)
x_centered_times_y_normalized = (x_centered * y_normalized)
x_centered - With input of [1, 2, 3] the mean of x is 2, and you subtract it from x to center your x values at 0. So your result of [-1.0, 0.0, 1.0] is correct.
y_normalized - You wanted to scale your y values between 0 and 1. Your input was [1, 2, 3] so your result of [0.0, 0.5, 1.0] is correct.
s_integerized - You wanted to map your strings to indexes in a vocabulary, and there were only 2 words in your vocabulary ("hello" and "world"). So with input of ["hello", "world", "hello"] your result of [0, 1, 0] is correct. Since "hello" occurs most frequently in this data, it will be the first entry in the vocabulary.
x_centered_times_y_normalized - You wanted to create a new feature by crossing x_centered and y_normalized using multiplication. Note that this multiplies the results, not the original values, and your new result of [-0.0, 0.0, 1.0] is correct.
Use the resulting transform_fn directory
End of explanation
# TODO 2: Load a SavedModel from export_dir
loaded = tf.saved_model.load(str(output_dir/'transform_fn'))
loaded.signatures['serving_default']
Explanation: The transform_fn/ directory contains a tf.saved_model with all the constants from the tensorflow-transform analysis results built into the graph.
It is possible to load this directly with tf.saved_model.load, but this is not easy to use:
End of explanation
tf_transform_output = tft.TFTransformOutput(output_dir)
tft_layer = tf_transform_output.transform_features_layer()
tft_layer
Explanation: A better approach is to load it using tft.TFTransformOutput. The TFTransformOutput.transform_features_layer method returns a tft.TransformFeaturesLayer object that can be used to apply the transformation:
End of explanation
raw_data_batch = {
's': tf.constant([ex['s'] for ex in raw_data]),
'x': tf.constant([ex['x'] for ex in raw_data], dtype=tf.float32),
'y': tf.constant([ex['y'] for ex in raw_data], dtype=tf.float32),
}
Explanation: This tft.TransformFeaturesLayer expects a dictionary of batched features. So create a Dict[str, tf.Tensor] from the List[Dict[str, Any]] in raw_data:
End of explanation
transformed_batch = tft_layer(raw_data_batch)
{key: value.numpy() for key, value in transformed_batch.items()}
Explanation: You can use the tft.TransformFeaturesLayer on its own:
End of explanation
class StackDict(tf.keras.layers.Layer):
def call(self, inputs):
values = [
tf.cast(v, tf.float32)
for k,v in sorted(inputs.items(), key=lambda kv: kv[0])]
return tf.stack(values, axis=1)
class TrainedModel(tf.keras.Model):
def __init__(self):
super().__init__(self)
self.concat = StackDict()
self.body = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10),
])
def call(self, inputs, training=None):
x = self.concat(inputs)
return self.body(x, training)
trained_model = TrainedModel()
Explanation: Export
A more typical use case would use tf.Transform to apply the transformation to the training and evaluation datasets (see the next tutorial for an example). Then, after training, before exporting the model attach the tft.TransformFeaturesLayer as the first layer so that you can export it as part of your tf.saved_model. For a concrete example, keep reading.
An example training model
Below is a model that:
takes the transformed batch,
stacks them all together into a simple (batch, features) matrix,
runs them through a few dense layers, and
produces 10 linear outputs.
In a real use case you would apply a one-hot to the s_integerized feature.
You could train this model on a dataset transformed by tf.Transform:
End of explanation
trained_model_output = trained_model(transformed_batch)
trained_model_output.shape
Explanation: Imagine you trained the model.
trained_model.compile(loss=..., optimizer='adam')
trained_model.fit(...)
This model runs on the transformed inputs
End of explanation
class ExportModel(tf.Module):
def __init__(self, trained_model, input_transform):
self.trained_model = trained_model
self.input_transform = input_transform
@tf.function
def __call__(self, inputs, training=None):
x = self.input_transform(inputs)
return self.trained_model(x)
# TODO 3: Export the model
export_model = ExportModel(trained_model=trained_model,
input_transform=tft_layer)
Explanation: An example export wrapper
Imagine you've trained the above model and want to export it.
You'll want to include the transform function in the exported model:
End of explanation
export_model_output = export_model(raw_data_batch)
export_model_output.shape
tf.reduce_max(abs(export_model_output - trained_model_output)).numpy()
Explanation: This combined model works on the raw data, and produces exactly the same results as calling the trained model directly:
End of explanation
import tempfile
model_dir = tempfile.mkdtemp(suffix='tft')
tf.saved_model.save(export_model, model_dir)
reloaded = tf.saved_model.load(model_dir)
reloaded_model_output = reloaded(raw_data_batch)
reloaded_model_output.shape
tf.reduce_max(abs(export_model_output - reloaded_model_output)).numpy()
Explanation: This export_model includes the tft.TransformFeaturesLayer and is entirely self-contained. You can save it and restore it in another environment and still get exactly the same result:
End of explanation |
6,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
So 1.0.0 doesn't actually convincingly perform better than 0.2.1 (RF), so I'm checking whether more hidden layers change anything
Step1: DO NOT FORGET TO DROP ISSUE_D AFTER PREPPING
Step2: Until I figure out a good imputation method (e.g. bayes PCA), just drop columns with null still
Step3: instantiate network
Step4: get the weights and biases of the nn into np since at this size np is faster (correction, pytorch was faster)
Step5: check that they output the same and speedtest (pytorch was faster)
Step6: Examine performance on test set
Step7: Making model info and saving it
Step8: Examine scores distributions | Python Code:
import os
import numpy as np
import pandas as pd
import data_science.lendingclub.dataprep_and_modeling.modeling_utils.data_prep_new as data_prep
import dir_constants as dc
from sklearn.externals import joblib
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import time
from sklearn.metrics import mean_squared_error
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
%matplotlib notebook
# from IPython.display import HTML
# HTML('''<script>
# code_show_err=false;
# function code_toggle_err() {
# if (code_show_err){
# $('div.output_stderr').hide();
# } else {
# $('div.output_stderr').show();
# }
# code_show_err = !code_show_err
# }
# $( document ).ready(code_toggle_err);
# </script>
# To toggle on/off output_stderr, click <a href="javascript:code_toggle_err()">here</a>.''')
Explanation: So 1.0.0 doesn't actually convincingly perform better than 0.2.1 (RF), so I'm checking whether more hidden layers change anything
End of explanation
platform = 'lendingclub'
use_cuda = True
dtype = torch.cuda.FloatTensor
save_path = "model_dump/nn_1_0_1/"
if not os.path.isdir(save_path):
os.mkdir(save_path)
store = pd.HDFStore(
dc.home_path+'/justin_tinkering/data_science/lendingclub/{0}_store.h5'.
format(platform),
append=True)
loan_info = store['train_filtered_columns']
columns = loan_info.columns.values
# checking dtypes to see which columns need one hotting, and which need null or not
to_one_hot = []
to_null_or_not = []
do_nothing = []
for col in columns:
if loan_info[col].dtypes == np.dtype('O'):
# print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
to_one_hot.append(col)
elif len(loan_info[col].isnull().value_counts(dropna=False)) > 1:
# print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
to_null_or_not.append(col)
else:
# print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict())
do_nothing.append(col)
Explanation: DO NOT FORGET TO DROP ISSUE_D AFTER PREPPING
End of explanation
train_X, train_y, mean_series, std_dev_series = data_prep.process_data_train(
loan_info)
class TrainDataset(Dataset):
def __init__(self, data, targets):
self.data = data
self.targets = targets
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data[idx,:], self.targets[idx,:]
def get_loader(dataset, use_cuda, batch_size=6400, shuffle=True):
return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, pin_memory=use_cuda)
train_dataset = TrainDataset(train_X.values, train_y.values)
train_loader = get_loader(train_dataset, use_cuda)
Explanation: Until I figure out a good imputation method (e.g. bayes PCA), just drop columns with null still
End of explanation
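For reference (an illustration only; the actual handling happens inside the data_prep helpers), the stopgap is a one-liner in pandas:
loan_info_no_nulls = loan_info.dropna(axis=1)  # drop every column that still contains a null value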
# %%writefile model_dump/nn_1_0_1/net_class.py
# import torch
# import torch.nn as nn
# import torch.nn.functional as F
# from torch.autograd import Variable
# import numpy as np
# dtype = torch.FloatTensor
# nn_input_dim = 223
# hly1_n = 300
# hly2_n = 400
# hly3_n = 300
# hly4_n = 200
# hly5_n = 100
# hly6_n = 100
# hly7_n = 100
# # hly8_n = 100
# nn_output_dim = 1
# class Net(nn.Module):
# def __init__(self):
# super(Net, self).__init__()
# self.hl1 = nn.Linear(nn_input_dim, hly1_n)
# self.hl2 = nn.Linear(hly1_n, hly2_n)
# self.hl3 = nn.Linear(hly2_n, hly3_n)
# self.hl4 = nn.Linear(hly3_n, hly4_n)
# self.hl5 = nn.Linear(hly4_n, hly5_n)
# self.hl6 = nn.Linear(hly5_n, hly6_n)
# self.hl7 = nn.Linear(hly6_n, hly7_n)
# # self.hl8 = nn.Linear(hly7_n, hly8_n)
# self.out = nn.Linear(hly7_n, nn_output_dim)
# def forward(self, x):
# x = F.leaky_relu(self.hl1(x))
# x = F.leaky_relu(self.hl2(x))
# x = F.leaky_relu(self.hl3(x))
# x = F.leaky_relu(self.hl4(x))
# x = F.leaky_relu(self.hl5(x))
# x = F.leaky_relu(self.hl6(x))
# x = F.leaky_relu(self.hl7(x))
# # x = F.leaky_relu(self.hl8(x))
# x = self.out(x)
# return x
# def torch_version(df_inputs, net):
# input = Variable(torch.from_numpy(df_inputs.values)).type(dtype)
# return np.round(net(input).data.cpu().numpy(),5).ravel()
nn_input_dim = 223
hly1_n = 300
hly2_n = 400
hly3_n = 300
hly4_n = 200
hly5_n = 100
hly6_n = 100
hly7_n = 100
# hly8_n = 100
nn_output_dim = 1
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.hl1 = nn.Linear(nn_input_dim, hly1_n)
self.hl2 = nn.Linear(hly1_n, hly2_n)
self.hl3 = nn.Linear(hly2_n, hly3_n)
self.hl4 = nn.Linear(hly3_n, hly4_n)
self.hl5 = nn.Linear(hly4_n, hly5_n)
self.hl6 = nn.Linear(hly5_n, hly6_n)
self.hl7 = nn.Linear(hly6_n, hly7_n)
# self.hl8 = nn.Linear(hly7_n, hly8_n)
self.out = nn.Linear(hly7_n, nn_output_dim)
def forward(self, x):
x = F.leaky_relu(self.hl1(x))
x = F.leaky_relu(self.hl2(x))
x = F.leaky_relu(self.hl3(x))
x = F.leaky_relu(self.hl4(x))
x = F.leaky_relu(self.hl5(x))
x = F.leaky_relu(self.hl6(x))
x = F.leaky_relu(self.hl7(x))
# x = F.leaky_relu(self.hl8(x))
x = self.out(x)
return x
net = Net()
params = list(net.parameters())
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.0001, weight_decay=0.00135)
# scheduler = torch.optim.lr
if use_cuda:
net.cuda()
criterion.cuda()
n_epochs = 600
epoch_list = []
loss_list = []
fig = plt.gcf()
fig.show()
fig.canvas.draw()
for epoch in range(n_epochs):
running_loss = 0
for i, data in enumerate(train_loader):
inputs, targets = data
# wrap in Variable
inputs, targets = Variable(inputs.cuda()).type(dtype), Variable(targets.cuda()).type(dtype)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(inputs)
loss = criterion(output, targets)
loss.backward()
optimizer.step()
running_loss += loss.data[0]
try:
last_loss = loss_list[-1]
except:
last_loss = 9999999999999
if running_loss > (2*last_loss):
pass
else:
epoch_list.append(epoch)
loss_list.append(running_loss)
    if (epoch + 1) % 100 == 0:
        # periodically decay the learning rate
optimizer.param_groups[0]['lr'] *= .97
if epoch % 1 == 0:
plt.plot(epoch_list, loss_list)
plt.title("Epoch: {0}".format(epoch))
fig.canvas.draw()
if (epoch >= 99) & ((epoch+1) % 20 == 0):
torch.save(net.state_dict(), save_path+'1.0.1_e{0}'.format(epoch+1))
Explanation: instantiate network
End of explanation
# np_hl1_weight = net.hl1.weight.data.numpy()
# np_hl1_bias = net.hl1.bias.data.numpy()
# np_hl2_weight = net.hl2.weight.data.numpy()
# np_hl2_bias = net.hl2.bias.data.numpy()
# np_out_weight = net.out.weight.data.numpy()
# np_out_bias = net.out.bias.data.numpy()
Explanation: get the weights and biases of the nn into np since at this size np is faster (correction, pytorch was faster)
End of explanation
# def np_version(df_inputs):
# np_hl1_z = df_inputs.dot(np_hl1_weight.T) + np_hl1_bias
# np_hl1_a = np.maximum(.01*np_hl1_z, np_hl1_z)
# np_hl2_z = np_hl1_a.dot(np_hl2_weight.T) + np_hl2_bias
# np_hl2_a = np.maximum(.01*np_hl2_z, np_hl2_z)
# np_out = np_hl2_a.dot(np_out_weight.T) + np_out_bias
# return np_out
class FeedDataset(Dataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data.iloc[idx,:].values
def torch_version(df_inputs, net):
feed_dataset = FeedDataset(df_inputs)
feed_loader = get_loader(feed_dataset, batch_size=6400, shuffle=False, use_cuda = True)
all_results = []
for i, data in enumerate(feed_loader):
# wrap in Variable
inputs = data
inputs = Variable(inputs.cuda()).type(dtype)
# inputs = Variable(inputs.cuda()).type(dtype)
outputs = np.round(net(inputs).data.cpu().numpy(),5).ravel().tolist()
all_results += outputs
return all_results
#%timeit np_version(standardized)
# %timeit torch_version(train_X, net)
Explanation: check that they output the same and speedtest (pytorch was faster)
End of explanation
store.open()
test = store['test_filtered_columns']
train = store['train_filtered_columns']
loan_npv_rois = store['loan_npv_rois']
default_series = test['target_strict']
results = store['results']
store.close()
train_X, train_y = data_prep.process_data_test(train)
train_y = train_y['npv_roi_10'].values
test_X, test_y = data_prep.process_data_test(test)
test_y = test_y['npv_roi_10'].values
# regr = joblib.load('model_dump/model_0.2.1.pkl')
regr_version = '1.0.1'
test_yhat = torch_version(test_X, net)
train_yhat = torch_version(train_X, net)
test_mse = mean_squared_error(test_yhat,test_y)
train_mse = mean_squared_error(train_yhat,train_y)
test_mse
train_mse
def eval_models_net(trials, port_size, available_loans, net, regr_version, test, loan_npv_rois,
default_series):
results = {}
pct_default = {}
test_copy = test.copy()
for trial in tqdm_notebook(np.arange(trials)):
loan_ids = np.random.choice(
test_copy.index.values, available_loans, replace=False)
loans_to_pick_from = test_copy.loc[loan_ids, :]
scores = torch_version(loans_to_pick_from, net)
scores_series = pd.Series(dict(zip(loan_ids, scores)))
scores_series.sort_values(ascending=False, inplace=True)
picks = scores_series[:900].index.values
results[trial] = loan_npv_rois.loc[picks, :].mean().to_dict()
pct_default[trial] = (default_series.loc[picks].sum()) / port_size
pct_default_series = pd.Series(pct_default)
results_df = pd.DataFrame(results).T
results_df['pct_def'] = pct_default_series
return results_df
# as per done with baseline models, say 3000 loans available
# , pick 900 of them
trials = 20000
port_size = 900
available_loans = 3000
model_results = eval_models_net(trials, port_size, available_loans, net, regr_version, test_X, loan_npv_rois, default_series)
multi_index = []
for col in model_results.columns.values:
multi_index.append((str(col),regr_version))
append_results = model_results.copy()
append_results.columns = pd.MultiIndex.from_tuples(multi_index, names = ['discount_rate', 'model'])
multi_index_results = []
for col in results.columns.values:
multi_index_results.append((str(col[0]), col[1]))
results.columns = pd.MultiIndex.from_tuples(multi_index_results, names = ['discount_rate', 'model'])
full_results = results.join(append_results)
full_results.sort_index(axis=1, inplace=True)
full_results.describe()
store.open()
store['results'] = full_results
model_info = store['model_info']
store.close()
Explanation: Examine performance on test set
End of explanation
# dump the model
# joblib.dump(regr, 'model_dump/model_0.2.1.pkl')
joblib.dump((mean_series, std_dev_series), 'model_dump/mean_stddev.pkl')
test_mse
train_mse
now = time.strftime("%Y_%m_%d_%Hh_%Mm_%Ss")
# info to stick in detailed dataframe describing each model
model_info_dict = {'model_version': regr_version,
'target': 'npv_roi_10',
'weights': 'None',
'algo_model': 'feedforward NN',
'hyperparams': "nn_input_dim = 223, hly1_n = 300, hly2_n = 400, hly3_n = 300, hly4_n = 200, hly5_n = 100, hly6_n = 100, hly7_n = 100, nn_output_dim = 1, criterion = nn.MSELoss(),optimizer = optim.Adam(net.parameters(), lr=0.0001, weight_decay=0.00135), if epoch+1 % 100 == 0: optimizer.param_groups[0]['lr'] *= .97",
'cost_func': 'criterion = nn.MSELoss(),',
'useful_notes': 'test_mse: 0.0642635, train_mse: 0.061784, epoch_600',
'date': now}
model_info_df = pd.DataFrame(model_info_dict, index = [regr_version])
model_info.ix[regr_version,:] = model_info_df.values
model_info.sort_index(inplace=True)
model_info
store.open()
store.append(
'model_info',
model_info,
data_columns=True,
index=True,
append=False,
)
store.close()
Explanation: Making model info and saving it
End of explanation
train_preds = pd.Series(train_yhat)
test_preds = pd.Series(test_yhat)
train_preds.hist(bins=50)
test_preds.hist(bins=50)
train_preds.describe()
test_preds.describe()
train_preds.value_counts()
test_preds.value_counts()
# try:
# results = results.join(append_results)
# except ValueError:
# results.loc[:, (slice(None), slice('1.0.0','1.0.0'))] = append_results
# results.sort_index(axis=1, inplace = True)
Explanation: Examine scores distributions
End of explanation |
6,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enter State Farm
Step1: Create sample
The following assumes you've already created your validation set - remember that the training and validation set should contain different drivers, as mentioned on the Kaggle competition page.
Step2: Create batches
Step3: Basic models
Linear model
First, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.
Step4: As you can see below, this training is going nowhere...
Step5: Let's first check the number of parameters to see that there's enough parameters to find some useful relationships
Step6: Over 1.5 million parameters - that should be enough. Incidentally, it's worth checking you understand why this is the number of parameters in this layer
Step7: Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too high. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and there is likely to be where we would end up with a high learning rate. So let's check
Step8: Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate
Step9: Great - we found our way out of that hole... Now we can increase the learning rate and see where we can get to.
Step10: We're stabilizing at validation accuracy of 0.39. Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results
Step11: Yup, pretty consistent - if we see improvements of 3% or more, it's probably not random, based on the above samples.
L2 regularization
The previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting in our model by adding l2 regularization (i.e. add the sum of squares of the weights to our loss function)
Step12: Looks like we can get a bit over 50% accuracy this way. This will be a good benchmark for our future models - if we can't beat 50%, then we're not even beating a linear model trained on a sample, so we'll know that's not a good approach.
Single hidden layer
The next simplest model is to add a single hidden layer.
Step13: Not looking very encouraging... which isn't surprising since we know that CNNs are a much better choice for computer vision problems. So we'll try one.
Single conv layer
2 conv layers with max pooling followed by a simple dense network is a good simple CNN to start with
Step14: The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get a reasonable result.
So, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation.
Data augmentation
To find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best result we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately.
Width shift
Step15: Height shift
Step16: Random shear angles (max in radians) -
Step17: Rotation
Step18: Channel shift
Step19: And finally, putting it all together!
Step20: At first glance, this isn't looking encouraging, since the validation set is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs, before we make a decisions.
Step21: Lucky we tried that - we starting to make progress! Let's keep going. | Python Code:
from theano.sandbox import cuda
cuda.use('gpu1')
%matplotlib inline
from __future__ import print_function, division
#path = "data/state/"
path = "data/state/sample/"
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
batch_size=64
Explanation: Enter State Farm
End of explanation
%cd data/state
%cd train
%mkdir ../sample
%mkdir ../sample/train
%mkdir ../sample/valid
for d in glob('c?'):
os.mkdir('../sample/train/'+d)
os.mkdir('../sample/valid/'+d)
from shutil import copyfile
g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
for i in range(1500): copyfile(shuf[i], '../sample/train/' + shuf[i])
%cd ../valid
g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
for i in range(1000): copyfile(shuf[i], '../sample/valid/' + shuf[i])
%cd ../../..
%mkdir data/state/results
%mkdir data/state/sample/test
Explanation: Create sample
The following assumes you've already created your validation set - remember that the training and validation set should contain different drivers, as mentioned on the Kaggle competition page.
End of explanation
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames,
test_filename) = get_classes(path)
Explanation: Create batches
End of explanation
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax')
])
Explanation: Basic models
Linear model
First, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.
End of explanation
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample, verbose=2)
Explanation: As you can see below, this training is going nowhere...
End of explanation
model.summary()
Explanation: Let's first check the number of parameters to see that there are enough parameters to find some useful relationships:
End of explanation
10*3*224*224
Explanation: Over 1.5 million parameters - that should be enough. Incidentally, it's worth checking you understand why this is the number of parameters in this layer:
End of explanation
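As a rough worked check of that count: Flatten turns each 3x224x224 image into 150,528 features, so the Dense(10) layer alone has 150528*10 weights plus 10 biases; the BatchNormalization layer only adds a handful of per-channel parameters on top of that, and the exact total comes from model.summary().
n_features = 3*224*224   # size of the flattened input
n_features*10 + 10       # Dense(10) weights + biases = 1,505,290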
np.round(model.predict_generator(batches, batches.n)[:10],2)
Explanation: Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too high. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and that is likely where we would end up with a high learning rate. So let's check:
End of explanation
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample, verbose=2)
Explanation: Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate:
End of explanation
model.optimizer.lr=0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: Great - we found our way out of that hole... Now we can increase the learning rate and see where we can get to.
End of explanation
rnd_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=True)
val_res = [model.evaluate_generator(rnd_batches, rnd_batches.nb_sample) for i in range(10)]
np.round(val_res, 2)
Explanation: We're stabilizing at validation accuracy of 0.39. Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results:
End of explanation
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax', W_regularizer=l2(0.01))
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample, verbose=2)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample, verbose=2)
Explanation: Yup, pretty consistent - if we see improvements of 3% or more, it's probably not random, based on the above samples.
L2 regularization
The previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting in our model by adding l2 regularization (i.e. add the sum of squares of the weights to our loss function):
End of explanation
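A minimal numpy sketch of what the W_regularizer=l2(0.01) argument contributes (illustrative only; Keras applies this penalty internally when computing the loss):
import numpy as np
def l2_penalty(weights, weight_decay=0.01):
    # weight decay times the sum of squared weights, added on top of the cross-entropy loss
    return weight_decay * np.sum(np.square(weights))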
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(100, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample,verbose=2)
model.optimizer.lr = 0.01
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample,verbose=2)
Explanation: Looks like we can get a bit over 50% accuracy this way. This will be a good benchmark for our future models - if we can't beat 50%, then we're not even beating a linear model trained on a sample, so we'll know that's not a good approach.
Single hidden layer
The next simplest model is to add a single hidden layer.
End of explanation
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample,verbose=2)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample, verbose=2)
return model
conv1(batches)
Explanation: Not looking very encouraging... which isn't surprising since we know that CNNs are a much better choice for computer vision problems. So we'll try one.
Single conv layer
2 conv layers with max pooling followed by a simple dense network is a good simple CNN to start with:
End of explanation
gen_t = image.ImageDataGenerator(width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get a reasonable result.
So, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation.
Data augmentation
To find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best result we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately.
Width shift: move the image left and right -
End of explanation
gen_t = image.ImageDataGenerator(height_shift_range=0.05)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Height shift: move the image up and down -
End of explanation
gen_t = image.ImageDataGenerator(shear_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Random shear angles (max in radians) -
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Rotation: max in degrees -
End of explanation
gen_t = image.ImageDataGenerator(channel_shift_range=20)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Channel shift: randomly changing the R,G,B colors -
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: And finally, putting it all together!
End of explanation
model.optimizer.lr = 0.0001
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample, verbose=2)
Explanation: At first glance, this isn't looking encouraging, since the validation set is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs, before we make a decision.
End of explanation
model.fit_generator(batches, batches.nb_sample, nb_epoch=25, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample, verbose=2)
Explanation: Lucky we tried that - we're starting to make progress! Let's keep going.
End of explanation |
6,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vidic, Fajfar and Fischinger (1994)
This procedure, proposed by Vidic, Fajfar and Fischinger (1994), aims to determine the displacements from an inelastic design spectra for systems with a given ductility factor. The inelastic displacement spectra is determined by means of applying a reduction factor, which depends on the natural period of the system, its ductility factor, the hysteretic behaviour, the damping, and the frequency content of the ground motion.
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are
Step4: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Plot vulnerability function
Step10: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
from rmtk.vulnerability.derivation_fragility.equivalent_linearization.vidic_etal_1994 import vidic_etal_1994
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: Vidic, Fajfar and Fischinger (1994)
This procedure, proposed by Vidic, Fajfar and Fischinger (1994), aims to determine the displacements from an inelastic design spectrum for systems with a given ductility factor. The inelastic displacement spectrum is determined by applying a reduction factor, which depends on the natural period of the system, its ductility factor, the hysteretic behaviour, the damping, and the frequency content of the ground motion.
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
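For reference, the period-dependent reduction factor of Vidic, Fajfar and Fischinger (1994) is commonly quoted in the bilinear form below; the coefficients c1, cR, c2 and cT depend on the hysteresis model and the damping, so the values used by the implementation should be taken from the original paper:
\begin{align}
R_\mu &= c_1 (\mu - 1)^{c_R} \frac{T}{T_0} + 1, \quad T \le T_0 \\
R_\mu &= c_1 (\mu - 1)^{c_R} + 1, \quad T > T_0 \\
T_0 &= c_2 \, \mu^{c_T} \, T_1
\end{align}
where $\mu$ is the ductility factor and $T_1$ is the characteristic period of the ground motion.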
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = "../../../../../../rmtk_data/accelerograms"
gmrs = utils.read_gmrs(gmrs_folder)
minT, maxT = 0.1, 2.0
utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift, the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements; otherwise, a linear relationship is assumed.
End of explanation
damping_model = "mass"
damping_ratio = 0.05
hysteresis_model = 'Q'
PDM, Sds = vidic_etal_1994.calculate_fragility(capacity_curves, gmrs,
damage_model, damping_ratio,
hysteresis_model, damping_model)
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_model: This parameter defines the type of damping model to be used in the analysis. The valid options are "mass" and "stiffness".
2. damping_ratio: This parameter defines the damping ratio for the structure.
3. hysteresis_model: The valid options are 'Q' or "bilinear".
End of explanation
IMT = "Sa"
period = 2.0
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
period: this parameter defines the time period of the fundamental mode of vibration of the structure.
regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
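A small illustrative calculation of this conversion for a single intensity measure level, using made-up numbers rather than the RMTK inputs: the loss ratio is the sum over damage states of the probability of being in that state times its damage ratio.
# hypothetical fractions of buildings in each damage state at one IML
damage_state_probs = [0.30, 0.25, 0.15, 0.05]
# hypothetical damage ratios from the consequence model
damage_ratios = [0.10, 0.30, 0.60, 1.00]
loss_ratio = sum(p*dr for p, dr in zip(damage_state_probs, damage_ratios))
loss_ratio  # 0.245 for this made-up example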
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the function. Currently, the formats supported are "csv" and "nrml".
End of explanation |
6,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using python
Python is an interpreted language. This means that there is a "python program" that reads your input and excecutes it. If you open your ipython interpreter, you'll see the following message (or something very similar)
Step1: I mean that the command is written in the ipython command prompt (or in the ipython notebook if you prefer to use that). There are some lines above the command prompt. These are information on the current python and ipython version and tips on how to get help. But first, let us look at some of the basic operations, addition, subtraction, multiplication, division and exponentiation. In python, these would have the symbols +, -, *, / and **. If we would like to calculate
\begin{align}
\frac{3.6\cdot 5 + (3 - 2)^{3}}{2}
\end{align}
we would write
Step2: These are the most basic operation and there are several more. We wont bother about them in this text though.
Variables
Python, as most programming languages relies on the fact that you can create something called a variable in it. This is similar to what in mathematics is called a variable and a constant. For example, we can define a variable with the name $a$ in python and let it have the value $5$ by typing
Step3: Python now knows that there is such a thing as $a$ which you can use to do further operations. For example, instead of writing
Step4: We can write
Step5: and we get the same result. This might seem as it is not that useful, but we can use it the same way as we use constants and variables in mathematics to shorten what we have to write. For example, we may want to calculate averages of averages.
Step6: Without variables this would get messy and very, very soon extremely difficult.
Objects
Just as there are different kind of objects in mathematics such as integers, reals and matrices, there are different kind of objects in python. The basic ones are integers, floats, lists, tuples and strings which will be introduced here.
Integers
Integers behave very much like the integers in mathematics. Any operation between two integers will result in an integer, for example $5 + 2$ will result in $7$, which is an integer. Notice though that division might not always lead to a new integer, for example $\frac{5}{2}$ is not an integer. In python operations between integers always results in a new integer. Because of this, division between integers in python will drop the remainder.
Step7: Often, this is not what you wanted. This leads us to the next object.
Floats
A float, or a floating point number, works very much like a real number in mathematics. The set of floats is closed under all the operations we have introduced just as the reals are. To declare a float we simply have to add a dot to the number we wish to have, like this
Step8: and now, $a$ is a float instead of am integer. If we do the same operation as before, but with floats we get
Step9: Now it seems as if we only should use floats and not use integers at all because of this property. But as we will se soon the integers will play a central role in loops.
Lists and tuples
Just as integers and floats are similar, so are lists and tuples. We will begin with lists. A list is an ordered collection of objects. The objects can be of any type, it can even contain itself! (But that is usually not very useful). A list is initiated with matching square brackets [ and ].
Step10: Because a list is ordered, the objects in the list can be accessed by stating at which place they are. To access an object in the list we use the square brackets again. In python (and most otehr programming languages) counts from $0$, which means that the first object in the list has index $0$, the second has index $1$ and so on. So to access an object in a list we simply type
Step11: You can also access parts of a list like this
Step12: The lengt hof a list is not fixed in python and objects can be added and removed. To do this we will use append and del.
Step13: Tuples are initialized similarly as lists and they can contain most objects the list can contain. The main difference is that the tuple does not support item assignment. What this means is that when the tuple is created its objects can not change later. Tuples are initiated with matching parentheses ( and ).
Step15: Because tuples does not support item assignment, you cannot use append or del with it. Tuples ar good to use if you want to make sure that certain values stay unchanged in a program, for example a group of physical constants.
Strings
Strings are lines of text or symbols and are initiated with doubble or single quotes. If you wish for the string to span several lines you can use triple double quotes
Step16: Here, you saw the first occurrence of the print statement. It's functionality is much greater than prettifying string output as it can print text to the command or terminal window. One omportant functionality of the string is the format function. This function lets us create a string without knowing what it will contain beforehand.
Step17: It uses curly brackets { and } as placeholders for the objects in the format part. There are many other things you can do with strings, to find out use the question mark, ?, in the interpreter after the variable you want more information about. Notice that this does not work in the regular python interpreter, you have to use ipython. You can also use the help function to get help about functions and variables. It works both in the regular interpreter and in ipython. | Python Code:
1 + 1
Explanation: Using python
Python is an interpreted language. This means that there is a "python program" that reads your input and executes it. If you open your ipython interpreter, you'll see the following message (or something very similar):
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
Type "copyright", "credits" or "license" for more information.
IPython 3.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:
where In [1]: is a command prompt. Whenever I write
End of explanation
(3.6*5 + (3 - 2)**3)/2
Explanation: I mean that the command is written in the ipython command prompt (or in the ipython notebook if you prefer to use that). There are some lines above the command prompt. These are information on the current python and ipython version and tips on how to get help. But first, let us look at some of the basic operations, addition, subtraction, multiplication, division and exponentiation. In python, these would have the symbols +, -, *, / and **. If we would like to calculate
\begin{align}
\frac{3.6\cdot 5 + (3 - 2)^{3}}{2}
\end{align}
we would write
End of explanation
a = 5
Explanation: These are the most basic operation and there are several more. We wont bother about them in this text though.
Variables
Python, like most programming languages, relies on the fact that you can create something called a variable in it. This is similar to what in mathematics is called a variable and a constant. For example, we can define a variable with the name $a$ in python and let it have the value $5$ by typing
End of explanation
5 + 3
Explanation: Python now knows that there is such a thing as $a$ which you can use to do further operations. For example, instead of writing
End of explanation
a + 3
Explanation: We can write
End of explanation
b = (2.5 + 6.3)/2
c = (5.3 + 8.7)/2
(b + c)/2
Explanation: and we get the same result. This might seem as it is not that useful, but we can use it the same way as we use constants and variables in mathematics to shorten what we have to write. For example, we may want to calculate averages of averages.
End of explanation
a = 12
a / 5
Explanation: Without variables this would get messy and very, very soon extremely difficult.
Objects
Just as there are different kinds of objects in mathematics such as integers, reals and matrices, there are different kinds of objects in python. The basic ones are integers, floats, lists, tuples and strings, which will be introduced here.
Integers
Integers behave very much like the integers in mathematics. Any operation between two integers will result in an integer, for example $5 + 2$ will result in $7$, which is an integer. Notice though that division might not always lead to a new integer, for example $\frac{5}{2}$ is not an integer. In python, operations between integers always result in a new integer. Because of this, division between integers in python will drop the remainder.
End of explanation
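A quick check of the remainder-dropping behaviour (this is Python 2 semantics, matching the rest of this tutorial; in Python 3 the same result comes from the // operator):
7 / 2   # gives 3 in Python 2: the remainder is dropped
7 / 2.  # gives 3.5 once one of the operands is a float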
a = 12.
Explanation: Often, this is not what you wanted. This leads us to the next object.
Floats
A float, or a floating point number, works very much like a real number in mathematics. The set of floats is closed under all the operations we have introduced just as the reals are. To declare a float we simply have to add a dot to the number we wish to have, like this
End of explanation
a / 5.
Explanation: and now, $a$ is a float instead of an integer. If we do the same operation as before, but with floats, we get
End of explanation
a = [1, 3.5, [3, 5, []], 'this is a string']
a
Explanation: Now it seems as if we should only use floats and not use integers at all because of this property. But as we will see soon, the integers will play a central role in loops.
Lists and tuples
Just as integers and floats are similar, so are lists and tuples. We will begin with lists. A list is an ordered collection of objects. The objects can be of any type; a list can even contain itself! (But that is usually not very useful). A list is initiated with matching square brackets [ and ].
End of explanation
a[2] # Accessing the third element in the list. (This is a comment and is ignored by the interpreter.)
a[2][1] # We can access an object in a list which is itself in a list
a[1] = 13.2 # We can change values of the objects in the list
a[0] + a[1] + a[2][1]
Explanation: Because a list is ordered, the objects in the list can be accessed by stating at which place they are. To access an object in the list we use the square brackets again. In python (and most other programming languages) counting starts from $0$, which means that the first object in the list has index $0$, the second has index $1$ and so on. So to access an object in a list we simply type
End of explanation
a[0:2] # Return a list containing the first element up to but not including the third (index 2) element
a[0:2] + a[0:2] # You can put two lists together with addition.
Explanation: You can also access parts of a list like this:
End of explanation
a = [] # Creating an empty list
a
a.append('bleen')
a.append([2,4.1,'grue'])
a.append(4.3)
a
del a[-1] # We can also index from the end of the list. -1 indicates the last element
a
Explanation: The length of a list is not fixed in python and objects can be added and removed. To do this we will use append and del.
End of explanation
a = (2, 'e', (3.4, 6.8))
a
a[0]
a[-1][-1]
a[1] = 0
Explanation: Tuples are initialized similarly to lists and they can contain most objects the list can contain. The main difference is that the tuple does not support item assignment. What this means is that when the tuple is created, its objects cannot change later. Tuples are initiated with matching parentheses ( and ).
End of explanation
a = 'here is a string'
a
a = "Here's another" # Notice that we included a single quote in the string.
a
a = """
This string
spans
several lines.
"""
a # \n means new line. They can be manually included with \n.
print a # To see \n as an actual new line we need to use print a.
Explanation: Because tuples do not support item assignment, you cannot use append or del with them. Tuples are good to use if you want to make sure that certain values stay unchanged in a program, for example a group of physical constants.
Strings
Strings are lines of text or symbols and are initiated with double or single quotes. If you wish for the string to span several lines you can use triple double quotes.
End of explanation
a = [1,2,3,4,5]
s = "The sum of {} is {}".format(a, sum(a))
s
Explanation: Here, you saw the first occurrence of the print statement. Its functionality is much greater than prettifying string output, as it can print text to the command or terminal window. One important functionality of the string is the format function. This function lets us create a string without knowing what it will contain beforehand.
End of explanation
a?
help(sum)
Explanation: It uses curly brackets { and } as placeholders for the objects in the format part. There are many other things you can do with strings; to find out, use the question mark, ?, in the interpreter after the variable you want more information about. Notice that this does not work in the regular python interpreter, you have to use ipython. You can also use the help function to get help about functions and variables. It works both in the regular interpreter and in ipython.
End of explanation |
6,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Webscraping with Selenium
When the data that you want exists on a website with heavy JavaScript and requires interaction from the user, BeautifulSoup will not be enough. This is when you need a webdriver. One of the most popular webdrivers is Selenium. Selenium is commonly used in industry to automate testing of the user experience, but it can also interact with content to collect data that are difficult to get otherwise.
This lesson is a short introduction to the Selenium webdriver. It includes
Step1: Selenium actually uses our web browser, and since the JupyterHub doesn't come with Firefox, we'll download the binaries
Step2: We also need the webdriver for Firefox that allows Selenium to interact directly with the browser through the code we write. We can download the geckodriver for Firefox from the github page
Step3: 1. Launching the webdriver
Since we are in different environment and we can't use our regular graphical desktop, we need to tell Python to start a virutal display, onto which Selenium can project the Firefox web browser (though we won't actually see it).
Step4: Now we can initialize the Selenium web driver, giving it the path to the Firefox binary code and the driver
Step5: You can navigate Selenium to a URL by using the get method, exactly the same way we used the requests.get before
Step6: Cool, right? You can see Google in your browser now. Let's go look at some West Bengal State election results
Step7: Zilla Parishad
Similar to BeautifulSoup, Selenium has methods to find elements on a webpage. We can use the method find_element_by_name to find an element on the page by its name.
Step8: Now if we want to get the different options in this drop down, we can do the same. You'll notice that each name is associated with a unique value. Since we're getting multiple elements here, we'll use find_elements_by_tag_name
Step9: Now we'll make a dictionary associating each name with its value.
Step10: We can then select a district by using its name and our dictionary. First we'll make our own function using Selenium's Select, and then we'll call it on "Bankura".
Step11: You should have seen the dropdown menu select 'Bankura' by running the previous cell.
Panchayat Samity
We can do the same as we did above to find the different blocks.
Step12: Great! One dropdown menu to go.
Gram Panchayat
Step13: Once we selected the last dropdown menu parameter, the website automatically generate a table below. This table could not have been called up by a URL, as you can see that the URL in the browser did not change. This is why Selenium is so helpful.
3. Collecting generated data
Now that the table has been rendered, it exists as HTML in our page source. If we wanted to, we could send this to BeautifulSoup using the driver.page_source method to get the text. But we can also use Selenium's parsing methods.
First we'll identify it by its CSS selector, and then use the get_attribute method.
Step14: First we'll get all the rows of the table using the tr selector.
Step15: But the first row is the header so we don't want that.
Step16: Each cell in the row corresponds to the data we want.
Step17: Now it's just a matter of looping through the rows and getting the information we want from each one.
Step18: You'll notice that some of the information, such as total electors, is not supplied for each canddiate. This code will add that information for the candidates who don't have it.
Step19: 4. Exporting data to CSV
We can then loop through all the combinations of the dropdown menu we want, collect the information from the generated table, and append it to the data list. Once we're done, we can write it to a CSV. | Python Code:
from selenium import webdriver # powers the browser interaction
from selenium.webdriver.support.ui import Select # selects menu options
from pyvirtualdisplay import Display # for JHub environment
from bs4 import BeautifulSoup # to parse HTML
import csv # to write CSV
import pandas # to see CSV
Explanation: Webscraping with Selenium
When the data that you want exists on a website with heavy JavaScript and requires interaction from the user, BeautifulSoup will not be enough. This is when you need a webdriver. One of the most popular webdrivers is Selenium. Selenium is commonly used in industry to automate testing of the user experience, but it can also interact with content to collect data that are difficult to get otherwise.
This lesson is a short introduction to the Selenium webdriver. It includes:
Launching the webdriver
Navigating the browser
Collecting generated data
Exporting data to CSV
Let's first import the necessary Python libraries:
End of explanation
# download firefox binaries
!wget http://ftp.mozilla.org/pub/firefox/releases/54.0/linux-x86_64/en-US/firefox-54.0.tar.bz2
# untar binaries
!tar xvjf firefox-54.0.tar.bz2
Explanation: Selenium actually uses our web browser, and since the JupyterHub doesn't come with Firefox, we'll download the binaries:
End of explanation
# download geckodriver
!wget https://github.com/mozilla/geckodriver/releases/download/v0.17.0/geckodriver-v0.17.0-linux64.tar.gz
# untar geckodriver
!tar xzvf geckodriver-v0.17.0-linux64.tar.gz
Explanation: We also need the webdriver for Firefox that allows Selenium to interact directly with the browser through the code we write. We can download the geckodriver for Firefox from the github page:
End of explanation
display = Display(visible=0, size=(1024, 768))
display.start()
Explanation: 1. Launching the webdriver
Since we are in a different environment and can't use our regular graphical desktop, we need to tell Python to start a virtual display, onto which Selenium can project the Firefox web browser (though we won't actually see it).
End of explanation
# setup driver
driver = webdriver.Firefox(firefox_binary='./firefox/firefox', executable_path="./geckodriver")
Explanation: Now we can initialize the Selenium web driver, giving it the path to the Firefox binary code and the driver:
End of explanation
driver.get("http://www.google.com")
print(driver.page_source)
Explanation: You can navigate Selenium to a URL by using the get method, exactly the same way we used the requests.get before:
End of explanation
# go results page
driver.get("http://wbsec.gov.in/(S(eoxjutirydhdvx550untivvu))/DetailedResult/Detailed_gp_2013.aspx")
Explanation: Cool, right? You can see Google in your browser now. Let's go look at some West Bengal State election results:
2. Navigating the browser
To follow along as Selenium navigates the website, try opening the <a href="http://wbsec.gov.in/(S(eoxjutirydhdvx550untivvu))/DetailedResult/Detailed_gp_2013.aspx">site</a> in another tab. You'll notice if you select options from the menu, it calls a script to generate a custom table. The URL doesn't change, and so we can't just call for the HTML of the page, it needs to be generated. That's where Selenium shines. It can choose these menu options and wait for the generated table before grabbing the new HTML for the data.
End of explanation
# find "district" drop down menu
district = driver.find_element_by_name("ddldistrict")
district
Explanation: Zilla Parishad
Similar to BeautifulSoup, Selenium has methods to find elements on a webpage. We can use the method find_element_by_name to find an element on the page by its name.
End of explanation
# find options in "district" drop down
district_options = district.find_elements_by_tag_name("option")
print(district_options[1].get_attribute("value"))
print(district_options[1].text)
Explanation: Now if we want to get the different options in this drop down, we can do the same. You'll notice that each name is associated with a unique value. Since we're getting multiple elements here, we'll use find_elements_by_tag_name
End of explanation
d_options = {option.text.strip(): option.get_attribute("value") for option in district_options if option.get_attribute("value").isdigit()}
print(d_options)
Explanation: Now we'll make a dictionary associating each name with its value.
End of explanation
district_select = Select(district)
district_select.select_by_value(d_options["Bankura"])
Explanation: We can then select a district by using its name and our dictionary. First we'll make our own function using Selenium's Select, and then we'll call it on "Bankura".
End of explanation
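A small reusable wrapper around Selenium's Select for this pattern (an illustration only; select_by_name is not a Selenium function, just a helper defined here):
def select_by_name(option_name, options_dict, dropdown_element):
    # pick a dropdown entry by its visible name using the name -> value dictionary
    Select(dropdown_element).select_by_value(options_dict[option_name])
select_by_name("Bankura", d_options, district)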
# find the "block" drop down
block = driver.find_element_by_name("ddlblock")
# get options
block_options = block.find_elements_by_tag_name("option")
print(block_options[1].get_attribute("value"))
print(block_options[1].text)
b_options = {option.text.strip(): option.get_attribute("value") for option in block_options if option.get_attribute("value").isdigit()}
print(b_options)
panchayat_select = Select(block)
panchayat_select.select_by_value(b_options["BANKURA-I"])
Explanation: You should have seen the dropdown menu select 'Bankura' by running the previous cell.
Panchayat Samity
We can do the same as we did above to find the different blocks.
End of explanation
# get options
gp = driver.find_element_by_name("ddlgp")
gp_options = gp.find_elements_by_tag_name("option")
print(gp_options[1].get_attribute("value"))
print(gp_options[1].text)
gp_options = {option.text.strip(): option.get_attribute("value") for option in gp_options if option.get_attribute("value").isdigit()}
print(gp_options)
gram_select = Select(gp)
gram_select.select_by_value(gp_options["ANCHURI"])
Explanation: Great! One dropdown menu to go.
Gram Panchayat
End of explanation
soup = BeautifulSoup(driver.page_source, 'html5lib')
# get the html for the table
table = soup.select('#DataGrid1')[0]
Explanation: Once we have selected the last dropdown menu parameter, the website automatically generates a table below. This table could not have been called up by a URL, as you can see that the URL in the browser did not change. This is why Selenium is so helpful.
3. Collecting generated data
Now that the table has been rendered, it exists as HTML in our page source. We can hand driver.page_source to BeautifulSoup and parse it there (Selenium also has its own element-finding methods we could use instead).
First we'll pick out the table by its CSS selector.
End of explanation
# get list of rows
rows = [row for row in table.select("tr")]
Explanation: First we'll get all the rows of the table using the tr selector.
End of explanation
rows = rows[1:]
Explanation: But the first row is the header so we don't want that.
End of explanation
rows[0].select('td')
Explanation: Each cell in the row corresponds to the data we want.
End of explanation
data = []
for row in rows:
d = {}
seat_names = row.select('td')[0].find_all("span")
d['seat'] = ' '.join([x.text for x in seat_names])
d['electors'] = row.select('td')[1].text.strip()
d['polled'] = row.select('td')[2].text.strip()
d['rejected'] = row.select('td')[3].text.strip()
d['osn'] = row.select('td')[4].text.strip()
d['candidate'] = row.select('td')[5].text.strip()
d['party'] = row.select('td')[6].text.strip()
d['secured'] = row.select('td')[7].text.strip()
data.append(d)
print(data[1])
Explanation: Now it's just a matter of looping through the rows and getting the information we want from each one.
End of explanation
i = 0
while i < len(data):
if data[i]['seat']:
seat = data[i]['seat']
electors = data[i]['electors']
polled = data[i]['polled']
rejected = data[i]['rejected']
i = i+1
else:
data[i]['seat'] = seat
data[i]['electors'] = electors
data[i]['polled'] = polled
data[i]['rejected'] = rejected
i = i+1
data
Explanation: You'll notice that some of the information, such as total electors, is not supplied for each candidate. This code will add that information for the candidates who don't have it.
End of explanation
header = data[0].keys()
with open('WBS-table.csv', 'w') as output_file:
dict_writer = csv.DictWriter(output_file, header)
dict_writer.writeheader()
dict_writer.writerows(data)
pandas.read_csv('WBS-table.csv')
Explanation: 4. Exporting data to CSV
We can then loop through all the combinations of the dropdown menu we want, collect the information from the generated table, and append it to the data list. Once we're done, we can write it to a CSV.
End of explanation |
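A hedged sketch of the full scrape loop described above, covering every district/block/gram panchayat combination; get_option_dict and scrape_table are hypothetical helpers that would wrap the dictionary-building and row-parsing code from the earlier cells:
# sketch only: get_option_dict() and scrape_table() are hypothetical helpers, not Selenium functions
all_rows = []
for district_value in get_option_dict("ddldistrict").values():
    Select(driver.find_element_by_name("ddldistrict")).select_by_value(district_value)
    for block_value in get_option_dict("ddlblock").values():
        Select(driver.find_element_by_name("ddlblock")).select_by_value(block_value)
        for gp_value in get_option_dict("ddlgp").values():
            Select(driver.find_element_by_name("ddlgp")).select_by_value(gp_value)
            all_rows.extend(scrape_table(driver.page_source))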