Unnamed: 0 | text_prompt | code_prompt |
---|---|---|
3,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Earth's Energy Budget </center>
<img src = 'https
Step1: Clear sky downward solar flux is considered to be equivalent to solar radiation after clouds
Step2: compare CIMIS (measured on earth) with Clear Sky (CFSR)
Step3: Clear sky overestimates the CIMIS data
Why???
* simulation
* locations are different. Distance between two points is 20 - 25 miles.
Step4: Maybe clear sky is after clouds and downward waves are before clouds? | Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
import pandas as pd
import matplotlib.pyplot as plt
from netCDF4 import Dataset
import netCDF4
plt.style.use('ggplot')
%matplotlib inline
plt.rcParams['figure.figsize'] = 16, 10
my_example_nc_file = 'pgbh01.gdas.20052010.nc'
fh = Dataset(my_example_nc_file, mode='r')
times = fh.variables['time']
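# convert the raw netCDF time values to datetimes and shift them back 8 hours (UTC to local time)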
time_np = netCDF4.num2date(times[:],times.units) - pd.offsets.Hour(8)
#print (fh.variables['ULWRF_L1_Avg_1'])
print (fh.variables['USWRF_L1_Avg_1'])
variables = {"SHTFL_L1_Avg_1" : "Sensible heat flux",
"DSWRF_L1_Avg_1" : "Downward shortwave radiation flux",
"CSDSF_L1_Avg_1" : "Clear sky downward solar flux",
"DSWRF_L1_Avg_1" : "Downward shortwave radiation flux",
"DLWRF_L1_Avg_1" : "Downward longwave radiation flux",
"CSULF_L1_Avg_1" : "Clear sky upward longwave flux",
"GFLUX_L1_Avg_1" : "Ground heat flux"}
Explanation: <center> Earth's Energy Budget </center>
<img src = 'https://science-edu.larc.nasa.gov/EDDOCS/images/Erb/components2.gif'>
End of explanation
downward_solar_flux_np = fh.variables["CSDSF_L1_Avg_1"][:, 0, 0]
cfsr = pd.DataFrame({'datetime': time_np, 'solar rad': downward_solar_flux_np})
cimis = pd.read_pickle('cimis_2005_2010.pkl')
def compare(title):
plt.plot(cfsr['datetime'][1:], cfsr['solar rad'][1:], label = "cfsr")
plt.plot(cimis['datetime'][4:][::6], cimis['solar rad'][4:][::6], label = "cimis")
plt.title(title)
plt.legend()
plt.rcParams['figure.figsize'] = 16, 10
Explanation: Clear sky downward solar flux is considered to be equivalent to solar radiation after clouds
End of explanation
compare('cfsr: downward longwave vs cimis: after clouds')
Explanation: compare CIMIS (measured on earth) with Clear Sky (CFSR)
End of explanation
cfsr['month'] = cfsr.datetime.dt.month
grouped = cfsr.groupby('month').mean()
grouped.reset_index(inplace=True)
cimis['month'] = cimis.datetime.dt.month
grouped2 = cimis.groupby('month').mean()
grouped2.reset_index(inplace=True)
x = grouped['month']
y = grouped['solar rad']
z = grouped2['solar rad']
ax = plt.subplot(111)
ax.bar(x+0.2, y,width=0.2,color='b',align='center')
ax.bar(x, z,width=0.2,color='g',align='center')
ax.legend(['cfsr','cimis'])
plt.title('average solar radiation across different months for cfsr and cimis')
downward_shortwave = fh.variables['DSWRF_L1_Avg_1'][:, 0, 0]
downward_longwave = fh.variables['DLWRF_L1_Avg_1'][:, 0, 0]
upward_longwave = fh.variables['ULWRF_L1_Avg_1'][:, 0, 0]
upward_shortwave = fh.variables['USWRF_L1_Avg_1'][:, 0, 0]
Explanation: Clear sky overestimates the CIMIS data
Why???
* simulation
* locations are different. Distance between two points is 20 - 25 miles.
End of explanation
plt.plot(cfsr['datetime'], fh.variables['CSDSF_L1_Avg_1'][:, 0, 0] + fh.variables['CSDLF_L1_Avg_1'][:, 0, 0] , label = "clear sky")
plt.plot(cfsr['datetime'], fh.variables['DSWRF_L1_Avg_1'][:, 0, 0] + fh.variables['DLWRF_L1_Avg_1'][:, 0, 0] , label = "down")
plt.title('clear sky and downward wave comparison')
plt.legend()
plt.rcParams['figure.figsize'] = 16, 10
Explanation: Maybe clear sky is after clouds and downward waves are before clouds?
End of explanation |
3,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Webscraping with Beautiful Soup
In this lesson we'll learn about various techniques to scrape data from websites. This lesson will include
Step1: 1. Using BeautifulSoup
1.1 Make a GET request and parse the HTML response
We use the requests library just as we did with APIs, but this time we won't get JSON or XML back, but we'll get an HTML response.
Step2: 1.2 soup it
Now we use the BeautifulSoup function to make an object of the response, which allows us to parse the HTML tree. This returns an object (called a soup object) with all of the HTML in the original document.
Step3: 1.3 Find Elements
BeautifulSoup has a number of functions to find things on a page. Like other scraping tools, BeautifulSoup lets you find elements by their
Step4: NB
Step5: That's a lot! Many elements on a page will have the same HTML tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
We can do this by adding an additional argument to find_all. In the example below, we are finding all the a tags and then filtering those with class = "sidemenu".
Step6: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector.
In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu.
Step7: Using CSS is one way to organize how we stylize a website. They allow us to categorize and label certain HTML elements, and use these categories and labels to apply specfic styling. CSS selectors are what we use to identify these elements, and then decide what style to apply. We won't have time today to go into detail about HTML and CSS, but it's worth talking about the three most important CSS selectors
Step8: 1.4 Get Attributes and Text of Elements
Once we identify elements, we want to access information in that element. Oftentimes this means two things
Step9: It's a tag! Which means it has a text member
Step10: You'll see there is some extra spacing here, we can use the strip method to remove that
Step11: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes.
You can access a tag’s attributes by treating the tag like a dictionary
Step12: Nice, but that doesn't look like a full URL! Don't worry, we'll get to this soon.
Challenge 3
Find all the href attributes (urls) from the mainmenu by writing a list comprehension and assigning it to rel_paths.
Step13: 2. Collecting information
Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape the 98th general assembly.
Our goal is to scrape information on each senator, including their
Step14: 2.2 Find the right elements and text
Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag.
Step15: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want.
Step16: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above.
Step17: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member.
Step18: Now we can combine the BeautifulSoup tools with our basic python skills to scrape an entire web page.
Step19: 2.3 Loop it all together
Challenge 4
Let's use a for loop to get 'em all! We'll start at the beginning with the request
Step20: Challenge 5
Step21: Cool! Now you can probably guess how to loop it all together by iterating through the links we just extracted.
3. Following links to scrape bills
3.1 Writing a scraper function
Now we want to scrape the webpages corresponding to bills sponsored by each senator.
Challenge 6
Write a function called get_bills(url) to parse a given bill's URL. This will involve
Step22: 3.2 Get all the bills
Finally, we create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) emanating from that district. You can do this by looping over all of the senate members in members_dict and calling get_bills() for each of their associated bill URLs.
NOTE
Step23: 4. Export to CSV
We can write this to a CSV too | Python Code:
import requests # to make GET request
from bs4 import BeautifulSoup # to parse the HTML response
import time # to pause between calls
import csv # to write data to csv
import pandas # to see CSV
Explanation: Webscraping with Beautiful Soup
In this lesson we'll learn about various techniques to scrape data from websites. This lesson will include:
Discussion of complying with Terms of Use
Using Python's BeautifulSoup library
Collecting data from one page
Following collected links
Exporting data to CSV
0. Terms of Use
We'll be scraping information on the state senators of Illinois, as well as the list of bills from the Illinois General Assembly. Your first step before scraping should always be to read the Terms of Use or Terms of Agreement for a website. Many websites will explicitly prohibit scraping in any form. Moreover, if you're affiliated with an institution, you may be breaching existing contracts by engaging in scraping. UC Berkeley's Library recommends following this workflow:
While our source's Terms of Use do not explicitly prohibit scraping (nor do their robots.txt), it is advisable to still contact the web administrator of the website. We will not be placing too much stress on their servers today, so please keep this in mind while following along and executing the code. You should always attempt to contact the web administrator of the site you plan to scrape. Oftentimes there is an easier way to get the data that you want.
Let's go ahead and import the Python libraries we'll need:
End of explanation
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp')
# read the content of the server’s response as a string
page_source = response.text
print(page_source[:1000])
Explanation: 1. Using BeautifulSoup
1.1 Make a GET request and parse the HTML response
We use the requests library just as we did with APIs, but this time we won't get JSON or XML back, but we'll get an HTML response.
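(A small aside, not part of the original lesson: it can help to confirm that the request succeeded before parsing the response.)
# a minimal sanity check: a status code of 200 means the GET request succeeded
print('GET returned status code', response.status_code)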
End of explanation
# parse the response into an HTML tree soup object
soup = BeautifulSoup(page_source, 'html5lib')
# take a look
print(soup.prettify()[:1000])
Explanation: 1.2 soup it
Now we use the BeautifulSoup function to make an object of the response, which allows us to parse the HTML tree. This returns an object (called a soup object) with all of the HTML in the original document.
End of explanation
soup.find_all("a")
Explanation: 1.3 Find Elements
BeautifulSoup has a number of functions to find things on a page. Like other scraping tools, BeautifulSoup lets you find elements by their:
HTML tags
HTML Attributes
CSS Selectors
Let's search first for HTML tags.
The function find_all searches the soup tree to find all the elements with a particular HTML tag, and returns all of those elements.
What does the example below do?
End of explanation
soup("a")
Explanation: NB: Because find_all() is the most popular method in the BeautifulSoup search library, you can use a shortcut for it. If you treat the BeautifulSoup object as though it were a function, then it’s the same as calling find_all() on that object.
End of explanation
# get only the 'a' tags in 'sidemenu' class
soup("a", class_="sidemenu")
Explanation: That's a lot! Many elements on a page will have the same HTML tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
We can do this by adding an additional argument to find_all. In the example below, we are finding all the a tags and then filtering those with class = "sidemenu".
End of explanation
# get elements with "a.sidemenu" CSS Selector.
soup.select("a.sidemenu")
Explanation: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector.
In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu.
End of explanation
# your code here
soup.select("a.mainmenu")
Explanation: CSS is one way to organize how we style a website. It allows us to categorize and label certain HTML elements, and to use these categories and labels to apply specific styling. CSS selectors are what we use to identify these elements, and then decide what style to apply. We won't have time today to go into detail about HTML and CSS, but it's worth talking about the three most important CSS selectors:
element selector: simply including the element type, such as a above, will select all elements on the page of that element type. Try using your development tools (Chrome, Firefox, or Safari) to change all elements of the type a to a background color of red.
a {
background-color: red
}
class selector: if you put a period (.) before the name of a class, all elements belonging to that class will be selected. Try using your development tools to change all elements of the class detail to a background color of red.
.detail {
background-color: red
}
ID selector: if you put a hashtag (#) before the name of an id, all elements with that id will be selected. Try using the development tools to change all elements with the id Senate to a background color of red.
```
#Senate {
background-color: red
}
```
The above three examples will take all elements with the given property, but oftentimes you only want certain elements within the hierarchy. We can do that by simply placing elements side-by-side separated by a space.
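In BeautifulSoup, these same three selector types can be passed straight to .select(). A small illustration (my addition) using the soup object created above:
soup.select("a") # element selector: every <a> tag
soup.select(".detail") # class selector: every element with class="detail"
soup.select("#Senate") # ID selector: the element with id="Senate", if present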
Challenge 1
Using your developer tools, change the background-color of all a elements in only the "Current Senate Members" table.
tr tr tr a {
background-color: red
}
Challenge 2
Find all the <a> elements in class mainmenu
End of explanation
# this is a list
soup.select("a.sidemenu")
# we first want to get an individual tag object
first_link = soup.select("a.sidemenu")[0]
# check out its class
print(type(first_link))
Explanation: 1.4 Get Attributes and Text of Elements
Once we identify elements, we want to access information in that element. Oftentimes this means two things:
Text
Attributes
Getting the text inside an element is easy. All we have to do is use the text member of a tag object:
End of explanation
print(first_link.text)
Explanation: It's a tag! Which means it has a text member:
End of explanation
print(first_link.text.strip())
Explanation: You'll see there is some extra spacing here, we can use the strip method to remove that:
End of explanation
print(first_link['href'])
Explanation: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes.
You can access a tag’s attributes by treating the tag like a dictionary:
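If an attribute might be missing, dictionary-style access raises a KeyError; BeautifulSoup tags also provide a .get() method that returns None (or a default) instead. A small sketch (my addition) using the first_link tag from above:
print(first_link.get('href')) # same value as first_link['href']
print(first_link.get('target', 'no target attribute')) # default value instead of a KeyError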
End of explanation
# your code here
rel_paths = [link['href'] for link in soup.select("a.mainmenu")]
print(rel_paths)
Explanation: Nice, but that doesn't look like a full URL! Don't worry, we'll get to this soon.
Challenge 3
Find all the href attributes (urls) from the mainmenu by writing a list comprehension and assigning it to rel_paths.
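One way (my addition; the notebook itself concatenates strings later on) to turn these relative paths into absolute URLs is urllib.parse.urljoin from the standard library:
from urllib.parse import urljoin
# resolve each relative path against the page we originally requested
full_urls = [urljoin('http://www.ilga.gov/senate/default.asp', path) for path in rel_paths]
print(full_urls[:3])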
End of explanation
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
Explanation: 2. Collecting information
Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape the 98th general assembly.
Our goal is to scrape information on each senator, including their:
* name
* district
* party
2.1 First, make the GET request and soup it
End of explanation
# get all tr elements
rows = soup.find_all("tr")
print(len(rows))
Explanation: 2.2 Find the right elements and text
Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag.
End of explanation
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
print(rows[2].prettify())
Explanation: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want.
End of explanation
# select only those 'td' tags with class 'detail'
row = rows[2]
detail_cells = row.select('td.detail')
detail_cells
Explanation: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above.
End of explanation
# Keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
print(row_data)
Explanation: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member.
End of explanation
# check it out
print(row_data[0]) # name
print(row_data[3]) # district
print(row_data[4]) # party
Explanation: Now we can combine the BeautifulSoup tools with our basic python skills to scrape an entire web page.
End of explanation
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
# create empty list to store our data
members = []
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
# select only those 'td' tags with class 'detail'
detail_cells = row.select('td.detail')
# get rid of junk rows
if len(detail_cells) != 5:
continue
# keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
# collect information
name = row_data[0]
district = int(row_data[3])
party = row_data[4]
# store in a tuple
tup = (name, district, party)
# append to list
members.append(tup)
print(len(members))
print()
print(members)
Explanation: 2.3 Loop it all together
Challenge 4
Let's use a for loop to get 'em all! We'll start at the beginning with the request:
End of explanation
# your code here
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
# Create empty list to store our data
members = []
# returns every ‘tr tr tr’ css selector in the page
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
# select only those 'td' tags with class 'detail'
detail_cells = row.select('td.detail')
# get rid of junk rows
if len(detail_cells) != 5:
continue
# keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
# collect information
name, district, party = row_data[0], int(row_data[3]), row_data[4]
# add href
href = row.select('a')[1]['href']
# add full path
full_path = "http://www.ilga.gov/senate/" + href + "&Primary=True"
# store in a tuple
tup = (name, district, party, full_path)
# append to list
members.append(tup)
members[:5]
Explanation: Challenge 5: Get HREF element pointing to members' bills
The code above retrieves information on:
the senator's name
their district number
and their party
We now want to retrieve the URL for each senator's list of bills. The format for the list of bills for a given senator is:
http://www.ilga.gov/senate/SenatorBills.asp + ? + GA=98 + &MemberID=memberID + &Primary=True
to get something like:
http://www.ilga.gov/senate/SenatorBills.asp?MemberID=1911&GA=98&Primary=True
You should be able to see that, unfortunately, memberID is not currently something pulled out in our scraping code.
Your initial task is to modify the code above so that we also retrieve the full URL which points to the corresponding page of primary-sponsored bills, for each member, and return it along with their name, district, and party.
Tips:
To do this, you will want to get the appropriate anchor element (<a>) in each legislator's row of the table. You can again use the .select() method on the row object in the loop to do this — similar to the command that finds all of the td.detail cells in the row. Remember that we only want the link to the legislator's bills, not the committees or the legislator's profile page.
The anchor elements' HTML will look like <a href="/senate/Senator.asp/...">Bills</a>. The string in the href attribute contains the relative link we are after. You can access an attribute of a BeautifulSoup Tag object the same way you access a Python dictionary: anchor['attributeName']. (See the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#tag">documentation</a> for more details). There are a lot of different ways to use BeautifulSoup to get things done; whatever you need to do to pull that href out is fine.
Since we will only get a relative link, you'll have to do some concatenating to get the full URLs.
Use the code you wrote in Challenge 4 and simply add the full path to the tuple
End of explanation
# your code here
def get_bills(url):
# make the GET request
response = requests.get(url)
page_source = response.text
soup = BeautifulSoup(page_source, "html5lib")
# get the table rows
rows = soup.select('tr tr tr')
# make empty list to collect the info
bills = []
for row in rows:
# get columns
detail_cells = row.select('td.billlist')
if len(detail_cells) != 5:
continue
# get text in each column
row_data = [cell.text for cell in row]
# append data in columns 2-5
bills.append(tuple(row_data[2:6]))
return(bills)
# uncomment to test your code:
test_url = members[0][3]
print(test_url)
get_bills(test_url)[0:5]
Explanation: Cool! Now you can probably guess how to loop it all together by iterating through the links we just extracted.
3. Following links to scrape bills
3.1 Writing a scraper function
Now we want to scrape the webpages corresponding to bills sponsored by each senator.
Challenge 6
Write a function called get_bills(url) to parse a given bill's URL. This will involve:
requesting the URL using the <a href="http://docs.python-requests.org/en/latest/">requests</a> library
using the features of the BeautifulSoup library to find all of the <td> elements with the class billlist
return a list of tuples, each with:
description (2nd column)
chamber (S or H) (3rd column)
the last action (4th column)
the last action date (5th column)
I've started the function for you. Fill in the rest.
End of explanation
bills_info = []
for member in members[:3]: # only go through 3 members
print(member[0])
member_bills = get_bills(member[3])
for b in member_bills:
bill = list(member) + list(b)
bills_info.append(bill)
time.sleep(5)
bills_info
Explanation: 3.2 Get all the bills
Finally, we create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) emanating from that district. You can do this by looping over all of the senate members in members_dict and calling get_bills() for each of their associated bill URLs.
NOTE: Please call the function time.sleep(5) for each iteration of the loop, so that we don't destroy the state's web site.
End of explanation
# manually decide on header names
header = ['Senator', 'District', 'Party', 'Bills Link', 'Description', 'Chamber', 'Last Action', 'Last Action Date']
with open('all-bills.csv', 'w') as output_file:
csv_writer = csv.writer(output_file)
csv_writer.writerow(header)
csv_writer.writerows(bills_info)
pandas.read_csv('all-bills.csv')
Explanation: 4. Export to CSV
We can write this to a CSV too:
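An equivalent alternative (my addition), since pandas is already imported at the top of this notebook, is to let a DataFrame do the writing:
# one-line equivalent of the csv.writer block above
pandas.DataFrame(bills_info, columns=header).to_csv('all-bills.csv', index=False)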
End of explanation |
3,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Importing and Exporting Data
Data can be imported into Google BigQuery from a CSV file stored within Google Cloud Storage, or it can be streamed directly into BigQuery from Python code.
Similarly, the results of a query can be exported to Google Cloud Storage as a set of shards, or they can be streamed directly into a file within Datalab. Note that for larger data sizes, it is recommended to choose the sharded method.
Step1: Importing Data
The first step to analyzing and querying your data is importing it. For this demo, we'll create a temporary table in a temporary dataset within BigQuery, using a small data file within Cloud Storage.
Importing Data from Cloud Storage
To interact with Google Cloud Storage, Datalab includes the %%gcs command. First, see the available options on %%gcs
Step2: Let's use the read option to read a storage object into a local Python variable
Step3: Importing Data from a DataFrame
Step4: Exporting Data
Exporting Data to Cloud Storage
Step5: Exporting Data to a Local File
Step6: Cleanup | Python Code:
from google.datalab import Context
import google.datalab.bigquery as bq
import google.datalab.storage as storage
import pandas as pd
try:
from StringIO import StringIO
except ImportError:
from io import BytesIO as StringIO
Explanation: Importing and Exporting Data
Data can be imported into Google BigQuery from a CSV file stored within Google Cloud Storage, or it can be streamed directly into BigQuery from Python code.
Similarly, the results of a query can be exported to Google Cloud Storage as a set of shards, or they can be streamed directly into a file within Datalab. Note that for larger data sizes, it is recommended to choose the sharded method.
End of explanation
%gcs -h
Explanation: Importing Data
The first step to analyzing and querying your data is importing it. For this demo, we'll create a temporary table in a temporary dataset within BigQuery, using a small data file within Cloud Storage.
Importing Data from Cloud Storage
To interact with Google Cloud Storage, Datalab includes the %%gcs command. First, see the available options on %%gcs:
End of explanation
%%gcs read --object gs://cloud-datalab-samples/cars.csv --variable cars
print(cars)
# Create the schema, conveniently using a DataFrame example.
df = pd.read_csv(StringIO(cars))
schema = bq.Schema.from_data(df)
# Create the dataset
bq.Dataset('importingsample').create()
# Create the table
sample_table = bq.Table('importingsample.cars').create(schema = schema, overwrite = True)
sample_table.load('gs://cloud-datalab-samples/cars.csv', mode='append',
source_format = 'csv', csv_options=bq.CSVOptions(skip_leading_rows = 1))
%%bq query -n importingSample
SELECT * FROM importingsample.cars
%bq execute -q importingSample
Explanation: Let's use the read option to read a storage object into a local Python variable:
End of explanation
cars2 = storage.Object('cloud-datalab-samples', 'cars2.csv').read_stream()
df2 = pd.read_csv(StringIO(cars2))
df2
df2.fillna(value='', inplace=True)
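# replace the NaN values with empty strings before streaming the rows into the BigQuery table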
df2
sample_table.insert(df2)
sample_table.to_dataframe()
Explanation: Importing Data from a DataFrame
End of explanation
project = Context.default().project_id
sample_bucket_name = project + '-datalab-samples'
sample_bucket_path = 'gs://' + sample_bucket_name
sample_bucket_object = sample_bucket_path + '/tmp/cars.csv'
print('Bucket: ' + sample_bucket_name)
print('Object: ' + sample_bucket_object)
sample_bucket = storage.Bucket(sample_bucket_name)
sample_bucket.create()
sample_bucket.exists()
table = bq.Table('importingsample.cars')
table.extract(destination = sample_bucket_object)
%gcs list --objects gs://$sample_bucket_name/*
bucket = storage.Bucket(sample_bucket_name)
obj = list(bucket.objects())[0]
data = obj.read_stream()
print(data)
Explanation: Exporting Data
Exporting Data to Cloud Storage
End of explanation
table.to_file('/tmp/cars.csv')
%%bash
ls -l /tmp/cars.csv
lines = None
with open('/tmp/cars.csv') as datafile:
lines = datafile.readlines()
print(''.join(lines))
Explanation: Exporting Data to a Local File
End of explanation
sample_bucket.object('tmp/cars.csv').delete()
sample_bucket.delete()
bq.Dataset('importingsample').delete(delete_contents = True)
Explanation: Cleanup
End of explanation |
3,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step6: Processing Multisentence Documents
Step8: Define markup_sentence
We are putting the functionality we went through in the previous two notebooks (BasicSentenceMarkup and BasicSentenceMarkupPart2) into a function markup_sentence. We add one step to the function
Step9: Create a ConTextDocument
ConTextDocument is a class for organizing the markup of multiple sentences. It has a private attribute that is NetworkX DiGraph that represents the document structure. In this exmaple we only use the ConTextDocument class to collect multiple sentence markups.
Step10: Split the document into sentences and process each sentence
pyConTextNLP comes with a simple sentence splitter in helper.py. I have not been maintaining this and have recently been using TextBlob to split sentences. A known problem with either sentence splitting solution is enumerated lists that don't use periods.
Step11: Displaying pyConTextNLP Markups
The display subpackage contains some functionality for visualizing the markups. Here I use HTML to color-code identified concepts.
Step12: There is also a rich XML description of the ConTextDocument | Python Code:
import pyConTextNLP.pyConTextGraph as pyConText
import pyConTextNLP.itemData as itemData
from textblob import TextBlob
import networkx as nx
import pyConTextNLP.display.html as html
from IPython.display import display, HTML
reports = [
IMPRESSION: Evaluation limited by lack of IV contrast; however, no evidence of
bowel obstruction or mass identified within the abdomen or pelvis. Non-specific interstitial opacities and bronchiectasis seen at the right
base, suggestive of post-inflammatory changes.,
IMPRESSION: Evidence of early pulmonary vascular congestion and interstitial edema. Probable scarring at the medial aspect of the right lung base, with no
definite consolidation.
,
IMPRESSION:
1. 2.0 cm cyst of the right renal lower pole. Otherwise, normal appearance
of the right kidney with patent vasculature and no sonographic evidence of
renal artery stenosis.
2. Surgically absent left kidney.,
IMPRESSION: No pneumothorax.,
IMPRESSION: No definite pneumothorax
IMPRESSION: New opacity at the left lower lobe consistent with pneumonia.
]
modifiers = itemData.instantiateFromCSVtoitemData(
"https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/lexical_kb_05042016.tsv")
targets = itemData.instantiateFromCSVtoitemData(
"https://raw.githubusercontent.com/chapmanbe/pyConTextNLP/master/KB/utah_crit.tsv")
Explanation: Processing Multisentence Documents
End of explanation
def markup_sentence(s, modifiers, targets, prune_inactive=True):
markup = pyConText.ConTextMarkup()
markup.setRawText(s)
markup.cleanText()
markup.markItems(modifiers, mode="modifier")
markup.markItems(targets, mode="target")
markup.pruneMarks()
markup.dropMarks('Exclusion')
# apply modifiers to any targets within the modifiers scope
markup.applyModifiers()
markup.pruneSelfModifyingRelationships()
if prune_inactive:
markup.dropInactiveModifiers()
return markup
report = reports[0]
print(report)
Explanation: Define markup_sentence
We are putting the functionality we went through in the previous two notebooks (BasicSentenceMarkup and BasicSentenceMarkupPart2) into a function markup_sentence. We add one step to the function: dropInactiveModifiers will delete any modifier node that does not get attached to a target node.
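As a quick sanity check (my addition), the function can be tried on a single sentence, reusing the modifiers and targets loaded earlier:
# mark up one sentence and print the resulting ConTextMarkup object
example_markup = markup_sentence("no evidence of bowel obstruction", modifiers=modifiers, targets=targets)
print(example_markup)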
End of explanation
context = pyConText.ConTextDocument()
Explanation: Create a ConTextDocument
ConTextDocument is a class for organizing the markup of multiple sentences. It has a private attribute that is a NetworkX DiGraph representing the document structure. In this example we only use the ConTextDocument class to collect multiple sentence markups.
End of explanation
blob = TextBlob(report.lower())
count = 0
rslts = []
for s in blob.sentences:
m = markup_sentence(s.raw, modifiers=modifiers, targets=targets)
rslts.append(m)
for r in rslts:
context.addMarkup(r)
Explanation: Split the document into sentences and process each sentence
pyConTextNLP comes with a simple sentence splitter in helper.py. I have not been maintaining this and have recently been using TextBlob to split sentences. A known problem with either sentence splitting solution is enumerated lists that don't use periods.
End of explanation
clrs = {\
"bowel_obstruction": "blue",
"inflammation": "blue",
"definite_negated_existence": "red",
"probable_negated_existence": "indianred",
"ambivalent_existence": "orange",
"probable_existence": "forestgreen",
"definite_existence": "green",
"historical": "goldenrod",
"indication": "pink",
"acute": "golden"
}
display(HTML(html.mark_document_with_html(context,colors = clrs, default_color="black")))
Explanation: Displaying pyConTextNLP Markups
The display subpackage contains some functionality for visualizing the markups. Here I use HTML to color-code identified concepts.
End of explanation
print(context.getXML())
Explanation: There is also a rich XML description of the ConTextDocument
End of explanation |
3,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examining the 100-year storm
From past analysis, we've seen that there have been 3 100-year storms in the last 46 years. This notebook takes a look at these 3 storms.
Step1: Storm 1 -> August, 1987
Step2: Looking at Storm 1 we see that within the 100-year storm, the real bulk of the rainfall falls overnight Aug 13-14 from 10PM until 10AM. This in itself was a 100-year storm. A few days later, we had an additional equivalent of a 10-year storm -- all this within the same 10 day period
Storm 2 -> September 2008
Step3: This event has 3 big downpours. Let's split this up
Step4: Storm 3 - July 2011
Step5: Interestingly, all of the 100-year storms are marked with a drastic period of a few hours which really makes it the big one.
Let's examine the 50-year storms to look for the same trend
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
from datetime import datetime, timedelta
import pandas as pd
import matplotlib.pyplot as plt
import operator
import seaborn as sns
%matplotlib inline
n_year_storms = pd.read_csv('data/n_year_storms_ohare_noaa.csv')
n_year_storms['start_time'] = pd.to_datetime(n_year_storms['start_time'])
n_year_storms['end_time'] = pd.to_datetime(n_year_storms['end_time'])
n_year_storms.head()
year_event_100 = n_year_storms[n_year_storms['n'] == 100]
year_event_100
rain_df = pd.read_csv('data/ohare_hourly_20160929.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
rain_df = rain_df['19700101':]
chi_rain_series = rain_df['HOURLYPrecip'].resample('1H', label='right').max().fillna(0)
chi_rain_series.head()
# N-Year Storm variables
# These define the thresholds laid out by bulletin 70, and transfer mins and days to hours
n_year_threshes = pd.read_csv('../../n-year/notebooks/data/n_year_definitions.csv')
n_year_threshes = n_year_threshes.set_index('Duration')
dur_str_to_hours = {
'5-min':5/60.0,
'10-min':10/60.0,
'15-min':15/60.0,
'30-min':0.5,
'1-hr':1.0,
'2-hr':2.0,
'3-hr':3.0,
'6-hr':6.0,
'12-hr':12.0,
'18-hr':18.0,
'24-hr':24.0,
'48-hr':48.0,
'72-hr':72.0,
'5-day':5*24.0,
'10-day':10*24.0
}
n_s = [int(x.replace('-year','')) for x in reversed(list(n_year_threshes.columns.values))]
duration_strs = sorted(dur_str_to_hours.items(), key=operator.itemgetter(1), reverse=False)
n_year_threshes
# Find n-year storms and store them in a data frame.
def find_n_year_storms(start_time_str, end_time_str, n):
start_time = pd.to_datetime(start_time_str)
end_time = pd.to_datetime(end_time_str)
n_index = n_s.index(n)
next_n = n_s[n_index-1] if n_index != 0 else None
storms = []
for duration_tuple in duration_strs:
duration_str = duration_tuple[0]
low_thresh = n_year_threshes.loc[duration_str, str(n) + '-year']
high_thresh = n_year_threshes.loc[duration_str, str(next_n) + '-year'] if next_n is not None else None
duration = int(dur_str_to_hours[duration_str])
sub_series = chi_rain_series[start_time: end_time]
rolling = sub_series.rolling(window=int(duration), min_periods=0).sum()
if high_thresh is not None:
event_endtimes = rolling[(rolling >= low_thresh) & (rolling < high_thresh)].sort_values(ascending=False)
else:
event_endtimes = rolling[(rolling >= low_thresh)].sort_values(ascending=False)
for index, event_endtime in event_endtimes.iteritems():
this_start_time = index - timedelta(hours=duration)
if this_start_time < start_time:
continue
storms.append({'n': n, 'end_time': index, 'inches': event_endtime, 'duration_hrs': duration,
'start_time': this_start_time})
return pd.DataFrame(storms)
Explanation: Examining the 100-year storm
From past analysis, we've seen that there have been 3 100-year storms in the last 46 years. This notebook takes a look at these 3 storms.
End of explanation
storm1 = chi_rain_series['1987-08-11 23:00:00':'1987-08-21 23:00:00']
storm1.cumsum().plot(title="Cumulative rainfall over 1987 100-year storm")
# The rainfall starts at...
storm1[storm1 > 0].index[0]
storm1 = storm1['1987-08-13 22:00:00':]
storm1.head()
# There are two periods of drastic rise in rain. Print out the percent of the storm that has fallen hourly to see that the
# first burst ends at 8/14 10AM
storm1.cumsum()/storm1.sum()
# Looking for an n-year storm in the small period of drastic increase #1
find_n_year_storms('1987-08-13 22:00:00', '1987-08-14 10:00:00', 100)
# Let's look for the second jump in precip
storm1['1987-08-16 12:00:00':].cumsum()/storm1.sum()
# Looking for an n-year storm in the small period of drastic increase #2
find_n_year_storms('1987-08-16 20:00:00', '1987-08-17 00:00:00', 10)
Explanation: Storm 1 -> August, 1987
End of explanation
storm2 = chi_rain_series['2008-09-04 13:00:00':'2008-09-14 13:00:00']
storm2.cumsum().plot(title="Cumulative rainfall over 2008 100-year storm")
Explanation: Looking at Storm 1 we see that within the 100-year storm, the real bulk of the rainfall falls overnight Aug 13-14 from 10PM until 10AM. This in itself was a 100-year storm. A few days later, we had an additional equivalent of a 10-year storm -- all this within the same 10 day period
Storm 2 -> September 2008
End of explanation
total_rainfall = storm2.sum()
total_rainfall
storm2.cumsum()/total_rainfall
# First downpour is a 1-year storm
find_n_year_storms('2008-09-04 13:00:00', '2008-09-04 21:00:00', 1)
storm2['2008-09-08 00:00:00':'2008-09-09 00:00:00'].cumsum()/total_rainfall
find_n_year_storms('2008-09-08 10:00:00', '2008-09-08 20:00:00', 1)
chi_rain_series['2008-09-08 10:00:00':'2008-09-08 20:00:00'].sum()
# No n-year events for second downpour
# Downpour 3
storm2['2008-09-12 12:00:00':'2008-09-13 15:00:00'].cumsum()/total_rainfall
find_n_year_storms('2008-09-12 12:00:00','2008-09-13 15:00:00',50)
Explanation: This event has 3 big downpours. Let's split this up
End of explanation
storm3 = chi_rain_series['2011-07-22 08:00:00':'2011-07-23 08:00:00']
storm3.cumsum().plot(title="Cumulative rainfall over 2011 100-year storm")
storm3['2011-07-22 22:00:00':'2011-07-23 05:00:00'].cumsum()/storm3.sum()
find_n_year_storms('2011-07-22 22:00:00', '2011-07-23 05:00:00', 100)
chi_rain_series['2011-07-22 08:00:00':'2011-07-23 08:00:00'].cumsum().plot(title="Cumulative rainfall over 2011 100-year storm")
Explanation: Storm 3 - July 2011
End of explanation
chi_rain_series['2010-07-23 16:00:00':'2010-07-24 16:00:00'].cumsum().plot(title="Cumulative rainfall over 2010 50-year storm")
# The following code is copied verbatim from @pjsier Rolling Rain N-Year Threshold.pynb
# Loading in hourly rain data from CSV, parsing the timestamp, and adding it as an index so it's more useful
rain_df = pd.read_csv('data/ohare_hourly_observations.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
print(rain_df.dtypes)
rain_df.head()
chi_rain_series = rain_df['hourly_precip'].resample('1H').max()
# This is where I break with @pjsier
# I am assuming here that a single hour cannot be part of more than one storm in the event_endtimes list.
# Therefore, I am looping through the list and throwing out any storms that include hours from heavier storms in the
# same block of time.=
def get_storms_without_overlap(event_endtimes, hours):
times_taken = []
ret_val = []
for i in range(len(event_endtimes)):
timestamp = event_endtimes.iloc[i].name
times_here = []
for h in range(hours):
times_here.append(timestamp - pd.DateOffset(hours=h))
if not bool(set(times_here) & set(times_taken)):
times_taken.extend(times_here)
ret_val.append({'start': timestamp - pd.DateOffset(hours=hours), 'end': timestamp, 'inches': event_endtimes.iloc[i]['hourly_precip']})
return ret_val
# Find the 100 year event. First, define the storm as based in Illinois Bulletin 70 as the number of inches
# of precipition that falls over a given span of straight hours.
_100_year_storm_milestones = [{'hours': 240, 'inches': 11.14}, {'hours':120, 'inches': 9.96},
{'hours': 72, 'inches': 8.78}, {'hours': 48, 'inches': 8.16}, {'hours': 24, 'inches': 7.58},
{'hours': 18, 'inches': 6.97}, {'hours': 12, 'inches': 6.59}, {'hours': 6, 'inches': 5.68},
{'hours': 3, 'inches': 4.9}, {'hours': 2, 'inches': 4.47}, {'hours': 1, 'inches': 3.51}]
all_storms = []
print("\tSTART\t\t\tEND\t\t\tINCHES")
for storm_hours in _100_year_storm_milestones:
rolling = pd.DataFrame(chi_rain_series.rolling(window=storm_hours['hours']).sum())
event_endtimes = rolling[(rolling['hourly_precip'] >= storm_hours['inches'])]
event_endtimes = event_endtimes.sort_values(by='hourly_precip', ascending=False)
storms = get_storms_without_overlap(event_endtimes, storm_hours['hours'])
if len(storms) > 0:
print("Across %s hours" % storm_hours['hours'])
for storm in storms:
print('\t%s\t%s\t%s inches' % (storm['start'], storm['end'], storm['inches']))
all_storms.extend(storms)
# Analysis Questions
# 1/25/2015 - 2/4/2015 - Worst storm by far in quantity, but Jan-Feb -- is it snow?
# 9/4/2008 - 9/14/2008 - This only appeared on the 10-day event, so it must've been well distributed across the days?
# 7/21/2011 - 7/23/2011 - Very heavy summer storm!
# Examining the storm from 7/21-2011 - 7/23/2011
import datetime
july_2011_storm = chi_rain_series.loc[(chi_rain_series.index >= datetime.datetime(2011,7,20)) & (chi_rain_series.index <= datetime.datetime(2011,7,24))]
july_2011_storm.head()
july_2011_storm.plot()
# Let's take a look at the cumulative buildup of the storm over time
cumulative_rainj11 = pd.DataFrame(july_2011_storm).hourly_precip.cumsum()
cumulative_rainj11.head()
cumulative_rainj11.plot()
cumulative_rainj11.loc[(cumulative_rainj11.index >= datetime.datetime(2011,7,22,21,0,0)) & (cumulative_rainj11.index <= datetime.datetime(2011,7,23,5,0,0))]
# We got a crazy, crazy downpour from about 11:00PM until 2:00AM. That alone was a 100-year storm, where we got 6.79 inches
# in 3 hours. That would've been a 100-year storm if we'd have gotten that in 12 hours!
Explanation: Interestingly, all of the 100-year storms are marked with a drastic period of a few hours which really makes it the big one.
Let's examine the 50-years to look for the same trend
End of explanation |
3,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cirq to Tensor Networks
Here we demonstrate turning circuits into tensor network representations of the circuit's unitary, final state vector, final density matrix, and final noisy density matrix.
Imports
Step1: Create a random circuit
Step2: Circuit to Tensors
The circuit defines a tensor network representation. By default, the initial state is the |0...0> state (represented by the "zero qubit" operations labeled "Q0" in the legend). "Q1" are single qubit operations and "Q2" are two qubit operations. The open legs are the indices into the state vector and are of the form "i{m}_q{n}" where m is the time index (given by the returned qubit_frontier dictionary) and "n" is the qubit string.
Note
Step3: To dense
Step4: Circuit Unitary
We can also leave the input legs open which gives a tensor network representation of the unitary
Step5: To dense
Step6: Density Matrix
We can also turn a circuit into its density matrix. The density matrix resulting from the evolution of the |0><0| initial state can be thought of as two copies of the circuit
Step7: Noise
Noise operations entangle the forwards and backwards evolutions. The new tensors labeled "kQ1" are 1-qubit Kraus operators.
Step8: For 6 or fewer qubits, we specify the contraction ordering.
For low-qubit-number circuits, a reasonable contraction ordering is to go in moment order (as a normal simulator would do). Otherwise, quimb will try to find an optimal ordering which was observed to take longer than it takes to do the contraction itself. We show how to tell quimb to contract in order by using the moment tags.
Step9: The result of a partial contraction
Step10: To Dense
Step11: Profile
For low-qubit-number, deep, noisy circuits, the quimb contraction is faster. | Python Code:
import cirq
import numpy as np
import pandas as pd
from cirq.contrib.svg import SVGCircuit
import cirq.contrib.quimb as ccq
import quimb
import quimb.tensor as qtn
Explanation: Cirq to Tensor Networks
Here we demonstrate turning circuits into tensor network representations of the circuit's unitary, final state vector, final density matrix, and final noisy density matrix.
Imports
End of explanation
qubits = cirq.LineQubit.range(3)
circuit = cirq.testing.random_circuit(qubits, n_moments=10, op_density=0.8, random_state=52)
circuit = cirq.drop_empty_moments(circuit)
SVGCircuit(circuit)
Explanation: Create a random circuit
End of explanation
tensors, qubit_frontier, fix = ccq.circuit_to_tensors(circuit, qubits)
tn = qtn.TensorNetwork(tensors)
print(qubit_frontier)
from matplotlib import pyplot as plt
tn.graph(fix=fix, color=['Q0', 'Q1', 'Q2'], figsize=(8,8))
Explanation: Circuit to Tensors
The circuit defines a tensor network representation. By default, the initial state is the |0...0> state (represented by the "zero qubit" operations labeled "Q0" in the legend). "Q1" are single qubit operations and "Q2" are two qubit operations. The open legs are the indices into the state vector and are of the form "i{m}_q{n}" where m is the time index (given by the returned qubit_frontier dictionary) and "n" is the qubit string.
Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre.
End of explanation
psi_tn = ccq.tensor_state_vector(circuit, qubits)
psi_cirq = cirq.final_state_vector(circuit, qubit_order=qubits)
np.testing.assert_allclose(psi_cirq, psi_tn, atol=1e-7)
Explanation: To dense
End of explanation
tensors, qubit_frontier, fix = ccq.circuit_to_tensors(circuit, qubits, initial_state=None)
tn = qtn.TensorNetwork(tensors)
print(qubit_frontier)
tn.graph(fix=fix, color=['Q0', 'Q1', 'Q2'], figsize=(8, 8))
Explanation: Circuit Unitary
We can also leave the input legs open which gives a tensor network representation of the unitary
End of explanation
u_tn = ccq.tensor_unitary(circuit, qubits)
u_cirq = circuit.unitary(qubit_order=qubits)
np.testing.assert_allclose(u_cirq, u_tn, atol=1e-7)
Explanation: To dense
End of explanation
tensors, qubit_frontier, fix = ccq.circuit_to_density_matrix_tensors(circuit=circuit, qubits=qubits)
tn = qtn.TensorNetwork(tensors)
tn.graph(fix=fix, color=['Q0', 'Q1', 'Q2'])
Explanation: Density Matrix
We can also turn a circuit into its density matrix. The density matrix resulting from the evolution of the |0><0| initial state can be thought of as two copies of the circuit: one going "forwards" and one going "backwards" (i.e. use the complex conjugate of each operation). Kraus operator noise operations "link" the forwards and backwards circuits. As such, the density matrix for pure states is simple.
Note: for density matrices, we return a fix variable for a circuit-like layout of the tensors when calling tn.graph.
End of explanation
noise_model = cirq.ConstantQubitNoiseModel(cirq.DepolarizingChannel(p=1e-3))
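# the noise model appends a depolarizing channel on every qubit after each moment of the circuit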
circuit = cirq.Circuit(noise_model.noisy_moments(circuit.moments, qubits))
SVGCircuit(circuit)
tensors, qubit_frontier, fix = ccq.circuit_to_density_matrix_tensors(circuit=circuit, qubits=qubits)
tn = qtn.TensorNetwork(tensors)
tn.graph(fix=fix, color=['Q0', 'Q1', 'Q2', 'kQ1'], figsize=(8,8))
Explanation: Noise
Noise operations entangle the forwards and backwards evolutions. The new tensors labeled "kQ1" are 1-qubit Kraus operators.
End of explanation
partial = 12
tags_seq = [(f'i{i}b', f'i{i}f') for i in range(partial)]
tn.graph(fix=fix, color = [x for x, _ in tags_seq] + [y for _, y in tags_seq], figsize=(8, 8))
Explanation: For 6 or fewer qubits, we specify the contraction ordering.
For low-qubit-number circuits, a reasonable contraction ordering is to go in moment order (as a normal simulator would do). Otherwise, quimb will try to find an optimal ordering which was observed to take longer than it takes to do the contraction itself. We show how to tell quimb to contract in order by using the moment tags.
End of explanation
tn2 = tn.contract_cumulative(tags_seq, inplace=False)
tn2.graph(fix=fix, color=['Q0', 'Q1', 'Q2', 'kQ1'], figsize=(8, 8))
Explanation: The result of a partial contraction
End of explanation
rho_tn = ccq.tensor_density_matrix(circuit, qubits)
rho_cirq = cirq.final_density_matrix(circuit, qubit_order=qubits)
np.testing.assert_allclose(rho_cirq, rho_tn, atol=1e-5)
Explanation: To Dense
End of explanation
import timeit
def profile(n_qubits: int, n_moments: int):
qubits = cirq.LineQubit.range(n_qubits)
circuit = cirq.testing.random_circuit(qubits, n_moments=n_moments, op_density=0.8)
noise_model = cirq.ConstantQubitNoiseModel(cirq.DepolarizingChannel(p=1e-3))
circuit = cirq.Circuit(noise_model.noisy_moments(circuit.moments, qubits))
circuit = cirq.drop_empty_moments(circuit)
n_moments = len(circuit)
variables = {'circuit': circuit, 'qubits': qubits}
setup1 = [
'import cirq',
'import numpy as np',
]
n_call_cs, duration_cs = timeit.Timer(
stmt='cirq.final_density_matrix(circuit)',
setup='; '.join(setup1),
globals=variables).autorange()
setup2 = [
'from cirq.contrib.quimb import tensor_density_matrix',
'import numpy as np',
]
n_call_t, duration_t = timeit.Timer(
stmt='tensor_density_matrix(circuit, qubits)',
setup='; '.join(setup2),
globals=variables).autorange()
return {
'n_qubits': n_qubits,
'n_moments': n_moments,
'duration_cirq': duration_cs,
'duration_quimb': duration_t,
'n_call_cirq': n_call_cs,
'n_call_quimb': n_call_t,
}
records = []
max_qubits = 6
max_moments = 500
for n_qubits in [3, max_qubits]:
for n_moments in range(1, max_moments, 50):
record = profile(n_qubits=n_qubits, n_moments=n_moments)
records.append(record)
print('.', end='', flush=True)
df = pd.DataFrame(records)
df.head()
def select(df, k, v):
return df[df[k] == v].drop(k, axis=1)
pd.DataFrame.select = select
def plot1(df, labelfmt):
for k in ['duration_cirq', 'duration_quimb']:
plt.plot(df['n_moments'], df[k], '.-', label=labelfmt.format(k))
plt.legend(loc='best')
def plot(df):
df['duration_cirq'] /= df['n_call_cirq']
df['duration_quimb'] /= df['n_call_quimb']
plot1(df.select('n_qubits', 3), 'n = 3, {}')
plot1(df.select('n_qubits', 6), 'n = 6, {}')
plt.xlabel('N Moments')
plt.ylabel('Time / s')
plot(df)
plt.tight_layout()
Explanation: Profile
For low-qubit-number, deep, noisy circuits, the quimb contraction is faster.
End of explanation |
3,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stacks
Stacks are one of the basic, linear data structures that have the characteristic 2 end points (here
Step1: Note that the addition and removal of a single item is a O(1) algorithm
Step2: Example 1
Step3: The code above is relatively simple, yet effective. We initialize a new Stack and set the matches variables to true -- we use the latter to keep track whether the paranthesis are matched or not. Next, we use a for loop to iterate through the string. If we encounter an opening paranthesis, we add it from the stack. If we encounter a closing parenthesis, we try to remove the last opening bracket from the stack. If we encounter a closing bracket but the stack is empty, we already know that the parentheses are not matching, and we can break out the for loop early.
After we have finished iterating through the string, we check whether our stack is empty. If it is not, the parentheses are not matched.
Here, two more examples
Step4: Example 2
Step5: Our conversion function is simple, we iteratively divide the integer number by 2 until we arrive at 0, and for each division, we add the remainder (a 0 or 1) to the stack. Finally, we remove the items one by one from the stack to build a string notation of the binary number. | Python Code:
class Stack(object):
def __init__(self):
self.stack = []
def add(self, item):
self.stack.append(item)
def pop(self):
return self.stack.pop()
def peek(self):
return self.stack[-1]
def size(self):
return len(self.stack)
Explanation: Stacks
Stacks are one of the basic, linear data structures that have the characteristic 2 end points (here: a top and a base).
Stacks are also often referred to as LIFO data structures, which stands for "last in, first out," and we can picture them as "unwashed dishes" in our sink: the first plate to remove is the last one we put there and vice versa.
The idea behind stacks sounds trivial, yet, it is a data structure that is immensely useful in a variety of applications. One example would be the "back" button of our web browser, which goes one step back in our search history upon "clicking" -- back to the last item we added to the stack. Before we look at another common example, parenthesis matching, let's implement a basic Stack class using Python lists as an illustration.
End of explanation
st = Stack()
st.add('a')
st.add(1)
print('size:', st.size())
print('top element', st.peek())
st.pop()
print('size:', st.size())
print('top element', st.peek())
st.pop()
print('size:', st.size())
Explanation: Note that the addition and removal of a single item is an O(1) algorithm: it takes constant time to remove or add an item at the top of the stack. In the simple implementation above, we added 2 more convenience methods: a peek method, which lets us look at the top of the stack (i.e., the end of the Python list self.stack), and a size method, which returns the number of elements that are currently in the stack.
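As an aside (not part of the original lesson), the standard library's collections.deque offers the same constant-time push and pop from one end, so it is often used as an off-the-shelf stack:
from collections import deque
dq = deque()
dq.append('a') # push onto the top
print(dq[-1]) # peek at the top element
dq.pop() # pop from the top in O(1)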
End of explanation
def check_parens(eq, pair=['(', ')']):
st = Stack()
matches = True
for c in eq:
if c == pair[0]:
st.add(c)
elif c == pair[1]:
if st.size():
st.pop()
else:
matches = False
break
if st.size():
matches = False
return matches
eq1 = '(a + b) * (c + d)'
check_parens(eq=eq1)
Explanation: Example 1: Matching Parentheses
Another common application of stacks -- next to the "back" button example mentioned earlier -- is syntax checking; matching opening and closing parentheses (e.g., in regex, math equations, etc.) to be specific.
End of explanation
eq2 = '(a + b) * (c + d))'
check_parens(eq=eq2)
eq3 = 'a + b) * (c + d)'
check_parens(eq=eq3)
Explanation: The code above is relatively simple, yet effective. We initialize a new Stack and set the matches variable to true -- we use the latter to keep track of whether the parentheses are matched or not. Next, we use a for loop to iterate through the string. If we encounter an opening parenthesis, we add it to the stack. If we encounter a closing parenthesis, we try to remove the last opening bracket from the stack. If we encounter a closing bracket but the stack is empty, we already know that the parentheses are not matched, and we can break out of the for loop early.
After we have finished iterating through the string, we check whether our stack is empty. If it is not, the parentheses are not matched.
Here are two more examples:
End of explanation
def decimal_to_binary(number):
st = Stack()
while number > 0:
remainder = number % 2
st.add(remainder)
number = number // 2
binary = ''
while st.size():
binary += str(st.peek())
st.pop()
return binary
Explanation: Example 2: Decimal to Binary Conversion
Now, let's look at another example, where we are using a stack to convert digits from the decimal into the binary system. For example, the decimal number 135 would be represented as the number 10000111 in the binary system, since
$$1 \times 2^7 + 0 \times 2^6 + 0 \times 2^5 + 0 \times 2^4 + 0 \times 2^3 + 1 \times 2^2 + 1 \times 2^1 + 1 \times 2^0 = 135.$$
End of explanation
decimal_to_binary(135)
Explanation: Our conversion function is simple, we iteratively divide the integer number by 2 until we arrive at 0, and for each division, we add the remainder (a 0 or 1) to the stack. Finally, we remove the items one by one from the stack to build a string notation of the binary number.
End of explanation |
3,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to Multi-fidelity Modeling in Emukit
Overview
A common issue encountered when applying machine learning to environmental sciences and engineering problems is the difficulty or cost required to obtain sufficient data for building robust models.
Possible examples include aerospace and nautical engineering, where it is both infeasible and prohibitively expensive to run a vast number of experiments using the actual vehicle.
Even when there is no physical artifact involved, such as in climate modeling, data may still be hard to obtain when these can only be collected by running an expensive computer experiment, where the time required to acquire an individual data sample restricts the volume of data that can later be used for modeling.
Constructing a reliable model when only few observations are available is challenging, which is why it is common practice to develop <i>simulators</i> of the actual system, from which data points can be more easily obtained.
In engineering applications, such simulators often take the form of Computational Fluid Dynamics (CFD) tools which approximate the behaviour of the true artifact for a given design or configuration.
However, although it is now possible to obtain more data samples, it is highly unlikely that these simulators model the true system exactly; instead, these are expected to contain some degree of bias and/or noise.
From the above, one can deduce that naively combining observations from multiple information sources could result in the model giving biased predictions which do not accurately reflect the true problem.
To this end, <b>multi-fidelity models</b> are designed to augment the limited true observations available with cheaply-obtained approximations in a principled manner.
In such models, observations obtained from the true source are referred to as <i>high-fidelity</i> observations, whereas approximations are denoted as being <i>low-fidelity</i>.
These low-fidelity observations are then systematically combined with the more accurate (but limited) observations in order to predict the high-fidelity output more effectively.
Note that we can generally combine information from multiple lower fidelity sources, which can all be seen as auxiliary tasks in support of a single primary task.
In this notebook, we shall investigate a selection of multi-fidelity models based on Gaussian processes which are readily available in <b style="color:#EB9100">Emukit</b>.
Step1: 1. Linear multi-fidelity model
The linear multi-fidelity model proposed in [Kennedy and O'Hagan, 2000] is widely viewed as a reference point for all such models.
In this model, the high-fidelity (true) function is modeled as a scaled sum of the low-fidelity function plus an error term
Step2: The inputs to the models are expected to take the form of ndarrays where the last column indicates the fidelity of the observed points.
Although only the input points, $X$, are augmented with the fidelity level, the observed outputs $Y$ must also be converted to array form.
For example, a dataset consisting of 3 low-fidelity points and 2 high-fidelity points would be represented as follows, where the input is three-dimensional while the output is one-dimensional
Step3: Observe that in the example above we restrict our observations to 12 from the lower fidelity function and only 6 from the high fidelity function.
As we shall demonstrate further below, fitting a standard GP model to the few high fidelity observations is unlikely to result in an acceptable fit, which is why we shall instead consider the linear multi-fidelity model presented in this section.
<br>
Below we fit a linear multi-fidelity model to the available low and high fidelity observations.
Given the smoothness of the functions, we opt to use an <i>RBF</i> kernel for both the bias and correlation components of the model.
Note
Step4: The above plot demonstrates how the multi-fidelity model learns the relationship between the low and high-fidelity observations in order to model both of the corresponding functions.
In this example, the posterior mean almost fits the true function exactly, while the associated uncertainty returned by the model is also appropriately small given the good fit.
1.2 Comparison to standard GP
In the absence of such a multi-fidelity model, a regular Gaussian process would have been fit exclusively to the high fidelity data.
As illustrated in the figure below, the resulting Gaussian process posterior yields a much worse fit to the data than that obtained by the multi-fidelity model.
The uncertainty estimates are also poorly calibrated.
Step5: 2. Nonlinear multi-fidelity model
Although the model described above works well when the mapping between the low and high-fidelity functions is linear, several issues may be encountered when this is not the case.
Consider the following example, where the low and high fidelity functions are defined as follows
Step6: In this case, the mapping between the two functions is nonlinear, as can be observed by plotting the high fidelity observations as a function of the lower fidelity observations.
Step7: 2.1 Failure of linear multi-fidelity model
Below we fit the linear multi-fidelity model to this new problem and plot the results.
Step8: As expected, the linear multi-fidelity model was unable to capture the nonlinear relationship between the low and high-fidelity data.
Consequently, the resulting fit of the true function is also poor.
2.2 Nonlinear Multi-fidelity model
In view of the deficiencies of the linear multi-fidelity model, a nonlinear multi-fidelity model is proposed in [Perdikaris et al, 2017] in order to better capture these correlations.
This nonlinear model is constructed as follows
Step9: Fitting the nonlinear multi-fidelity model to the available data produces a fit that follows the high-fidelity function very closely, while also fitting the low-fidelity function exactly.
This is a vast improvement over the results obtained using the linear model.
We can also confirm that the model is properly capturing the correlation between the low and high-fidelity observations by plotting the mapping learned by the model to the true mapping shown earlier. | Python Code:
# General imports
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)
%matplotlib inline
np.random.seed(20)
Explanation: An Introduction to Multi-fidelity Modeling in Emukit
Overview
A common issue encountered when applying machine learning to environmental sciences and engineering problems is the difficulty or cost required to obtain sufficient data for building robust models.
Possible examples include aerospace and nautical engineering, where it is both infeasible and prohibitively expensive to run a vast number of experiments using the actual vehicle.
Even when there is no physical artifact involved, such as in climate modeling, data may still be hard to obtain when these can only be collected by running an expensive computer experiment, where the time required to acquire an individual data sample restricts the volume of data that can later be used for modeling.
Constructing a reliable model when only few observations are available is challenging, which is why it is common practice to develop <i>simulators</i> of the actual system, from which data points can be more easily obtained.
In engineering applications, such simulators often take the form of Computational Fluid Dynamics (CFD) tools which approximate the behaviour of the true artifact for a given design or configuration.
However, although it is now possible to obtain more data samples, it is highly unlikely that these simulators model the true system exactly; instead, these are expected to contain some degree of bias and/or noise.
From the above, one can deduce that naively combining observations from multiple information sources could result in the model giving biased predictions which do not accurately reflect the true problem.
To this end, <b>multi-fidelity models</b> are designed to augment the limited true observations available with cheaply-obtained approximations in a principled manner.
In such models, observations obtained from the true source are referred to as <i>high-fidelity</i> observations, whereas approximations are denoted as being <i>low-fidelity</i>.
These low-fidelity observations are then systematically combined with the more accurate (but limited) observations in order to predict the high-fidelity output more effectively.
Note that we can generally combine information from multiple lower fidelity sources, which can all be seen as auxiliary tasks in support of a single primary task.
In this notebook, we shall investigate a selection of multi-fidelity models based on Gaussian processes which are readily available in <b style="color:#EB9100">Emukit</b>.
We start by investigating the traditional linear multi-fidelity model as proposed in [Kennedy and O'Hagan, 2000].
Subsequently, we shall illustrate why this model can be unsuitable when the mapping from low to high-fidelity observations is nonlinear, and demonstrate how an alternate model proposed in [Pedikaris et al. 2017] can alleviate this issue.
The examples presented in this notebook can then be easily adapted to a variety of problem settings.
Navigation
Linear multi-fidelity model
Nonlinear multi-fidelity model
References
End of explanation
import GPy
import emukit.multi_fidelity
import emukit.test_functions
from emukit.model_wrappers.gpy_model_wrappers import GPyMultiOutputWrapper
from emukit.multi_fidelity.models import GPyLinearMultiFidelityModel
## Generate samples from the Forrester function
high_fidelity = emukit.test_functions.forrester.forrester
low_fidelity = emukit.test_functions.forrester.forrester_low
x_plot = np.linspace(0, 1, 200)[:, None]
y_plot_l = low_fidelity(x_plot)
y_plot_h = high_fidelity(x_plot)
x_train_l = np.atleast_2d(np.random.rand(12)).T
x_train_h = np.atleast_2d(np.random.permutation(x_train_l)[:6])
y_train_l = low_fidelity(x_train_l)
y_train_h = high_fidelity(x_train_h)
Explanation: 1. Linear multi-fidelity model
The linear multi-fidelity model proposed in [Kennedy and O'Hagan, 2000] is widely viewed as a reference point for all such models.
In this model, the high-fidelity (true) function is modeled as a scaled sum of the low-fidelity function plus an error term:
$$
f_{high}(x) = f_{err}(x) + \rho \,f_{low}(x)
$$
In this equation, $f_{low}(x)$ is taken to be a Gaussian process modeling the outputs of the lower fidelity function, while $\rho$ is a scaling factor indicating the magnitude of the correlation to the high-fidelity data.
Setting this to 0 implies that there is no correlation between observations at different fidelities.
Meanwhile, $f_{err}(x)$ denotes yet another Gaussian process which models the bias term for the high-fidelity data.
Note that $f_{err}(x)$ and $f_{low}(x)$ are assumed to be independent processes which are only related by the equation given above.
Note: While we shall limit our explanation to the case of two fidelities, this set-up can easily be generalized to cater for $T$ fidelities as follows:
$$f_{t}(x) = f_{err,t}(x) + \rho_{t-1} \,f_{t-1}(x), \quad t=1,\dots, T$$
If the training points are sorted such that the low and high-fidelity points are grouped together:
$$
\begin{pmatrix}
X_{low} \\
X_{high}
\end{pmatrix}
$$
we can express the model as a single Gaussian process having the following prior:
$$
\begin{bmatrix}
f_{low}\left(h\right)\\
f_{high}\left(h\right)
\end{bmatrix}
\sim
GP
\begin{pmatrix}
\begin{bmatrix}
0 \\ 0
\end{bmatrix},
\begin{bmatrix}
k_{low} & \rho k_{low} \\
\rho k_{low} & \rho^2 k_{low} + k_{err}
\end{bmatrix}
\end{pmatrix}
$$
1.1 Linear multi-fidelity modeling in Emukit
As a first example of how the linear multi-fidelity model implemented in Emukit (emukit.multi_fidelity.models.GPyLinearMultiFidelityModel) can be used, we shall consider the two-fidelity Forrester function.
This benchmark is frequently used to illustrate the capabilities of multi-fidelity models.
End of explanation
## Convert lists of arrays to ndarrays augmented with fidelity indicators
from emukit.multi_fidelity.convert_lists_to_array import convert_x_list_to_array, convert_xy_lists_to_arrays
X_train, Y_train = convert_xy_lists_to_arrays([x_train_l, x_train_h], [y_train_l, y_train_h])
## Plot the original functions
plt.figure(figsize=(12, 8))
plt.plot(x_plot, y_plot_l, 'b')
plt.plot(x_plot, y_plot_h, 'r')
plt.scatter(x_train_l, y_train_l, color='b', s=40)
plt.scatter(x_train_h, y_train_h, color='r', s=40)
plt.ylabel('f (x)')
plt.xlabel('x')
plt.legend(['Low fidelity', 'High fidelity'])
plt.title('High and low fidelity Forrester functions');
Explanation: The inputs to the models are expected to take the form of ndarrays where the last column indicates the fidelity of the observed points.
Although only the input points, $X$, are augmented with the fidelity level, the observed outputs $Y$ must also be converted to array form.
For example, a dataset consisting of 3 low-fidelity points and 2 high-fidelity points would be represented as follows, where the input is three-dimensional while the output is one-dimensional:
$$
X =
\begin{pmatrix}
x_{low;0}^0 & x_{low;0}^1 & x_{low;0}^2 & 0\\
x_{low;1}^0 & x_{low;1}^1 & x_{low;1}^2 & 0\\
x_{low;2}^0 & x_{low;2}^1 & x_{low;2}^2 & 0\\
x_{high;0}^0 & x_{high;0}^1 & x_{high;0}^2 & 1\\
x_{high;1}^0 & x_{high;1}^1 & x_{high;1}^2 & 1
\end{pmatrix}\quad
Y = \begin{pmatrix}
y_{low;0}\\
y_{low;1}\\
y_{low;2}\\
y_{high;0}\\
y_{high;1}
\end{pmatrix}
$$
A similar procedure must be carried out for obtaining predictions at new test points, where the value in the final column indicates the fidelity at which the function should be predicted for a given point.
For convenience of use, we provide helper methods for easily converting between a list of arrays (ordered from the lowest to the highest fidelity) and the required ndarray representation. This is found in emukit.multi_fidelity.convert_lists_to_array.
End of explanation
## Construct a linear multi-fidelity model
kernels = [GPy.kern.RBF(1), GPy.kern.RBF(1)]
lin_mf_kernel = emukit.multi_fidelity.kernels.LinearMultiFidelityKernel(kernels)
gpy_lin_mf_model = GPyLinearMultiFidelityModel(X_train, Y_train, lin_mf_kernel, n_fidelities=2)
gpy_lin_mf_model.mixed_noise.Gaussian_noise.fix(0)
gpy_lin_mf_model.mixed_noise.Gaussian_noise_1.fix(0)
## Wrap the model using the given 'GPyMultiOutputWrapper'
lin_mf_model = model = GPyMultiOutputWrapper(gpy_lin_mf_model, 2, n_optimization_restarts=5)
## Fit the model
lin_mf_model.optimize()
## Convert x_plot to its ndarray representation
X_plot = convert_x_list_to_array([x_plot, x_plot])
X_plot_l = X_plot[:len(x_plot)]
X_plot_h = X_plot[len(x_plot):]
## Compute mean predictions and associated variance
lf_mean_lin_mf_model, lf_var_lin_mf_model = lin_mf_model.predict(X_plot_l)
lf_std_lin_mf_model = np.sqrt(lf_var_lin_mf_model)
hf_mean_lin_mf_model, hf_var_lin_mf_model = lin_mf_model.predict(X_plot_h)
hf_std_lin_mf_model = np.sqrt(hf_var_lin_mf_model)
## Plot the posterior mean and variance
plt.figure(figsize=(12, 8))
plt.fill_between(x_plot.flatten(), (lf_mean_lin_mf_model - 1.96*lf_std_lin_mf_model).flatten(),
(lf_mean_lin_mf_model + 1.96*lf_std_lin_mf_model).flatten(), facecolor='g', alpha=0.3)
plt.fill_between(x_plot.flatten(), (hf_mean_lin_mf_model - 1.96*hf_std_lin_mf_model).flatten(),
(hf_mean_lin_mf_model + 1.96*hf_std_lin_mf_model).flatten(), facecolor='y', alpha=0.3)
plt.plot(x_plot, y_plot_l, 'b')
plt.plot(x_plot, y_plot_h, 'r')
plt.plot(x_plot, lf_mean_lin_mf_model, '--', color='g')
plt.plot(x_plot, hf_mean_lin_mf_model, '--', color='y')
plt.scatter(x_train_l, y_train_l, color='b', s=40)
plt.scatter(x_train_h, y_train_h, color='r', s=40)
plt.ylabel('f (x)')
plt.xlabel('x')
plt.legend(['Low Fidelity', 'High Fidelity', 'Predicted Low Fidelity', 'Predicted High Fidelity'])
plt.title('Linear multi-fidelity model fit to low and high fidelity Forrester function');
Explanation: Observe that in the example above we restrict our observations to 12 from the lower fidelity function and only 6 from the high fidelity function.
As we shall demonstrate further below, fitting a standard GP model to the few high fidelity observations is unlikely to result in an acceptable fit, which is why we shall instead consider the linear multi-fidelity model presented in this section.
<br>
Below we fit a linear multi-fidelity model to the available low and high fidelity observations.
Given the smoothness of the functions, we opt to use an <i>RBF</i> kernel for both the bias and correlation components of the model.
Note: The model implementation defaults to a MixedNoise noise likelihood whereby there is independent Gaussian noise for each fidelity.
This can be modified upfront using the 'likelihood' parameter in the model constructor, or by updating them directly after the model has been created.
In the example below, we choose to fix the noise to '0' for both fidelities in order to reflect that the observations are exact.
End of explanation
## Create standard GP model using only high-fidelity data
kernel = GPy.kern.RBF(1)
high_gp_model = GPy.models.GPRegression(x_train_h, y_train_h, kernel)
high_gp_model.Gaussian_noise.fix(0)
## Fit the GP model
high_gp_model.optimize_restarts(5)
## Compute mean predictions and associated variance
hf_mean_high_gp_model, hf_var_high_gp_model = high_gp_model.predict(x_plot)
hf_std_hf_gp_model = np.sqrt(hf_var_high_gp_model)
## Plot the posterior mean and variance for the high-fidelity GP model
plt.figure(figsize=(12, 8))
plt.fill_between(x_plot.flatten(), (hf_mean_lin_mf_model - 1.96*hf_std_lin_mf_model).flatten(),
(hf_mean_lin_mf_model + 1.96*hf_std_lin_mf_model).flatten(), facecolor='y', alpha=0.3)
plt.fill_between(x_plot.flatten(), (hf_mean_high_gp_model - 1.96*hf_std_hf_gp_model).flatten(),
(hf_mean_high_gp_model + 1.96*hf_std_hf_gp_model).flatten(), facecolor='k', alpha=0.1)
plt.plot(x_plot, y_plot_h, color='r')
plt.plot(x_plot, hf_mean_lin_mf_model, '--', color='y')
plt.plot(x_plot, hf_mean_high_gp_model, 'k--')
plt.scatter(x_train_h, y_train_h, color='r')
plt.xlabel('x')
plt.ylabel('f (x)')
plt.legend(['True Function', 'Linear Multi-fidelity GP', 'High fidelity GP'])
plt.title('Comparison of linear multi-fidelity model and high fidelity GP');
Explanation: The above plot demonstrates how the multi-fidelity model learns the relationship between the low and high-fidelity observations in order to model both of the corresponding functions.
In this example, the posterior mean almost fits the true function exactly, while the associated uncertainty returned by the model is also appropriately small given the good fit.
1.2 Comparison to standard GP
In the absence of such a multi-fidelity model, a regular Gaussian process would have been fit exclusively to the high fidelity data.
As illustrated in the figure below, the resulting Gaussian process posterior yields a much worse fit to the data than that obtained by the multi-fidelity model.
The uncertainty estimates are also poorly calibrated.
End of explanation
## Generate data for nonlinear example
high_fidelity = emukit.test_functions.non_linear_sin.nonlinear_sin_high
low_fidelity = emukit.test_functions.non_linear_sin.nonlinear_sin_low
x_plot = np.linspace(0, 1, 200)[:, None]
y_plot_l = low_fidelity(x_plot)
y_plot_h = high_fidelity(x_plot)
n_low_fidelity_points = 50
n_high_fidelity_points = 14
x_train_l = np.linspace(0, 1, n_low_fidelity_points)[:, None]
y_train_l = low_fidelity(x_train_l)
x_train_h = x_train_l[::4, :]
y_train_h = high_fidelity(x_train_h)
### Convert lists of arrays to ND-arrays augmented with fidelity indicators
X_train, Y_train = convert_xy_lists_to_arrays([x_train_l, x_train_h], [y_train_l, y_train_h])
plt.figure(figsize=(12, 8))
plt.plot(x_plot, y_plot_l, 'b')
plt.plot(x_plot, y_plot_h, 'r')
plt.scatter(x_train_l, y_train_l, color='b', s=40)
plt.scatter(x_train_h, y_train_h, color='r', s=40)
plt.xlabel('x')
plt.ylabel('f (x)')
plt.xlim([0, 1])
plt.legend(['Low fidelity', 'High fidelity'])
plt.title('High and low fidelity functions');
Explanation: 2. Nonlinear multi-fidelity model
Although the model described above works well when the mapping between the low and high-fidelity functions is linear, several issues may be encountered when this is not the case.
Consider the following example, where the low and high fidelity functions are defined as follows:
$$
f_{low}(x) = \sin(8\pi x)
$$
$$
f_{high}(x) = (x - \sqrt{2}) \, f_{low}^2(x)
$$
End of explanation
plt.figure(figsize=(12,8))
plt.ylabel('HF(x)')
plt.xlabel('LF(x)')
plt.plot(y_plot_l, y_plot_h, color=colors['purple'])
plt.title('Mapping from low fidelity to high fidelity')
plt.legend(['HF-LF Correlation'], loc='lower center');
Explanation: In this case, the mapping between the two functions is nonlinear, as can be observed by plotting the high fidelity observations as a function of the lower fidelity observations.
End of explanation
## Construct a linear multi-fidelity model
kernels = [GPy.kern.RBF(1), GPy.kern.RBF(1)]
lin_mf_kernel = emukit.multi_fidelity.kernels.LinearMultiFidelityKernel(kernels)
gpy_lin_mf_model = GPyLinearMultiFidelityModel(X_train, Y_train, lin_mf_kernel, n_fidelities=2)
gpy_lin_mf_model.mixed_noise.Gaussian_noise.fix(0)
gpy_lin_mf_model.mixed_noise.Gaussian_noise_1.fix(0)
lin_mf_model = model = GPyMultiOutputWrapper(gpy_lin_mf_model, 2, n_optimization_restarts=5)
## Fit the model
lin_mf_model.optimize()
## Convert test points to appropriate representation
X_plot = convert_x_list_to_array([x_plot, x_plot])
X_plot_low = X_plot[:200]
X_plot_high = X_plot[200:]
## Compute mean and variance predictions
hf_mean_lin_mf_model, hf_var_lin_mf_model = lin_mf_model.predict(X_plot_high)
hf_std_lin_mf_model = np.sqrt(hf_var_lin_mf_model)
## Compare linear and nonlinear model fits
plt.figure(figsize=(12,8))
plt.plot(x_plot, y_plot_h, 'r')
plt.plot(x_plot, hf_mean_lin_mf_model, '--', color='y')
plt.scatter(x_train_h, y_train_h, color='r')
plt.fill_between(x_plot.flatten(), (hf_mean_lin_mf_model - 1.96*hf_std_lin_mf_model).flatten(),
(hf_mean_lin_mf_model + 1.96*hf_std_lin_mf_model).flatten(), color='y', alpha=0.3)
plt.xlim(0, 1)
plt.xlabel('x')
plt.ylabel('f (x)')
plt.legend(['True Function', 'Linear multi-fidelity GP'], loc='lower right')
plt.title('Linear multi-fidelity model fit to high fidelity function');
Explanation: 2.1 Failure of linear multi-fidelity model
Below we fit the linear multi-fidelity model to this new problem and plot the results.
End of explanation
## Create nonlinear model
from emukit.multi_fidelity.models.non_linear_multi_fidelity_model import make_non_linear_kernels, NonLinearMultiFidelityModel
base_kernel = GPy.kern.RBF
kernels = make_non_linear_kernels(base_kernel, 2, X_train.shape[1] - 1)
nonlin_mf_model = NonLinearMultiFidelityModel(X_train, Y_train, n_fidelities=2, kernels=kernels,
verbose=True, optimization_restarts=5)
for m in nonlin_mf_model.models:
m.Gaussian_noise.variance.fix(0)
nonlin_mf_model.optimize()
## Compute mean and variance predictions
hf_mean_nonlin_mf_model, hf_var_nonlin_mf_model = nonlin_mf_model.predict(X_plot_high)
hf_std_nonlin_mf_model = np.sqrt(hf_var_nonlin_mf_model)
lf_mean_nonlin_mf_model, lf_var_nonlin_mf_model = nonlin_mf_model.predict(X_plot_low)
lf_std_nonlin_mf_model = np.sqrt(lf_var_nonlin_mf_model)
## Plot posterior mean and variance of nonlinear multi-fidelity model
plt.figure(figsize=(12,8))
plt.fill_between(x_plot.flatten(), (lf_mean_nonlin_mf_model - 1.96*lf_std_nonlin_mf_model).flatten(),
(lf_mean_nonlin_mf_model + 1.96*lf_std_nonlin_mf_model).flatten(), color='g', alpha=0.3)
plt.fill_between(x_plot.flatten(), (hf_mean_nonlin_mf_model - 1.96*hf_std_nonlin_mf_model).flatten(),
(hf_mean_nonlin_mf_model + 1.96*hf_std_nonlin_mf_model).flatten(), color='y', alpha=0.3)
plt.plot(x_plot, y_plot_l, 'b')
plt.plot(x_plot, y_plot_h, 'r')
plt.plot(x_plot, lf_mean_nonlin_mf_model, '--', color='g')
plt.plot(x_plot, hf_mean_nonlin_mf_model, '--', color='y')
plt.scatter(x_train_h, y_train_h, color='r')
plt.xlabel('x')
plt.ylabel('f (x)')
plt.xlim(0, 1)
plt.legend(['Low Fidelity', 'High Fidelity', 'Predicted Low Fidelity', 'Predicted High Fidelity'])
plt.title('Nonlinear multi-fidelity model fit to low and high fidelity functions');
Explanation: As expected, the linear multi-fidelity model was unable to capture the nonlinear relationship between the low and high-fidelity data.
Consequently, the resulting fit of the true function is also poor.
2.2 Nonlinear Multi-fidelity model
In view of the deficiencies of the linear multi-fidelity model, a nonlinear multi-fidelity model is proposed in [Perdikaris et al, 2017] in order to better capture these correlations.
This nonlinear model is constructed as follows:
$$ f_{high}(x) = \rho(f_{low}(x)) + \delta(x) $$
Replacing the linear scaling factor with a non-deterministic function results in a model which can thus capture the nonlinear relationship between the fidelities.
This model is implemented in Emukit as emukit.multi_fidelity.models.NonLinearModel.
It is defined in a sequential manner where a Gaussian process model is trained for every set of fidelity data available.
Once again, we manually fix the noise parameter for each model to 0.
The parameters of the two Gaussian processes are then optimized sequentially, starting from the low-fidelity.
End of explanation
plt.figure(figsize=(12,8))
plt.ylabel('HF(x)')
plt.xlabel('LF(x)')
plt.plot(y_plot_l, y_plot_h, '-', color=colors['purple'])
plt.plot(lf_mean_nonlin_mf_model, hf_mean_nonlin_mf_model, 'k--')
plt.legend(['True HF-LF Correlation', 'Learned HF-LF Correlation'], loc='lower center')
plt.title('Mapping from low fidelity to high fidelity');
Explanation: Fitting the nonlinear multi-fidelity model to the available data produces a fit that follows the high-fidelity function very closely, while also fitting the low-fidelity function exactly.
This is a vast improvement over the results obtained using the linear model.
We can also confirm that the model is properly capturing the correlation between the low and high-fidelity observations by plotting the mapping learned by the model to the true mapping shown earlier.
End of explanation |
3,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary of Python data structures
Tuples, Lists, Sets, Dictionaries, List comprehensions, Functions, Classes
Let's look at examples of data structures in Python
Tuples
Tuples are the simplest type of structure; they can store more than one type of data in a single variable.
Step1: The downside of tuples is that they are immutable
Step2: Lists
Lists are mutable sequences of elements
Step3: Which is faster
Step4: A note for R users
Step5: Dictionaries
In a great many problems, you want to store keys and assign a value to each key.
A better name for a dictionary would be a "telephone directory"
Step6: Sets
Sets are mathematical sets
Step7: Conditionals and Loops: For, While, If, Elif
A handy trick for writing loops in Python is the range function
Step8: Classes
x = (1,2,3,0,2,1)
x
x = (0, 'Hola', (1,2))
x[1]
Explanation: Summary of Python data structures
Tuples, Lists, Sets, Dictionaries, List comprehensions, Functions, Classes
Let's look at examples of data structures in Python
Tuples
Tuples are the simplest type of structure; they can store more than one type of data in a single variable.
End of explanation
id(x)
x = (0, 'Cambio', (1,2))
id(x)
x
Explanation: The downside of tuples is that they are immutable
End of explanation
x = [1,2,3]
x.append('Nuevo valor')
x
x.insert(2, 'Valor Intermedio')
x
Explanation: Lists
Lists are mutable sequences of elements
End of explanation
import timeit
timeit.timeit('x = (1,2,3,4,5,6)')
timeit.timeit('x = [1,2,3,4,5,6]')
Explanation: Which is faster: tuples or lists?
End of explanation
x = [1,2,3] # assignment: create a new list and bind it to x
y = [0, x] # reference: y stores a reference to the same list object x
y
x[0] = -1 # modify an element of x in place
y # the change made through x is visible in y (y points to x)
Explanation: A note for R users: reference or assignment?
End of explanation
dir_tel = {'juan':5512345, 'pedro':5554321, 'itam':'is fun'}
dir_tel['juan']
dir_tel.keys()
dir_tel.values()
Explanation: Dictionaries
In a great many problems, you want to store keys and assign a value to each key.
A better name for a dictionary would be a "telephone directory"
End of explanation
A = set([1,2,3])
B = set([2,3,4])
A | B # Union
A & B # Intersection
A - B # Set difference
A ^ B # Symmetric difference
Explanation: Sets
Sets are mathematical sets
End of explanation
range(1000)
for i in range(5):
print(i)
for i in range(10):
if i % 2 == 0:
print(str(i) + ' Par')
else:
print(str(i) + ' Impar')
i = 0
while i < 10:
print(i)
i = i + 1
Explanation: Conditionals and Loops: For, While, If, Elif
A handy trick for writing loops in Python is the range function
End of explanation
class Person:
def __init__(self, first, last):
self.first = first
self.last = last
def greet(self, add_msg = ''):
print('Hello ' + self.first + ' ' + add_msg)
juan = Person('juan', 'dominguez')
juan.first
juan.greet()
Explanation: Classes
End of explanation |
3,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
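As a small, self-contained illustration of that resize-then-convolve pattern (a sketch only -- the placeholder shape and the depth of 32 here are made up for this example and are not part of the network we build below):
import tensorflow as tf
narrow = tf.placeholder(tf.float32, (None, 7, 7, 8))            # some narrow representation
upsampled = tf.image.resize_nearest_neighbor(narrow, (14, 14))  # nearest neighbor upsampling
widened = tf.layers.conv2d(upsampled, 32, (3, 3), padding='same', activation=tf.nn.relu)  # convolution after the resize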
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
n_elements = 28*28
inputs_ = tf.placeholder(tf.float32,(None,28,28,1))
targets_ = tf.placeholder(tf.float32,(None,28,28,1))
### Encoder
conv1 = tf.layers.conv2d(inputs_,16,(3,3),padding='same',activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1,(2,2),(2,2),padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2,(2,2),(2,2),padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3,(2,2),(2,2),padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(encoded,(7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d_transpose(upsample1,8,(3,3), padding='same')
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4,(14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d_transpose(upsample2,8,(3,3), padding='same')
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5,(28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d_transpose(upsample3,16,(3,3), padding='same')
# Now 28x28x16
logits = tf.layers.conv2d(conv6,1,(3,3),padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 1
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible way to fill in the exercise, following the suggested 32-32-16 depths
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
3,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Simple Activation Atlas
This notebook uses Lucid to reproduce the results in Activation Atlas.
This notebook doesn't introduce the abstractions behind lucid; you may wish to also read the Lucid tutorial.
Note
Step2: Load model and activations
Step3: Whiten
Step5: Dimensionality reduction
Step6: Feature visualization
Step7: Grid | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip -q install lucid>=0.3.8
!pip -q install umap-learn>=0.3.7
# General support
import math
import tensorflow as tf
import numpy as np
# For plots
import matplotlib.pyplot as plt
# Dimensionality reduction
import umap
from sklearn.manifold import TSNE
# General lucid code
from lucid.misc.io import save, show, load
import lucid.modelzoo.vision_models as models
# For rendering feature visualizations
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
Explanation: Simple Activation Atlas
This notebook uses Lucid to reproduce the results in Activation Atlas.
This notebook doesn't introduce the abstractions behind lucid; you may wish to also read the Lucid tutorial.
Note: The easiest way to use this tutorial is as a colab notebook, which allows you to dive in with no setup. We recommend you enable a free GPU by going:
Runtime → Change runtime type → Hardware Accelerator: GPU
Install and imports
End of explanation
model = models.InceptionV1()
model.load_graphdef()
# model.layers[7] is "mixed4c"
layer = "mixed4c"
print(model.layers[7])
raw_activations = model.layers[7].activations
activations = raw_activations[:100000]
print(activations.shape)
Explanation: Load model and activations
End of explanation
def whiten(full_activations):
correl = np.matmul(full_activations.T, full_activations) / len(full_activations)
correl = correl.astype("float32")
S = np.linalg.inv(correl)
S = S.astype("float32")
return S
S = whiten(raw_activations)
Explanation: Whiten
End of explanation
def normalize_layout(layout, min_percentile=1, max_percentile=99, relative_margin=0.1):
    """Removes outliers and scales layout to between [0,1]."""
# compute percentiles
mins = np.percentile(layout, min_percentile, axis=(0))
maxs = np.percentile(layout, max_percentile, axis=(0))
# add margins
mins -= relative_margin * (maxs - mins)
maxs += relative_margin * (maxs - mins)
# `clip` broadcasts, `[None]`s added only for readability
clipped = np.clip(layout, mins, maxs)
# embed within [0,1] along both axes
clipped -= clipped.min(axis=0)
clipped /= clipped.max(axis=0)
return clipped
layout = umap.UMAP(n_components=2, verbose=True, n_neighbors=20, min_dist=0.01, metric="cosine").fit_transform(activations)
## You can optionally use TSNE as well
# layout = TSNE(n_components=2, verbose=True, metric="cosine", learning_rate=10, perplexity=50).fit_transform(d)
layout = normalize_layout(layout)
plt.figure(figsize=(10, 10))
plt.scatter(x=layout[:,0],y=layout[:,1], s=2)
plt.show()
Explanation: Dimensionality reduction
End of explanation
#
# Whitened, euclidean neuron objective
#
@objectives.wrap_objective
def direction_neuron_S(layer_name, vec, batch=None, x=None, y=None, S=None):
def inner(T):
layer = T(layer_name)
shape = tf.shape(layer)
x_ = shape[1] // 2 if x is None else x
y_ = shape[2] // 2 if y is None else y
if batch is None:
raise RuntimeError("requires batch")
acts = layer[batch, x_, y_]
vec_ = vec
if S is not None: vec_ = tf.matmul(vec_[None], S)[0]
# mag = tf.sqrt(tf.reduce_sum(acts**2))
dot = tf.reduce_mean(acts * vec_)
# cossim = dot/(1e-4 + mag)
return dot
return inner
#
# Whitened, cosine similarity objective
#
@objectives.wrap_objective
def direction_neuron_cossim_S(layer_name, vec, batch=None, x=None, y=None, cossim_pow=1, S=None):
def inner(T):
layer = T(layer_name)
shape = tf.shape(layer)
x_ = shape[1] // 2 if x is None else x
y_ = shape[2] // 2 if y is None else y
if batch is None:
raise RuntimeError("requires batch")
acts = layer[batch, x_, y_]
vec_ = vec
if S is not None: vec_ = tf.matmul(vec_[None], S)[0]
mag = tf.sqrt(tf.reduce_sum(acts**2))
dot = tf.reduce_mean(acts * vec_)
cossim = dot/(1e-4 + mag)
cossim = tf.maximum(0.1, cossim)
return dot * cossim ** cossim_pow
return inner
#
# Renders a batch of activations as icons
#
def render_icons(directions, model, layer, size=80, n_steps=128, verbose=False, S=None, num_attempts=2, cossim=True, alpha=True):
image_attempts = []
loss_attempts = []
# Render multiple attempts, and pull the one with the lowest loss score.
for attempt in range(num_attempts):
# Render an image for each activation vector
param_f = lambda: param.image(size, batch=directions.shape[0], fft=True, decorrelate=True, alpha=alpha)
if(S is not None):
if(cossim is True):
obj_list = ([
direction_neuron_cossim_S(layer, v, batch=n, S=S, cossim_pow=4) for n,v in enumerate(directions)
])
else:
obj_list = ([
direction_neuron_S(layer, v, batch=n, S=S) for n,v in enumerate(directions)
])
else:
obj_list = ([
objectives.direction_neuron(layer, v, batch=n) for n,v in enumerate(directions)
])
obj = objectives.Objective.sum(obj_list)
transforms = []
if alpha:
transforms.append(transform.collapse_alpha_random())
transforms.append(transform.pad(2, mode='constant', constant_value=1))
transforms.append(transform.jitter(4))
transforms.append(transform.jitter(4))
transforms.append(transform.jitter(8))
transforms.append(transform.jitter(8))
transforms.append(transform.jitter(8))
transforms.append(transform.random_scale([0.995**n for n in range(-5,80)] + [0.998**n for n in 2*list(range(20,40))]))
transforms.append(transform.random_rotate(list(range(-20,20))+list(range(-10,10))+list(range(-5,5))+5*[0]))
transforms.append(transform.jitter(2))
# This is the tensorflow optimization process.
# We can't use the lucid helpers here because we need to know the loss.
print("attempt: ", attempt)
with tf.Graph().as_default(), tf.Session() as sess:
learning_rate = 0.05
losses = []
trainer = tf.train.AdamOptimizer(learning_rate)
T = render.make_vis_T(model, obj, param_f, trainer, transforms)
loss_t, vis_op, t_image = T("loss"), T("vis_op"), T("input")
losses_ = [obj_part(T) for obj_part in obj_list]
tf.global_variables_initializer().run()
for i in range(n_steps):
loss, _ = sess.run([losses_, vis_op])
losses.append(loss)
if (i % 100 == 0):
print(i)
img = t_image.eval()
img_rgb = img[:,:,:,:3]
if alpha:
print("alpha true")
k = 0.8
bg_color = 0.0
img_a = img[:,:,:,3:]
img_merged = img_rgb*((1-k)+k*img_a) + bg_color * k*(1-img_a)
image_attempts.append(img_merged)
else:
print("alpha false")
image_attempts.append(img_rgb)
loss_attempts.append(losses[-1])
# Use the icon with the lowest loss
loss_attempts = np.asarray(loss_attempts)
loss_final = []
image_final = []
print("Merging best scores from attempts...")
for i, d in enumerate(directions):
# note, this should be max, it is not a traditional loss
mi = np.argmax(loss_attempts[:,i])
image_final.append(image_attempts[mi][i])
return (image_final, loss_final)
Explanation: Feature visualization
End of explanation
#
# Takes a list of x,y layout and bins them into grid cells
#
def grid(xpts=None, ypts=None, grid_size=(8,8), x_extent=(0., 1.), y_extent=(0., 1.)):
xpx_length = grid_size[0]
ypx_length = grid_size[1]
xpt_extent = x_extent
ypt_extent = y_extent
xpt_length = xpt_extent[1] - xpt_extent[0]
ypt_length = ypt_extent[1] - ypt_extent[0]
xpxs = ((xpts - xpt_extent[0]) / xpt_length) * xpx_length
ypxs = ((ypts - ypt_extent[0]) / ypt_length) * ypx_length
ix_s = range(grid_size[0])
iy_s = range(grid_size[1])
xs = []
for xi in ix_s:
ys = []
for yi in iy_s:
xpx_extent = (xi, (xi + 1))
ypx_extent = (yi, (yi + 1))
in_bounds_x = np.logical_and(xpx_extent[0] <= xpxs, xpxs <= xpx_extent[1])
in_bounds_y = np.logical_and(ypx_extent[0] <= ypxs, ypxs <= ypx_extent[1])
in_bounds = np.logical_and(in_bounds_x, in_bounds_y)
in_bounds_indices = np.where(in_bounds)[0]
ys.append(in_bounds_indices)
xs.append(ys)
return np.asarray(xs)
def render_layout(model, layer, S, xs, ys, activ, n_steps=512, n_attempts=2, min_density=10, grid_size=(10, 10), icon_size=80, x_extent=(0., 1.0), y_extent=(0., 1.0)):
grid_layout = grid(xpts=xs, ypts=ys, grid_size=grid_size, x_extent=x_extent, y_extent=y_extent)
icons = []
for x in range(grid_size[0]):
for y in range(grid_size[1]):
indices = grid_layout[x, y]
if len(indices) > min_density:
average_activation = np.average(activ[indices], axis=0)
icons.append((average_activation, x, y))
icons = np.asarray(icons)
icon_batch, losses = render_icons(icons[:,0], model, alpha=False, layer=layer, S=S, n_steps=n_steps, size=icon_size, num_attempts=n_attempts)
canvas = np.ones((icon_size * grid_size[0], icon_size * grid_size[1], 3))
for i, icon in enumerate(icon_batch):
y = int(icons[i, 1])
x = int(icons[i, 2])
canvas[(grid_size[0] - x - 1) * icon_size:(grid_size[0] - x) * icon_size, (y) * icon_size:(y + 1) * icon_size] = icon
return canvas
#
# Given a layout, renders an icon for the average of all the activations in each grid cell.
#
xs = layout[:, 0]
ys = layout[:, 1]
canvas = render_layout(model, layer, S, xs, ys, raw_activations, n_steps=512, grid_size=(20, 20), n_attempts=1)
show(canvas)
Explanation: Grid
End of explanation |
3,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we will work through a Bayes Net analysis using the GES algorithm with the TETRAD software (http://www.phil.cmu.edu/tetrad/). We will use the same dataset used for the PPI and DCM examples.
Step1: Load the data generated using the DCM forward model. In this model, there is a significant static connectivity from 1->2 and 1->3 (A matrix), and a PPI for 0->2 and 0->4 (B matrix) and a significant input to ROI 0 (C matrix).
Step2: Generate a set of synthetic datasets, referring to individual subjects
Step3: Run iMAGES (using a shell script)
Step4: Show the graph estimated by iMAGES
Step5: Show the true graph from the DCM forward model | Python Code:
import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../')
from utils.mkdesign import create_design_singlecondition
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
from utils.make_data import make_continuous_data
from utils.graph_utils import show_graph_from_adjmtx,show_graph_from_pattern
from statsmodels.tsa.arima_process import arma_generate_sample
import scipy.stats
from dcm_sim import sim_dcm_dataset
results_dir = os.path.abspath("../results")
if not os.path.exists(results_dir):
os.mkdir(results_dir)
Explanation: In this notebook, we will work through a Bayes Net analysis using the GES algorithm with the TETRAD software (http://www.phil.cmu.edu/tetrad/). We will use the same dataset used for the PPI and DCM examples.
End of explanation
_,data_conv,params=sim_dcm_dataset(verbose=True)
A_mtx=params['A']
B_mtx=params['B']
u=params['u']
# downsample design to 1 second TR
u=numpy.convolve(params['u'],spm_hrf(params['stepsize'],oversampling=1))
u=u[range(0,data_conv.shape[0],int(1./params['stepsize']))]
ntp=u.shape[0]
Explanation: Load the data generated using the DCM forward model. In this model, there is a significant static connectivity from 1->2 and 1->3 (A matrix), and a PPI for 0->2 and 0->4 (B matrix) and a significant input to ROI 0 (C matrix).
End of explanation
tetrad_dir='/home/vagrant/data/tetrad_files'
if not os.path.exists(tetrad_dir):
os.mkdir(tetrad_dir)
nfiles=10
for i in range(nfiles):
_,data_conv,params=sim_dcm_dataset()
# downsample to 1 second TR
data=data_conv[range(0,data_conv.shape[0],int(1./params['stepsize']))]
ntp=data.shape[0]
imagesdata=numpy.hstack((numpy.array(u)[:,numpy.newaxis],data))
numpy.savetxt(os.path.join(tetrad_dir,"data%03d.txt"%i),
imagesdata,delimiter='\t',
header='u\t0\t1\t2\t3\t4',comments='')
Explanation: Generate a set of synthetic datasets, referring to individual subjects
End of explanation
!bash run_images.sh
Explanation: Run iMAGES (using a shell script)
End of explanation
g=show_graph_from_pattern('images_test/test.pattern.dot')
Explanation: Show the graph estimated by iMAGES
End of explanation
show_graph_from_adjmtx(A_mtx,B_mtx,params['C'])
Explanation: Show the true graph from the DCM forward model
End of explanation |
3,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data block API foundations
Step1: Jump_to lesson 11 video
Step2: Image ItemList
Previously we were reading in to RAM the whole MNIST dataset at once, loading it as a pickle file. We can't do that for datasets larger than our RAM capacity, so instead we leave the images on disk and just grab the ones we need for each mini-batch as we use them.
Let's use the imagenette dataset and build the data blocks we need along the way.
Get images
Step3: To be able to look at what's inside a directory from a notebook, we add the .ls method to Path with a monkey-patch.
Step4: Let's have a look inside a class folder (the first class is tench)
Step5: Just in case there are other files in the directory (models, texts...) we want to keep only the images. Let's not write it out by hand, but instead use what's already on our computer (the MIME types database).
Step6: Now let's walk through the directories and grab all the images. The first private function grabs all the images inside a given directory and the second one walks (potentially recursively) through all the folders in path.
Jump_to lesson 11 video
Step7: We need the recurse argument when we start from path since the pictures are two levels below in directories.
Step8: Imagenet is 100 times bigger than imagenette, so we need this to be fast.
Step9: Prepare for modeling
What we need to do
Step10: Transforms aren't only used for data augmentation. To allow total flexibility, ImageList returns the raw PIL image. The first thing is to convert it to 'RGB' (or something else).
Transforms only need to be functions that take an element of the ItemList and transform it. If they need state, they can be defined as a class. Also, having them as a class allows us to define an _order attribute (default 0) that is used to sort the transforms.
Step11: We can also index with a range or a list of integers
Step12: Split validation set
Here, we need to split the files between those in the folder train and those in the folder val.
Step13: Since our filenames are Path objects, we can find the directory of the file with .parent. We need to go up two folders, since the immediate parent folder is the class name.
Step14: Jump_to lesson 11 video
Step15: Now that we can split our data, let's create the class that will contain it. It just needs two ItemList to be initialized, and we create a shortcut to all the unknown attributes by trying to grab them in the train ItemList.
Step16: Labeling
Labeling has to be done after splitting, because it uses training set information to apply to the validation set, using a Processor.
A Processor is a transformation that is applied to all the inputs once at initialization, with some state computed on the training set that is then applied without modification on the validation set (and maybe the test set or at inference time on a single item). For instance, it could be processing texts to tokenize, then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.
Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the Processor and applied on the validation set.
In our case, we want to convert label strings to numbers in a consistent and reproducible way. So we create a list of possible labels in the training set, and then convert our labels to numbers based on this vocab.
Jump_to lesson 11 video
Step17: First, let's define the processor. We also define a ProcessedItemList with an obj method that can get the unprocessed items
Step18: Here we label according to the folders of the images, so simply fn.parent.name. We label the training set first with a newly created CategoryProcessor so that it computes its inner vocab on that set. Then we label the validation set using the same processor, which means it uses the same vocab. The end result is another SplitData object.
Step19: Transform to tensor
Jump_to lesson 11 video
Step20: To be able to put all our images in a batch, we need them to have all the same size. We can do this easily in PIL.
Step21: The first transform resizes to a given size, then we convert the image to a byte tensor before converting it to float and dividing by 255. We will investigate data augmentation transforms at length in notebook 10.
Step22: Here is a little convenience function to show an image from the corresponding tensor.
Step23: Modeling
DataBunch
Now we are ready to put our datasets together in a DataBunch.
Jump_to lesson 11 video
Step24: We can still see the images in a batch and get the corresponding classes.
Step25: We change a little bit our DataBunch to add a few attributes
Step26: Then we define a function that goes directly from the SplitData to a DataBunch.
Step27: This gives us the full summary on how to grab our data and put it in a DataBunch
Step28: Model
Jump_to lesson 11 video
Step29: We will normalize with the statistics from a batch.
Step30: We build our model using Bag of Tricks for Image Classification with Convolutional Neural Networks, in particular: we don't use a big conv 7x7 at first but three 3x3 convs, and don't go directly from 3 channels to 64 but progressively add those.
Step31: Let's have a look at our model using Hooks. We print the layers and the shapes of their outputs.
Step32: And we can train the model
Step33: The leaderboard as this notebook is written has ~85% accuracy for 5 epochs at 128px size, so we're definitely on the right track!
Export | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_07a import *
Explanation: Data block API foundations
End of explanation
datasets.URLs.IMAGENETTE_160
Explanation: Jump_to lesson 11 video
End of explanation
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
path
Explanation: Image ItemList
Previously we were reading in to RAM the whole MNIST dataset at once, loading it as a pickle file. We can't do that for datasets larger than our RAM capacity, so instead we leave the images on disk and just grab the ones we need for each mini-batch as we use them.
Let's use the imagenette dataset and build the data blocks we need along the way.
Get images
End of explanation
#export
import PIL,os,mimetypes
Path.ls = lambda x: list(x.iterdir())
path.ls()
(path/'val').ls()
Explanation: To be able to look at what's inside a directory from a notebook, we add the .ls method to Path with a monkey-patch.
End of explanation
path_tench = path/'val'/'n01440764'
img_fn = path_tench.ls()[0]
img_fn
img = PIL.Image.open(img_fn)
img
plt.imshow(img)
import numpy
imga = numpy.array(img)
imga.shape
imga[:10,:10,0]
Explanation: Let's have a look inside a class folder (the first class is tench):
End of explanation
#export
image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))
' '.join(image_extensions)
#export
def setify(o): return o if isinstance(o,set) else set(listify(o))
test_eq(setify('aa'), {'aa'})
test_eq(setify(['aa',1]), {'aa',1})
test_eq(setify(None), set())
test_eq(setify(1), {1})
test_eq(setify({1}), {1})
Explanation: Just in case there are other files in the directory (models, texts...) we want to keep only the images. Let's not write it out by hand, but instead use what's already on our computer (the MIME types database).
End of explanation
#export
def _get_files(p, fs, extensions=None):
p = Path(p)
res = [p/f for f in fs if not f.startswith('.')
and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions)]
return res
t = [o.name for o in os.scandir(path_tench)]
t = _get_files(path, t, extensions=image_extensions)
t[:3]
#export
def get_files(path, extensions=None, recurse=False, include=None):
path = Path(path)
extensions = setify(extensions)
extensions = {e.lower() for e in extensions}
if recurse:
res = []
for i,(p,d,f) in enumerate(os.walk(path)): # returns (dirpath, dirnames, filenames)
if include is not None and i==0: d[:] = [o for o in d if o in include]
else: d[:] = [o for o in d if not o.startswith('.')]
res += _get_files(p, f, extensions)
return res
else:
f = [o.name for o in os.scandir(path) if o.is_file()]
return _get_files(path, f, extensions)
get_files(path_tench, image_extensions)[:3]
Explanation: Now let's walk through the directories and grab all the images. The first private function grabs all the images inside a given directory and the second one walks (potentially recursively) through all the folders in path.
Jump_to lesson 11 video
End of explanation
get_files(path, image_extensions, recurse=True)[:3]
all_fns = get_files(path, image_extensions, recurse=True)
len(all_fns)
Explanation: We need the recurse argument when we start from path since the pictures are two levels below in directories.
End of explanation
%timeit -n 10 get_files(path, image_extensions, recurse=True)
Explanation: Imagenet is 100 times bigger than imagenette, so we need this to be fast.
End of explanation
#export
def compose(x, funcs, *args, order_key='_order', **kwargs):
key = lambda o: getattr(o, order_key, 0)
for f in sorted(listify(funcs), key=key): x = f(x, **kwargs)
return x
class ItemList(ListContainer):
def __init__(self, items, path='.', tfms=None):
super().__init__(items)
self.path,self.tfms = Path(path),tfms
def __repr__(self): return f'{super().__repr__()}\nPath: {self.path}'
def new(self, items, cls=None):
if cls is None: cls=self.__class__
return cls(items, self.path, tfms=self.tfms)
def get(self, i): return i
def _get(self, i): return compose(self.get(i), self.tfms)
def __getitem__(self, idx):
res = super().__getitem__(idx)
if isinstance(res,list): return [self._get(o) for o in res]
return self._get(res)
class ImageList(ItemList):
@classmethod
def from_files(cls, path, extensions=None, recurse=True, include=None, **kwargs):
if extensions is None: extensions = image_extensions
return cls(get_files(path, extensions, recurse=recurse, include=include), path, **kwargs)
def get(self, fn): return PIL.Image.open(fn)
Explanation: Prepare for modeling
What we need to do:
Get files
Split validation set
random%, folder name, csv, ...
Label:
folder name, file name/re, csv, ...
Transform per image (optional)
Transform to tensor
DataLoader
Transform per batch (optional)
DataBunch
Add test set (optional)
Jump_to lesson 11 video
Get files
We use the ListContainer class from notebook 06 to store our objects in an ItemList. The get method will need to be subclassed to explain how to access an element (open an image for instance), then the private _get method can allow us to apply any additional transform to it.
new will be used in conjunction with __getitem__ (that works for one index or a list of indices) to create training and validation set from a single stream when we split the data.
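A tiny usage sketch (assuming the ListContainer behaviour from notebook 06): get returns the raw item, _get runs it through the transforms, and indexing works for a single index or a list of indices.
# Plain integers instead of images, just to show the mechanics
small = ItemList([1, 2, 3], tfms=[lambda o: o * 10])
small[0]       # -> 10
small[[0, 2]]  # -> [10, 30]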
End of explanation
#export
class Transform(): _order=0
class MakeRGB(Transform):
def __call__(self, item): return item.convert('RGB')
def make_rgb(item): return item.convert('RGB')
il = ImageList.from_files(path, tfms=make_rgb)
il
img = il[0]; img
Explanation: Transforms aren't only used for data augmentation. To allow total flexibility, ImageList returns the raw PIL image. The first thing is to convert it to 'RGB' (or something else).
Transforms only need to be functions that take an element of the ItemList and transform it. If they need state, they can be defined as a class. Also, having them as a class allows us to define an _order attribute (default 0) that is used to sort the transforms.
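For instance (hypothetical transforms, not part of the library), compose sorts by _order, so the order of the list passed as tfms doesn't matter:
def add_one(x): return x + 1
add_one._order = 1
def double(x): return x * 2
double._order = 2
compose(3, [double, add_one])  # add_one runs first: (3 + 1) * 2 = 8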
End of explanation
il[:1]
Explanation: We can also index with a range or a list of integers:
End of explanation
fn = il.items[0]; fn
Explanation: Split validation set
Here, we need to split the files between those in the folder train and those in the folder val.
End of explanation
fn.parent.parent.name
Explanation: Since our filenames are Path objects, we can find the directory of the file with .parent. We need to go up two folders, since the immediate parent folder is the class name.
End of explanation
#export
def grandparent_splitter(fn, valid_name='valid', train_name='train'):
gp = fn.parent.parent.name
return True if gp==valid_name else False if gp==train_name else None
def split_by_func(items, f):
mask = [f(o) for o in items]
# `None` values will be filtered out
f = [o for o,m in zip(items,mask) if m==False]
t = [o for o,m in zip(items,mask) if m==True ]
return f,t
splitter = partial(grandparent_splitter, valid_name='val')
%time train,valid = split_by_func(il, splitter)
len(train),len(valid)
Explanation: Jump_to lesson 11 video
End of explanation
#export
class SplitData():
def __init__(self, train, valid): self.train,self.valid = train,valid
def __getattr__(self,k): return getattr(self.train,k)
#This is needed if we want to pickle SplitData and be able to load it back without recursion errors
def __setstate__(self,data:Any): self.__dict__.update(data)
@classmethod
def split_by_func(cls, il, f):
lists = map(il.new, split_by_func(il.items, f))
return cls(*lists)
def __repr__(self): return f'{self.__class__.__name__}\nTrain: {self.train}\nValid: {self.valid}\n'
sd = SplitData.split_by_func(il, splitter); sd
Explanation: Now that we can split our data, let's create the class that will contain it. It just needs two ItemList to be initialized, and we create a shortcut to all the unknown attributes by trying to grab them in the train ItemList.
End of explanation
#export
from collections import OrderedDict
def uniqueify(x, sort=False):
res = list(OrderedDict.fromkeys(x).keys())
if sort: res.sort()
return res
Explanation: Labeling
Labeling has to be done after splitting, because it uses training set information to apply to the validation set, using a Processor.
A Processor is a transformation that is applied to all the inputs once at initialization, with some state computed on the training set that is then applied without modification on the validation set (and maybe the test set or at inference time on a single item). For instance, it could be processing texts to tokenize, then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.
Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the Processor and applied on the validation set.
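A hypothetical sketch of that tabular case, written against the same minimal interface as the Processor defined below: the median is computed once on the training items and then reused unchanged on the validation items.
class MedianFillProcessor():
    def __init__(self): self.median = None
    def __call__(self, items):
        if self.median is None:
            # State is computed on the first (training) call only
            import statistics
            self.median = statistics.median(o for o in items if o is not None)
        return [self.median if o is None else o for o in items]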
In our case, we want to convert label strings to numbers in a consistent and reproducible way. So we create a list of possible labels in the training set, and then convert our labels to numbers based on this vocab.
Jump_to lesson 11 video
End of explanation
#export
class Processor():
def process(self, items): return items
class CategoryProcessor(Processor):
def __init__(self): self.vocab=None
def __call__(self, items):
#The vocab is defined on the first use.
if self.vocab is None:
self.vocab = uniqueify(items)
self.otoi = {v:k for k,v in enumerate(self.vocab)}
return [self.proc1(o) for o in items]
def proc1(self, item): return self.otoi[item]
def deprocess(self, idxs):
assert self.vocab is not None
return [self.deproc1(idx) for idx in idxs]
def deproc1(self, idx): return self.vocab[idx]
Explanation: First, let's define the processor. We also define a ProcessedItemList with an obj method that can get the unprocessed items: for instance a processed label will be an index between 0 and the number of classes - 1, the corresponding obj will be the name of the class. The first one is needed by the model for the training, but the second one is better for displaying the objects.
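A minimal usage sketch with made-up labels: the first call builds the vocab, later calls reuse it, and deprocess maps indices back to the class names.
cp = CategoryProcessor()
cp(['tench', 'golf ball', 'tench'])  # -> [0, 1, 0]; cp.vocab == ['tench', 'golf ball']
cp(['golf ball'])                    # -> [1], same vocab as the first call
cp.deprocess([1, 0])                 # -> ['golf ball', 'tench']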
End of explanation
#export
def parent_labeler(fn): return fn.parent.name
def _label_by_func(ds, f, cls=ItemList): return cls([f(o) for o in ds.items], path=ds.path)
#This is a slightly different from what was seen during the lesson,
# we'll discuss the changes in lesson 11
class LabeledData():
def process(self, il, proc): return il.new(compose(il.items, proc))
def __init__(self, x, y, proc_x=None, proc_y=None):
self.x,self.y = self.process(x, proc_x),self.process(y, proc_y)
self.proc_x,self.proc_y = proc_x,proc_y
def __repr__(self): return f'{self.__class__.__name__}\nx: {self.x}\ny: {self.y}\n'
def __getitem__(self,idx): return self.x[idx],self.y[idx]
def __len__(self): return len(self.x)
def x_obj(self, idx): return self.obj(self.x, idx, self.proc_x)
def y_obj(self, idx): return self.obj(self.y, idx, self.proc_y)
def obj(self, items, idx, procs):
isint = isinstance(idx, int) or (isinstance(idx,torch.LongTensor) and not idx.ndim)
item = items[idx]
for proc in reversed(listify(procs)):
item = proc.deproc1(item) if isint else proc.deprocess(item)
return item
@classmethod
def label_by_func(cls, il, f, proc_x=None, proc_y=None):
return cls(il, _label_by_func(il, f), proc_x=proc_x, proc_y=proc_y)
def label_by_func(sd, f, proc_x=None, proc_y=None):
train = LabeledData.label_by_func(sd.train, f, proc_x=proc_x, proc_y=proc_y)
valid = LabeledData.label_by_func(sd.valid, f, proc_x=proc_x, proc_y=proc_y)
return SplitData(train,valid)
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
assert ll.train.proc_y is ll.valid.proc_y
ll.train.y
ll.train.y.items[0], ll.train.y_obj(0), ll.train.y_obj(slice(2))
ll
Explanation: Here we label according to the folders of the images, so simply fn.parent.name. We label the training set first with a newly created CategoryProcessor so that it computes its inner vocab on that set. Then we label the validation set using the same processor, which means it uses the same vocab. The end result is another SplitData object.
End of explanation
ll.train[0]
ll.train[0][0]
Explanation: Transform to tensor
Jump_to lesson 11 video
End of explanation
ll.train[0][0].resize((128,128))
Explanation: To be able to put all our images in a batch, we need them to have all the same size. We can do this easily in PIL.
End of explanation
#export
class ResizeFixed(Transform):
_order=10
def __init__(self,size):
if isinstance(size,int): size=(size,size)
self.size = size
def __call__(self, item): return item.resize(self.size, PIL.Image.BILINEAR)
def to_byte_tensor(item):
res = torch.ByteTensor(torch.ByteStorage.from_buffer(item.tobytes()))
w,h = item.size
return res.view(h,w,-1).permute(2,0,1)
to_byte_tensor._order=20
def to_float_tensor(item): return item.float().div_(255.)
to_float_tensor._order=30
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, splitter)
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
Explanation: The first transform resizes to a given size, then we convert the image to a byte tensor before converting it to float and dividing by 255. We will investigate data augmentation transforms at length in notebook 10.
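As a quick check (reusing img_fn from above), the same pipeline can be applied to a single file by hand with compose, which is exactly what ItemList._get does internally:
x = compose(PIL.Image.open(img_fn), tfms)
x.shape, x.dtype  # -> (torch.Size([3, 128, 128]), torch.float32)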
End of explanation
#export
def show_image(im, figsize=(3,3)):
plt.figure(figsize=figsize)
plt.axis('off')
plt.imshow(im.permute(1,2,0))
x,y = ll.train[0]
x.shape
show_image(x)
Explanation: Here is a little convenience function to show an image from the corresponding tensor.
End of explanation
bs=64
train_dl,valid_dl = get_dls(ll.train,ll.valid,bs, num_workers=4)
x,y = next(iter(train_dl))
x.shape
Explanation: Modeling
DataBunch
Now we are ready to put our datasets together in a DataBunch.
Jump_to lesson 11 video
End of explanation
show_image(x[0])
ll.train.proc_y.vocab[y[0]]
y
Explanation: We can still see the images in a batch and get the corresponding classes.
End of explanation
#export
class DataBunch():
def __init__(self, train_dl, valid_dl, c_in=None, c_out=None):
self.train_dl,self.valid_dl,self.c_in,self.c_out = train_dl,valid_dl,c_in,c_out
@property
def train_ds(self): return self.train_dl.dataset
@property
def valid_ds(self): return self.valid_dl.dataset
Explanation: We change a little bit our DataBunch to add a few attributes: c_in (for channel in) and c_out (for channel out) instead of just c. This will help when we need to build our model.
End of explanation
#export
def databunchify(sd, bs, c_in=None, c_out=None, **kwargs):
dls = get_dls(sd.train, sd.valid, bs, **kwargs)
return DataBunch(*dls, c_in=c_in, c_out=c_out)
SplitData.to_databunch = databunchify
Explanation: Then we define a function that goes directly from the SplitData to a DataBunch.
End of explanation
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
Explanation: This gives us the full summary on how to grab our data and put it in a DataBunch:
End of explanation
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback]
Explanation: Model
Jump_to lesson 11 video
End of explanation
m,s = x.mean((0,2,3)).cuda(),x.std((0,2,3)).cuda()
m,s
#export
def normalize_chan(x, mean, std):
return (x-mean[...,None,None]) / std[...,None,None]
_m = tensor([0.47, 0.48, 0.45])
_s = tensor([0.29, 0.28, 0.30])
norm_imagenette = partial(normalize_chan, mean=_m.cuda(), std=_s.cuda())
cbfs.append(partial(BatchTransformXCallback, norm_imagenette))
nfs = [64,64,128,256]
Explanation: We will normalize with the statistics from a batch.
End of explanation
#export
import math
def prev_pow_2(x): return 2**math.floor(math.log2(x))
def get_cnn_layers(data, nfs, layer, **kwargs):
def f(ni, nf, stride=2): return layer(ni, nf, 3, stride=stride, **kwargs)
l1 = data.c_in
l2 = prev_pow_2(l1*3*3)
layers = [f(l1 , l2 , stride=1),
f(l2 , l2*2, stride=2),
f(l2*2, l2*4, stride=2)]
nfs = [l2*4] + nfs
layers += [f(nfs[i], nfs[i+1]) for i in range(len(nfs)-1)]
layers += [nn.AdaptiveAvgPool2d(1), Lambda(flatten),
nn.Linear(nfs[-1], data.c_out)]
return layers
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
sched = combine_scheds([0.3,0.7], cos_1cycle_anneal(0.1,0.3,0.05))
learn,run = get_learn_run(nfs, data, 0.2, conv_layer, cbs=cbfs+[
partial(ParamScheduler, 'lr', sched)
])
Explanation: We build our model using Bag of Tricks for Image Classification with Convolutional Neural Networks, in particular: we don't use a big conv 7x7 at first but three 3x3 convs, and don't go directly from 3 channels to 64 but progressively add those.
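A worked example of the stem sizing above: with c_in=3, the first width is prev_pow_2(3*3*3) = 16, so the three stem convs go 3 -> 16 -> 32 -> 64 channels before the nfs blocks take over.
l2 = prev_pow_2(3 * 3 * 3)     # -> 16
[3, l2, l2 * 2, l2 * 4]        # stem channels: [3, 16, 32, 64]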
End of explanation
#export
def model_summary(run, learn, data, find_all=False):
xb,yb = get_batch(data.valid_dl, run)
device = next(learn.model.parameters()).device#Model may not be on the GPU yet
xb,yb = xb.to(device),yb.to(device)
mods = find_modules(learn.model, is_lin_layer) if find_all else learn.model.children()
f = lambda hook,mod,inp,out: print(f"{mod}\n{out.shape}\n")
with Hooks(mods, f) as hooks: learn.model(xb)
model_summary(run, learn, data)
Explanation: Let's have a look at our model using Hooks. We print the layers and the shapes of their outputs.
End of explanation
%time run.fit(5, learn)
Explanation: And we can train the model:
End of explanation
!python notebook2script.py 08_data_block.ipynb
Explanation: The leaderboard as this notebook is written has ~85% accuracy for 5 epochs at 128px size, so we're definitely on the right track!
Export
End of explanation |
3,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PCA Analysis
Step1: Feature Selection
Step2: Logistic Regression
Step3: Naive Bayes
Step4: KNN
Step5: Random Forest
Step6: Decision Tree
Step7: SVC
Step8: Gradient Boosting
Step9: Conclusion
Based on the Breast Cancer Diagnostic dataset with 569 entries, a model has been built to predict, from the most relevant features, whether a tumor is benign or malignant. Classifiers have been used to build the models, with hyperparameters tuned on the training set (70% of the data) and tested on the remaining 30%. As a result, all the models tested reach an accuracy between 89% in the worst case and 95% in the best case. The models used are logistic regression, KNN, SVC, random forest, naïve Bayes (Bernoulli), gradient boosting and decision tree. All models and their hyperparameters have been evaluated using cross-validation with five folds.
The first step has been to create and select the features that will be the predictors of the model and to build the output variable as a [0,1] variable; for the latter, the dataset has been resampled to balance the number of outputs in each class. For feature selection, random forest feature importance, SelectKBest and recursive feature elimination have been used. Additionally, the result has been compared with the number of features that a PCA analysis gives as meaningful for the model. The features selected by random forest have been narrowed down to the ones produced by recursive feature elimination, using the number of components indicated by the PCA analysis (3) as the maximum number of features.
The selected features are 'radius_worst', 'concave points_worst' and 'perimeter_worst'. | Python Code:
# Build up the correlation matrix
Z = X1
correlation_matrix = Z.corr()
#Eigenvectores & Eigenvalues
eig_vals, eig_vecs = np.linalg.eig(correlation_matrix)
sklearn_pca = PCA(n_components=len(Z.columns))
Y_sklearn = sklearn_pca.fit_transform(correlation_matrix)
#From the Scree plot.
plt.plot(eig_vals)
plt.show()
print(
'The percentage of total variance in the dataset explained by each',
'component from Sklearn PCA.\n',
sklearn_pca.explained_variance_ratio_
)
#PCA features
# Create a scaler object
sc = StandardScaler()
# Fit the scaler to the features and transform
X_std = sc.fit_transform(X1)
# Create a PCA object from Scree plot the number of components is 3
pca = decomposition.PCA(n_components=3)
# Fit the PCA and transform the data
X_std_pca = pca.fit_transform(X_std)
# View the new feature data's shape
X_std_pca.shape
# Create a new dataframe with the new features
XPCA = pd.DataFrame(X_std_pca)
XPCA.head()
#Calculate Feature Importance using Random Forest
rf = RandomForestClassifier()
rf.fit(X1, Y)
#Define feature importance
feature_importance = rf.feature_importances_
# Make importances relative to max importance.
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.figure(figsize=(7, 30))
plt.subplot(1, 1, 1)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, X1.columns[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Diagclass')
plt.show()
#Feature Selection. Scores for the most relevant features (should we start with the one that has more explanatory power)
# feature extraction
test = SelectKBest()
fit = test.fit(X1, Y)
#Identify features with highest score from a predictive perspective (for all programs)
names2 = X1.columns
Bestfeatures = pd.DataFrame(fit.scores_, index = names2)
Bestfeatures.columns = ['Best Features']
Bestfeatures.sort_values(by=['Best Features'], ascending=False)
# create the RFE model and select features
#From PCA analyis the number of components is 3
nfeatures = 3
lr = LogisticRegression()
rfe = RFE(lr,nfeatures)
fit = rfe.fit(X1,Y)
# summarize the selection of the features
result_RFE = pd.DataFrame(list(zip(X1.head(0), rfe.ranking_, rfe.support_)),columns=['Features','Ranking','Support'] )
result_RFE.sort_values('Ranking')
Explanation: PCA Analysis
End of explanation
#View all the predictors to make the feature selection
X1.columns
#Feature Selection using Random Forest
X3 = X1[['perimeter_worst', 'area_worst', 'concave points_mean', 'concavity_mean','radius_worst','perimeter_mean',
'concavity_worst', 'compactness_mean','concave points_worst','compactness_worst']]
#Feature Selection using RFE & PCA
X2 = X1[['radius_worst','concave points_worst','perimeter_worst']]
#Split the data into training and testing datasets. Split: 70/30; train/test
X_train, X_test, y_train, y_test = train_test_split(X2,Y, test_size=0.3, random_state=123)
#Initiating the cross validation generator, N splits = 5
kf = KFold(5)
Explanation: Feature Selection
End of explanation
# Initialize and fit the model.
lr = LogisticRegression()
#Tune parameters
k1 = np.arange(20)+1
k2 = ['l1','l2']
parameters = {'C': k1,
'penalty':k2
}
#Fit parameters
lr1 = GridSearchCV(lr, param_grid=parameters, cv=kf)
#Fit the tunned classifier in the traiing space
lr1.fit(X_train, y_train)
#Print the best parameters
print(lr1.best_params_)
#Have a raw idea of the accuracy of each of the feeatures selection carried out with different methodologies
lr1.fit(XPCA, Y)
# Predict on test set
predPCA_y = lr1.predict(XPCA)
print((
'PCA accuracy: {}\n'
'RFE accuracy: {}\n'
'FI accuracy: {}\n'
).format(cross_val_score(lr1,XPCA,Y,cv=kf).mean(),cross_val_score(lr,X2,Y,cv=kf).mean(),cross_val_score(lr,X3,Y,cv=kf).mean()))
#Fit on Test set
lr1.fit(X_test, y_test)
predtest_y = lr1.predict(X_test)
#Evaluate model (test set)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
acclr1 = cross_val_score(lr1,X_test,y_test,cv=kf).mean()
acclr1pca = cross_val_score(lr1,XPCA,Y,cv=kf).mean()
print((
'Logistic Regression accuracy: {}\n'
'Logistic Regression accuracy PCA: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(acclr1,acclr1pca,test_tI_errors, test_tII_errors))
Explanation: Logistic Regression
End of explanation
# Initialize and fit the model.
lb = BernoulliNB()
#Tune parameters
k1 = np.arange(10)+1
parameters = {'alpha': k1}
#Fit parameters
lb1 = GridSearchCV(lb, param_grid=parameters, cv=kf)
#Fit the tunned classifier in the traiing space
lb1.fit(X_train, y_train)
#Print the best parameters
print(lb1.best_params_)
# Predict on the test data set
lb1.fit(X_test, y_test)
# Predict on training set
predtestlb_y = lb1.predict(X_test)
#Evaluation of the model (testing)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtestlb_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtestlb_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtestlb_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
acclb1 = cross_val_score(lb1,X_test,y_test,cv=kf).mean()
acclb1pca = cross_val_score(lb1,XPCA, Y,cv=kf).mean()
print((
'Naive Bayes accuracy: {}\n'
'Naive Bayes accuracy PCA: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(acclb1,acclb1pca,test_tI_errors, test_tII_errors))
Explanation: Naive Bayes
End of explanation
# Initialize and fit the model
KNN = KNeighborsClassifier(n_jobs=-1)
#Create range of values to fit parameters
k1 = [11,13,15,17,19,21]
k2 = [40,50,60]
k3 = ['uniform', 'distance']
k4 = ['auto', 'ball_tree','kd_tree','brute']
parameters = {'n_neighbors': k1,
'leaf_size': k2,
'weights':k3,
'algorithm':k4}
#Fit parameters
clf = GridSearchCV(KNN, param_grid=parameters, cv=kf)
#Fit the tunned model
clf.fit(X_train, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", clf.best_params_)
#Initialize the model on test dataset
clf.fit(X_test, y_test)
# Predict on test dataset
predtest3_y = clf.predict(X_test)
#Evaluate model on the test set
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest3_y, target_names=target_names))
#Create confusion matrix
confusion = confusion_matrix(y_test, predtest3_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest3_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
accclf = cross_val_score(clf,X_test,y_test,cv=kf).mean()
accclfpca = cross_val_score(clf,XPCA,Y,cv=kf).mean()
#Print Results
print((
'KNN accuracy: {}\n'
'KNN accuracy PCA: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(accclf,accclfpca,test_tI_errors, test_tII_errors))
Explanation: KNN
End of explanation
# Initialize the model
rf = RandomForestClassifier(n_jobs = -1)
#Create range of values to fit parameters
k1 = [20,100,300]
parameters = {'n_estimators':k1}
#Fit parameters
rf1 = GridSearchCV(rf, param_grid=parameters, cv=kf)
#Fit the tunned model
rf1.fit(X_train, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", rf1.best_params_)
#Fit in test dataset
rf1.fit(X_test, y_test)
#Predict on test dataset
predtestrf_y = rf1.predict(X_test)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
accrf1 = cross_val_score(rf1,X_test,y_test,cv=kf).mean()
accrf1pca = cross_val_score(rf1,XPCA,Y,cv=kf).mean()
print((
'Random Forest accuracy:{}\n'
'Random Forest accuracy PCA:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(accrf1,accrf1pca,test_tI_errors, test_tII_errors))
Explanation: Random Forest
End of explanation
# Train model
OTM = DecisionTreeClassifier()
#Create range of values to fit parameters
k1 = ['auto', 'sqrt', 'log2']
parameters = {'max_features': k1
}
#Fit parameters
OTM1 = GridSearchCV(OTM, param_grid=parameters, cv=kf)
#Fit the tunned model
OTM1.fit(X_train, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", OTM1.best_params_)
#Fit on test dataset
OTM1.fit(X_test, y_test)
#Predict parameters on test dataset
predtestrf1_y = OTM1.predict(X_test)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf1_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf1_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf1_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
OTM1acc = cross_val_score(OTM1,X_test,y_test,cv=kf).mean()
OTM1accpca = cross_val_score(OTM1,XPCA,Y,cv=kf).mean()
print((
'Decision Tree accuracy:{}\n'
'Decision Tree accuracy PCA:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(OTM1acc,OTM1accpca, test_tI_errors, test_tII_errors))
Explanation: Decision Tree
End of explanation
# Train model
svc = SVC()
#Create range of values to fit parameters
k1 = np.arange(20)+1
k2 = ['linear','rbf']
parameters = {'C': k1,
'kernel': k2}
#Fit parameters
svc1 = GridSearchCV(svc, param_grid=parameters, cv=kf)
#Fit the tunned model
svc1.fit(X_train, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", svc1.best_params_)
#Fit tunned model on Test set
svc1.fit(X_test, y_test)
# Predict on training set
predtestsvc_y = svc1.predict(X_test)
#Test Scores
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtestsvc_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestsvc_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestsvc_y, margins=True)
accsvc1 = cross_val_score(svc1,X_test,y_test,cv=kf).mean()
accsvc1pca = cross_val_score(svc1,XPCA,Y,cv=kf).mean()
print((
'SVC accuracy:{}\n'
'SVC accuracy PCA:{}\n'
).format(accsvc1,accsvc1pca))
Explanation: SVC
End of explanation
# Train model
GBC = GradientBoostingClassifier()
k1 = ['deviance','exponential']
k2 = np.arange(100)+1
k5 = ['friedman_mse','mse','mae']
parameters = {'loss': k1,
'n_estimators': k2,
'criterion': k5}
#Fit parameters
GBC1 = GridSearchCV(GBC, param_grid=parameters, cv=kf)
#Fit the tunned model
GBC1.fit(X_train, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", GBC1.best_params_)
#Fit on the test set
GBC1.fit(X_test, y_test)
# Predict on test set
predtestgb_y = GBC1.predict(X_test)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestgb_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestgb_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestgb_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
accGBC1 = cross_val_score(GBC1,X_test,y_test,cv=kf).mean()
accGBC1pca = cross_val_score(GBC1,XPCA,Y,cv=kf).mean()
print((
'Gradient Boosting accuracy:{}\n'
'Gradient Boosting accuracy PCA:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(accGBC1,accGBC1pca,test_tI_errors, test_tII_errors))
Explanation: Gradient Boosting
End of explanation
#Summary of accuracy of different models:
print(('Accuracy of each model: \n'
'Logistic Regression:{:.{prec}f} \n'
'KNN: {:.{prec}f} \n'
'SVC: {:.{prec}f} \n'
'Random Forest: {:.{prec}f} \n'
'Naive Bayes:{:.{prec}f} \n'
'Gradient Boosting: {:.{prec}f} \n'
'Decision Tree: {:.{prec}f} \n'
).format(acclr1,accclf,accsvc1,accrf1,acclb1,accGBC1,OTM1acc,prec=4))
Explanation: Conclusion
Based on the Breast Cancer Diagnostic dataset with 569 entries, a model has been built to predict, from the most relevant features, whether a tumor is benign or malignant. Classifiers have been used to build the models, with hyperparameters tuned on the training set (70% of the data) and tested on the remaining 30%. As a result, all the models tested reach an accuracy between 89% in the worst case and 95% in the best case. The models used are logistic regression, KNN, SVC, random forest, naïve Bayes (Bernoulli), gradient boosting and decision tree. All models and their hyperparameters have been evaluated using cross-validation with five folds.
The first step has been to create and select the features that will be the predictors of the model and to build the output variable as a [0,1] variable; for the latter, the dataset has been resampled to balance the number of outputs in each class. For feature selection, random forest feature importance, SelectKBest and recursive feature elimination have been used. Additionally, the result has been compared with the number of features that a PCA analysis gives as meaningful for the model. The features selected by random forest have been narrowed down to the ones produced by recursive feature elimination, using the number of components indicated by the PCA analysis (3) as the maximum number of features.
The selected features are: 'radius_worst','concave points_worst','perimeter_worst'.
End of explanation |
3,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 01
Import
Step1: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
Step2: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider the interval [-8, 8] with step sizes of 2.
Step3: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
Step4: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 01
Import
End of explanation
def print_sum(a, b):
print(a + b)
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
# YOUR CODE HERE
interact(print_sum, a=(-10.0, 10.0, 0.1), b=(-8, 8, 2));
assert True # leave this for grading the print_sum exercise
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider the interval [-8, 8] with step sizes of 2.
End of explanation
def print_string(s, length=False):
print(s)
if length == True:
print(len(s))
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
# YOUR CODE HERE
interact(print_string, s="Hello World!", length=True);
assert True # leave this for grading the print_string exercise
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
End of explanation |
3,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyBroMo 5. Two-state dynamics - Dynamic smFRET simulation
<small><i>
This notebook is part of <a href="http://tritemio.github.io/PyBroMo" target="_blank">PyBroMo</a>, a python-based single-molecule Brownian motion diffusion simulator that simulates confocal smFRET experiments.
Step1: Timestamps, detectors and particles for the two states
Step2: Simulation
Mean residence times for the two states a and b
Step3: Exponential distributions of residence times for the two states
Step6: Define functions
Step7: Simulate dynamics
Step8: Add background
Background is the same Poisson process in the two static files
and it is saved as a virtual particle (index = 35, the "36th" virtual particle).
Let's take the background from file a for simplicity.
Step9: Merge arrays from individual particles
Step10: Save data to Photon-HDF5
Step12: Create description string
Step13: Save file | Python Code:
from pathlib import Path
from textwrap import dedent, indent
import numpy as np
import tables
from scipy.stats import expon
import phconvert as phc
print('phconvert version:', phc.__version__)
SIM_PATH = 'data/'
filelist = list(Path(SIM_PATH).glob('smFRET_*_600s.hdf5'))
[f.name for f in filelist]
filename_a = str([f for f in filelist if '11_E_40_Em' in f.name][0])
filename_b = str([f for f in filelist if '11_E_75_Em' in f.name][0])
filename_a, filename_b
da = tables.open_file(filename_a)
db = tables.open_file(filename_b)
da.filename
db.filename
print(da.root.description.read().decode())
print(db.root.description.read().decode())
# Make sure the two files are using the same trajectories
assert da.root.description.read().decode().split('\n')[5] == db.root.description.read().decode().split('\n')[5]
Explanation: PyBroMo 5. Two-state dynamics - Dynamic smFRET simulation
<small><i>
This notebook is part of <a href="http://tritemio.github.io/PyBroMo" target="_blank">PyBroMo</a> a
python-based single-molecule Brownian motion diffusion simulator
that simulates confocal smFRET
experiments.
</i></small>
Overview
In this notebook we simulate a freely-diffusing smFRET experiment with dynamics between two states.
The inputs are two smFRET files, one for each static state.
These input files need to be simulations of the same particle trajectories
but with different E*.
Load static FRET data
End of explanation
# Timestamps
times_a = da.root.photon_data.timestamps.read()
times_b = db.root.photon_data.timestamps.read()
# Detectors
det_a = da.root.photon_data.detectors.read()
det_b = db.root.photon_data.detectors.read()
# Particle number for each timestamp
par_a = da.root.photon_data.particles.read()
par_b = db.root.photon_data.particles.read()
par_a
acquisition_duration = da.root.acquisition_duration.read()
assert acquisition_duration == db.root.acquisition_duration.read()
print('Acquisition duration: %d s' % acquisition_duration)
times_unit = da.root.photon_data.timestamps_specs.timestamps_unit.read()
times_unit
Explanation: Timestamps, detectors and particles for the two states
End of explanation
tau_a = 1e-3 / 5
tau_b = 0.5e-3 / 5
tau_s = [tau_a, tau_b]
Explanation: Simulation
Mean residence times for the two states a and b
End of explanation
expon_s = tuple(expon(scale=tau / times_unit) for tau in tau_s)
n = int(1.1 * acquisition_duration / (tau_a + tau_b)) # Number of transitions (upper limit)
Explanation: Exponential distributions of residence times for the two states
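A small illustrative check (not part of the simulation itself): samples drawn from each frozen distribution are in timestamp units, so multiplying back by times_unit should give mean residence times close to tau_a and tau_b.
[dist.rvs(1000).mean() * times_unit for dist in expon_s]  # ~ [tau_a, tau_b]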
End of explanation
def sim_two_states_single_particle(times_s, taus_s):
Simulate 2-state transitions for a single particle.
Arguments:
times_s (tuple or arrays): 2-tuple of timestamps arrays
for the two states a (times_s[0]) and b (times_s[1]).
taus_s (tuple or arrays): 2-tuple of residence times arrays
for the two states a (taus_s[0]) and b (taus_s[1]).
Returns:
List of index pairs. Each pair is a start/stop index for
for the timestamps of current state for a specific residence time.
The states are strictly alternating starting from 0 (i.e. a).
- first pair: (state = 0) start/stop index for array `times_s[0]` (where 0 = state)
corresponding to the residence time `taus_s[state][i_residence_time] = taus_s[0][0]`
- second pair: (state = 1) start/stop index for array `times_s[1]` (where 1 = state)
corresponding to the residence time `taus_s[state][i_residence_time] = taus_s[1][0]`
- third pair: (state = 0) start/stop index for array `times_s[0]` (where 0 = state)
corresponding to the residence time `taus_s[state][i_residence_time] = taus_s[0][1]`
and so on.
slices_list = []
index_s = [0, 0] # indexes for looping thorugh the timestamps arrays
index_start_s = [0, 0] # indexes of current state start in each timestamps array
index_tau_s = [0, 0] # index of current time window duration
t_start = 0 # time of current state start
state, otherstate = 0, 1
while ((index_s[0] < len(times_s[0]) - 1) and
(index_s[1] < len(times_s[1]) - 1)):
# Duration of current time window (i.e. duration of current state)
tau = taus_s[state][index_tau_s[state]]
# Find timestamps in current time window
# for both timestamps arrays
for state_i in (0, 1):
times = times_s[state_i]
delta_t = times[index_s[state_i]] - t_start
while delta_t < tau and index_s[state_i] < len(times) - 1:
index_s[state_i] += 1
delta_t = times[index_s[state_i]] - t_start
#print(state, index_s[state])
# Save the timestamps only for current state
slices_list.append((index_start_s[state], index_s[state]))
# Save index of first timestamp in next time window
index_start_s = index_s.copy()
# Discard current "used" tau
index_tau_s[state] += 1
# Switch to a new state
t_start += tau
state, otherstate = otherstate, state
return slices_list
def sim_two_states(num_particles, times_states, det_states, par_states, times_unit, expon_s, seed=1):
Simulate 2-state transitions for a set of particles.
Arguments:
num_particles (int): number of simulated particles.
times_states (tuple of arrays): 2-tuple of timestamps arrays, one for each state
det_states (tuple of arrays): 2-tuple of detectors arrays, one for each state
par_states (tuple of arrays): 2-tuple of particles arrays, one for each state
times_unit (float): timestamps unit in seconds.
expon_s (tuple of scipy.stats distributions): 2-tuple of exponential distributions
used to simulate residency times for each state. Each element is a frozen
`scipy.stats.expon` distribution with scale parameter set according to the
residency time for the corresponding state.
Returns:
Tuple of 2 lists:
- List of timestamps arrays, one for each particle, after 2-states dynamics simulation.
- List of detectors arrays, one for each particle, after 2-states dynamics simulation.
np.random.seed(seed)
times_p = []
det_p = []
for particle in range(num_particles):
print('\n- Simulating particle %d: ' % particle, end='', flush=True)
# Get timestamps and detectors for current particle in each state
print('timestamps..', end='', flush=True)
masks_states = [par == particle for par in par_states]
times_s = [memoryview(t_par[mask_par]) for t_par, mask_par in zip(times_states, masks_states)]
det_s = [memoryview(det_par[mask_par]) for det_par, mask_par in zip(det_states, masks_states)]
print('[done] ', end='', flush=True)
# Simulate residence times
print('residence..', end='', flush=True)
taus_s = [memoryview(exp_dist.rvs(n)) for exp_dist in expon_s]
sim_duration = np.sum(np.sum(taus) for taus in taus_s) * times_unit
assert sim_duration > acquisition_duration
print('[done] ', end='', flush=True)
# Compute start/stop indexes for the timestamps for each residence time
print('transition-index..', end='', flush=True)
slices_list = sim_two_states_single_particle(times_s, taus_s)
print('[done] ', end='', flush=True)
# Create new timestamps and detectors to store dynamics simulation results
print('merge..', end='', flush=True)
times_size = sum([s[1] - s[0] for s in slices_list])
times = np.zeros(times_size, dtype='int64')
det = np.zeros(times_size, dtype='uint8')
par = np.ones(times_size, dtype='uint8') * particle
times_m = memoryview(times)
det_m = memoryview(det)
# istart, istop are indexes of times_m/det_m while the
# start, stop indexes in slices_list refer to `times_s[state]`
# where state = 0 for odd elements and state = 1 for even elements.
# See `sim_two_states_single_particle()` for more info on `slice_list`.
istart = 0
state, otherstate = 0, 1
for start, stop in slices_list:
istop = istart + stop - start
times_m[istart:istop] = times_s[state][start:stop]
det_m[istart:istop] = det_s[state][start:stop]
istart = istop
state, otherstate = otherstate, state
print('[done]', flush=True)
assert (times != 0).all()
times_p.append(times)
det_p.append(det)
return times_p, det_p
Explanation: Define functions
End of explanation
seed = 987123654 # random number generator seed
times_p, det_p = sim_two_states(35, (times_a, times_b), (det_a, det_b), (par_a, par_b),
times_unit=times_unit, expon_s=expon_s, seed=seed)
assert all(all(np.diff(t) >= 0) for t in times_p)
assert len(times_p) == len(det_p) == 35
det_p[0][:10]
det_p[1][:10]
times_p[0][:10]
times_p[1][:10]
Explanation: Simulate dynamics
End of explanation
times_a[par_a == 35]
det_a[par_a == 35]
times_p.append(times_a[par_a == 35])
det_p.append(det_a[par_a == 35])
Explanation: Add background
Background is the same Poisson process in the two static files
and it is saved as a virtual particle (index = 35, the "36th" virtual particle).
Let's take the background from file a for simplicity.
End of explanation
times_dyn = np.hstack(times_p)
det_dyn = np.hstack(det_p)
argsort = times_dyn.argsort(kind='mergesort')
times_dyn = times_dyn[argsort]
det_dyn = det_dyn[argsort]
det_dyn
par_dyn = np.hstack([det_p_i.size * [idx] for idx, det_p_i in enumerate(det_p)])
assert par_dyn.shape[0] == sum(d.size for d in det_p)
par_dyn = par_dyn[argsort]
Explanation: Merge arrays from individual particles
End of explanation
def make_photon_hdf5(times, det, par, times_unit, description, identity=None):
photon_data = dict(
timestamps = times,
timestamps_specs = dict(timestamps_unit=times_unit),
detectors = det,
particles = par,
measurement_specs = dict(
measurement_type = 'smFRET',
detectors_specs = dict(spectral_ch1 = np.atleast_1d(0),
spectral_ch2 = np.atleast_1d(1))))
setup = dict(
num_pixels = 2,
num_spots = 1,
num_spectral_ch = 2,
num_polarization_ch = 1,
num_split_ch = 1,
modulated_excitation = False,
lifetime = False,
excitation_alternated=(False,),
excitation_cw=(True,))
provenance = dict(software='', software_version='', filename='')
if identity is None:
identity = dict()
description = description
acquisition_duration = np.round((times[-1] - times[0]) * times_unit)
data = dict(
acquisition_duration = round(acquisition_duration),
description = description,
photon_data = photon_data,
setup=setup,
provenance=provenance,
identity=identity)
return data
Explanation: Save data to Photon-HDF5
End of explanation
da.filename
db.filename
traj_descr = dedent('\n'.join(da.root.description.read().decode().split('\n')[4:7]))
print(traj_descr)
part_D_descr = indent(dedent('\n'.join(da.root.description.read().decode().split('\n')[9:11])), ' ')
print(part_D_descr)
state0_descr = indent(dedent('\n'.join(da.root.description.read().decode().split('\n')[11:13])), ' ')
print(state0_descr)
state1_descr = indent(dedent('\n'.join(db.root.description.read().decode().split('\n')[11:13])), ' ')
print(state1_descr)
bg_descr = dedent('\n'.join(da.root.description.read().decode().split('\n')[-4:]))
print(bg_descr)
tau_a_ms = tau_a * 1e3
tau_b_ms = tau_b * 1e3
tau_a_us = tau_a * 1e6
tau_b_us = tau_b * 1e6
filename_a_name = Path(filename_a).name
filename_b_name = Path(filename_b).name
description = f\
PyBroMo simulation of 2-states dynamics
----------------------------------------
{traj_descr}
{part_D_descr}
State 0:
Residency time: {tau_a_ms} ms
{state0_descr}
filename: {filename_a_name}
State 1:
Residency time: {tau_b_ms} ms
{state0_descr}
filename: {filename_b_name}
{bg_descr}
print(description)
Explanation: Create description string
End of explanation
identity=dict(author='Antonino Ingargiola',
author_affiliation='UCLA')
data = make_photon_hdf5(times_dyn, det_dyn, par_dyn, times_unit, description, identity=identity)
data
h5_fname = f'smFRET_0eb9b3_P_35_s0_D_2.5e-11_dynamics_E_40-75_Tau_{tau_a_us:.0f}-{tau_b_us:.0f}us_EmTot_226k-200k_BgD900_BgA600_t_max_600s.hdf5'
h5_fname
phc.hdf5.save_photon_hdf5(data, h5_fname=h5_fname, overwrite=True)
Explanation: Save file
End of explanation |
3,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img align="left" src="imgs/logo.jpg" width="50px" style="margin-right:10px">
Step1: We repeat our definition of the Spouse Candidate subclass, and load the test set
Step2: I. Training a SparseLogisticRegression Discriminative Model
We use the training marginals to train a discriminative model that classifies each Candidate as a true or false mention. We'll use a random hyperparameter search, evaluated on the development set labels, to find the best hyperparameters for our model. To run a hyperparameter search, we need labels for a development set. If they aren't already available, we can manually create labels using the Viewer.
Feature Extraction
Instead of using a deep learning approach to start, let's look at a standard sparse logistic regression model. First, we need to extract out features. This can take a while, but we only have to do it once!
Step3: First, reload the training marginals
Step4: Load our development data for tuning
Step5: The following code performs model selection by tuning our learning algorithm's hyperparameters. Note: This requires installing tensorflow (conda install tensorflow).
Step6: Examining Features
Extracting features allows us to inspect and interpret our learned weights | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
# Connect to the database backend and initialize a Snorkel session
from lib.init import *
Explanation: <img align="left" src="imgs/logo.jpg" width="50px" style="margin-right:10px">
Snorkel Workshop: Extracting Spouse Relations <br> from the News
Advanced Part 6: Hyperparameter Tuning via Grid Search
End of explanation
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
Explanation: We repeat our definition of the Spouse Candidate subclass, and load the test set:
End of explanation
from lib.features import hybrid_span_mention_ftrs
from snorkel.annotations import FeatureAnnotator
featurizer = FeatureAnnotator(f=hybrid_span_mention_ftrs)
F_train = featurizer.load_matrix(session, split=0)
F_dev = featurizer.load_matrix(session, split=1)
F_test = featurizer.load_matrix(session, split=2)
if F_train.size == 0:
%time F_train = featurizer.apply(split=0, parallelism=1)
if F_dev.size == 0:
%time F_dev = featurizer.apply_existing(split=1, parallelism=1)
if F_test.size == 0:
%time F_test = featurizer.apply_existing(split=2, parallelism=1)
print(F_train.shape)
print(F_dev.shape)
print(F_test.shape)
Explanation: I. Training a SparseLogisticRegression Discriminative Model
We use the training marginals to train a discriminative model that classifies each Candidate as a true or false mention. We'll use a random hyperparameter search, evaluated on the development set labels, to find the best hyperparameters for our model. To run a hyperparameter search, we need labels for a development set. If they aren't already available, we can manually create labels using the Viewer.
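Conceptually (a sketch only, not the Snorkel API), a random hyperparameter search samples configurations, trains a model for each, scores it on the labeled development set and keeps the best; train_fn and score_fn below are hypothetical callables standing in for the real training and dev-set evaluation.
import random
def random_search(train_fn, score_fn, param_space, n=5, seed=1234):
    rng = random.Random(seed)
    best, best_score = None, float('-inf')
    for _ in range(n):
        params = {k: rng.choice(v) for k, v in param_space.items()}
        model = train_fn(**params)   # train with this configuration
        score = score_fn(model)      # e.g. F1 on the dev set
        if score > best_score:
            best, best_score = model, score
    return best, best_score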
Feature Extraction
Instead of using a deep learning approach to start, let's look at a standard sparse logistic regression model. First, we need to extract out features. This can take a while, but we only have to do it once!
End of explanation
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
Explanation: First, reload the training marginals:
End of explanation
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_dev.shape
Explanation: Load our development data for tuning
End of explanation
from snorkel.learning import RandomSearch
from snorkel.learning.tensorflow import SparseLogisticRegression
seed = 1234
num_model_search = 5
# search over this parameter grid
param_grid = {}
param_grid['batch_size'] = [64, 128]
param_grid['lr'] = [1e-4, 1e-3, 1e-2]
param_grid['l1_penalty'] = [1e-6, 1e-4, 1e-2]
param_grid['l2_penalty'] = [1e-6, 1e-4, 1e-2]
param_grid['rebalance'] = [0.0, 0.5]
model_class_params = {
'n_threads':1
}
model_hyperparams = {
'n_epochs': 30,
'print_freq': 10,
'dev_ckpt_delay': 0.5,
'X_dev': F_dev,
'Y_dev': L_gold_dev
}
searcher = RandomSearch(SparseLogisticRegression, param_grid, F_train, train_marginals,
n=num_model_search, seed=seed,
model_class_params=model_class_params,
model_hyperparams=model_hyperparams)
print("Discriminitive Model Parameter Space (seed={}):".format(seed))
for i, params in enumerate(searcher.search_space()):
print("{} {}".format(i, params))
disc_model, run_stats = searcher.fit(X_valid=F_dev, Y_valid=L_gold_dev, n_threads=1)
run_stats
Explanation: The following code performs model selection by tuning our learning algorithm's hyperparameters. Note: This requires installing tensorflow: conda install tensorflow.
End of explanation
from lib.scoring import *
print_top_k_features(session, disc_model, F_train, top_k=25)
Explanation: Examining Features
Extracting features allows us to inspect and interpret our learned weights
End of explanation |
3,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Optimizing Real World Problems
In this workshop we will code up a model called POM3 and optimize it using the GA we developed in the first workshop.
POM3 is a software estimation model like XOMO for Software Engineering. It is based on Turner
and Boehm’s model of agile development. It compares traditional plan-based approaches
to agile-based approaches in requirements prioritization. It describes how a team decides which
requirements to implement next. POM3 reveals requirements incrementally in random order, with
which developers plan their work assignments. These assignments are further adjusted based on
current cost and priority of requirement. POM3 is a realistic model which takes more runtime than
standard mathematical models(2-100ms, not 0.006-0.3ms)
Step12: The Generic Problem Class
Remember the Problem Class we coded up for GA workshop. Here we abstract it further such that it can be inherited by all the future classes. Go through these utility functions and classes before you proceed further.
Step14: Great. Now that the class and its basic methods is defined, lets extend it for
POM3 model.
POM3 has multiple versions but for this workshop we will code up the POM3A model. It has 9 decisions defined as follows
Culture in [0.1, 0.9]
Criticality in [0.82, 1.20]
Criticality Modifier in [2, 10]
Initially Known in [0.4, 0.7]
Inter-Dependency in [1, 100]
Dynamism in [1, 50]
Size in [0, 4]
Plan in [0, 5]
Team Size in [1, 44]
<img src="pom3.png"/>
The model has 4 objectives
* Cost in [0,10000] - Minimize
* Score in [0,1] - Maximize
* Completion in [0,1] - Maximize
* Idle in [0,1] - Minimize
Step21: Utility functions for genetic algorithms.
Step22: Putting it all together and making the GA
Step23: Visualize
Lets plot the initial population with respect to the final frontier. | Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "latimko"
class O:
Basic Class which
- Helps dynamic updates
- Pretty Prints
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
Explanation: Optimizing Real World Problems
In this workshop we will code up a model called POM3 and optimize it using the GA we developed in the first workshop.
POM3 is a software estimation model like XOMO for Software Engineering. It is based on Turner
and Boehm’s model of agile development. It compares traditional plan-based approaches
to agile-based approaches in requirements prioritization. It describes how a team decides which
requirements to implement next. POM3 reveals requirements incrementally in random order, with
which developers plan their work assignments. These assignments are further adjusted based on
current cost and priority of requirement. POM3 is a realistic model which takes more runtime than
standard mathematical models (2-100ms, not 0.006-0.3ms).
End of explanation
# Few Utility functions
def say(*lst):
Print whithout going to new line
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
Generate a random number between low and high.
decimals incidicate number of decimal places
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
Shuffle a list
random.shuffle(lst)
return lst
class Decision(O):
Class indicating Decision of a problem
def __init__(self, name, low, high):
@param name: Name of the decision
@param low: minimum value
@param high: maximum value
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
Class indicating Objective of a problem
def __init__(self, name, do_minimize=True, low=0, high=1):
@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized
O.__init__(self, name=name, do_minimize=do_minimize, low=low, high=high)
def normalize(self, val):
return (val - self.low)/(self.high - self.low)
class Point(O):
Represents a member of the population
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions[:])
new.objectives = self.objectives[:]
return new
class Problem(O):
Class representing the cone problem.
def __init__(self, decisions, objectives):
Initialize Problem.
:param decisions - Metadata for Decisions
:param objectives - Metadata for Objectives
O.__init__(self)
self.decisions = decisions
self.objectives = objectives
@staticmethod
def evaluate(point):
assert False
return point.objectives
@staticmethod
def is_valid(point):
return True
def generate_one(self, retries = 20):
for _ in xrange(retries):
point = Point([random_value(d.low, d.high) for d in self.decisions])
if self.is_valid(point):
return point
raise RuntimeError("Exceeded max runtimes of %d" % 20)
Explanation: The Generic Problem Class
Remember the Problem Class we coded up for GA workshop. Here we abstract it further such that it can be inherited by all the future classes. Go through these utility functions and classes before you proceed further.
End of explanation
class POM3(Problem):
from pom3.pom3 import pom3 as pom3_helper
helper = pom3_helper()
def __init__(self):
Initialize the POM3 classes
names = ["Culture", "Criticality", "Criticality Modifier", "Initial Known",
"Inter-Dependency", "Dynamism", "Size", "Plan", "Team Size"]
lows = [0.1, 0.82, 2, 0.40, 1, 1, 0, 0, 1]
highs = [0.9, 1.20, 10, 0.70, 100, 50, 4, 5, 44]
# TODO 2: Use names, lows and highs defined above to code up decision
# and objective metadata for POM3.
decisions = [Decision(n, l, h) for n, l, h in zip(names, lows, highs)]
objectives = [Objective("Cost", True, 0, 10000), Objective("Score", False, 0, 1),
Objective("Completion", False, 0, 1), Objective("Idle", True, 0, 1)]
Problem.__init__(self, decisions, objectives)
@staticmethod
def evaluate(point):
if not point.objectives:
point.objectives = POM3.helper.simulate(point.decisions)
return point.objectives
pom3 = POM3()
one = pom3.generate_one()
print(POM3.evaluate(one))
Explanation: Great. Now that the class and its basic methods are defined, let's extend it for
POM3 model.
POM3 has multiple versions but for this workshop we will code up the POM3A model. It has 9 decisions defined as follows
Culture in [0.1, 0.9]
Criticality in [0.82, 1.20]
Criticality Modifier in [2, 10]
Initially Known in [0.4, 0.7]
Inter-Dependency in [1, 100]
Dynamism in [1, 50]
Size in [0, 4]
Plan in [0, 5]
Team Size in [1, 44]
<img src="pom3.png"/>
The model has 4 objectives
* Cost in [0,10000] - Minimize
* Score in [0,1] - Maximize
* Completion in [0,1] - Maximize
* Idle in [0,1] - Minimize
End of explanation
def populate(problem, size):
Create a Point list of length size
population = []
for _ in range(size):
population.append(problem.generate_one())
return population
def crossover(mom, dad):
Create a new point which contains decisions from
the first half of mom and second half of dad
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
def mutate(problem, point, mutation_rate=0.01):
Iterate through all the decisions in the point
and if the probability is less than mutation rate
change the decision(randomly set it between its max and min).
for i, decision in enumerate(problem.decisions):
if random.random() < mutation_rate:
point.decisions[i] = random_value(decision.low, decision.high)
return point
def bdom(problem, one, two):
Return if one dominates two based
on binary domination
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
for i, obj in enumerate(problem.objectives):
better = lt if obj.do_minimize else gt
if better(objs_one[i], objs_two[i]):
dominates = True
elif objs_one[i] != objs_two[i]:
return False
return dominates
def fitness(problem, population, point, dom_func):
Evaluate fitness of a point based on the definition in the previous block.
For example point dominates 5 members of population,
then fitness of point is 5.
return len([1 for another in population if dom_func(problem, point, another)])
def elitism(problem, population, retain_size, dom_func):
Sort the population with respect to the fitness
of the points and return the top 'retain_size' points of the population
fitnesses = []
for point in population:
fitnesses.append((fitness(problem, population, point, dom_func), point))
population = [tup[1] for tup in sorted(fitnesses, reverse=True)]
return population[:retain_size]
Explanation: Utility functions for genetic algorithms.
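As a side note (not from the original workshop text): the bdom function above implements standard binary Pareto dominance, which can be written as
\begin{equation}
x \succ y \iff \big(\forall i:\; f_i(x) \preceq f_i(y)\big) \;\wedge\; \big(\exists j:\; f_j(x) \prec f_j(y)\big)
\end{equation}
where "no worse than" (⪯) means ≤ for minimized objectives and ≥ for maximized ones; the fitness of a point is then simply the number of population members it dominates.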
End of explanation
def ga(pop_size = 100, gens = 250, dom_func=bdom):
problem = POM3()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size, dom_func)
gen += 1
print("")
return initial_population, population
Explanation: Putting it all together and making the GA
End of explanation
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[1] for i in initial_objs]
initial_y = [i[2] for i in initial_objs]
final_x = [i[1] for i in final_objs]
final_y = [i[2] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Score")
plt.xlabel("Completion")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga(gens=50)
plot_pareto(initial, final)
Explanation: Visualize
Lets plot the initial population with respect to the final frontier.
End of explanation |
3,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PRINCIPLE COMPONENT ANALYSIS
Authors
Ndèye Gagnessiry Ndiaye and Christin Seifert
License
This work is licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/
Step1: The Iris dataset represents 3 kind of Iris flowers (Setosa, Versicolour and Virginica) with 4 attributes
Step2: We apply Principal Component Analysis to the Iris dataset with 4-dimensions (all components are keeped).
Step3: We project data in the PCA 4-dimensionnal space.
Step4: The following figure shows successively the projections on (x=PC1,y=PC2), (x=PC1,y=PC3),(x=PC1,y=PC4),(x=PC2,y=PC3),(x=PC2,y=PC4) and (x=PC3,y=PC4). Data is best separated with the components with largest eigenvalues (highest variance). | Python Code:
import pandas as pd
import numpy as np
import pylab as plt
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
Explanation: PRINCIPLE COMPONENT ANALYSIS
Authors
Ndèye Gagnessiry Ndiaye and Christin Seifert
License
This work is licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/
This notebook:
creates PCA projections of Iris dataset
End of explanation
from sklearn import datasets
iris = datasets.load_iris()
x = pd.DataFrame(iris.data)
x.columns = ['SepalLength','SepalWidth','PetalLength','PetalWidth']
x.head()
Explanation: The Iris dataset represents 3 kinds of Iris flowers (Setosa, Versicolour and Virginica) with 4 attributes: sepal length, sepal width, petal length and petal width.
End of explanation
pca = PCA(n_components=4)
pca.fit(iris.data)
eigen_values =pca.explained_variance_
print(eigen_values)
eigen_vectors = pca.components_
print(eigen_vectors)
Explanation: We apply Principal Component Analysis to the Iris dataset with 4 dimensions (all components are kept).
End of explanation
projection = pca.transform(iris.data)
x = pd.DataFrame(projection)
x.columns = ['PC1','PC2','PC3','PC4']
x.head()
Explanation: We project the data into the 4-dimensional PCA space.
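As a brief aside (standard scikit-learn behaviour, stated here for reference rather than taken from the original notebook): transform centers the data with the fitted mean and then projects it onto the principal axes,
\begin{equation}
Y = (X - \bar{X})\,W^{T}
\end{equation}
where the rows of W are the eigenvectors stored in pca.components_ printed above.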
End of explanation
# Show projections
y = iris.target
target_names = iris.target_names
colors = ['navy', 'turquoise', 'darkorange']
lw = 2
plt.figure(figsize=(25,30))
plt.subplot(231)
for color, i, target_name in zip(colors, [0, 1,2], target_names):
plt.scatter(projection[y == i, 0], projection[y == i, 1], color=color, alpha=.8, lw=lw,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('PCA((x=PC1,y=PC2))')
plt.subplot(232)
for color, i, target_name in zip(colors, [0, 1,2], target_names):
plt.scatter(projection[y == i, 0], projection[y == i, 2], color=color, alpha=.8, lw=lw,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.xlabel('PC1')
plt.ylabel('PC3')
plt.title('PCA((x=PC1,y=PC3))')
plt.subplot(233)
for color, i, target_name in zip(colors, [0, 1,2], target_names):
plt.scatter(projection[y == i, 0], projection[y == i, 3], color=color, alpha=.8, lw=lw,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.xlabel('PC1')
plt.ylabel('PC4')
plt.title('PCA((x=PC1,y=PC4))')
plt.subplot(234)
for color, i, target_name in zip(colors, [0, 1,2], target_names):
plt.scatter(projection[y == i, 1], projection[y == i, 2], color=color, alpha=.8, lw=lw,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.xlabel('PC2')
plt.ylabel('PC3')
plt.title('PCA((x=PC2,y=PC3))')
plt.subplot(235)
for color, i, target_name in zip(colors, [0, 1,2], target_names):
plt.scatter(projection[y == i, 1], projection[y == i, 3], color=color, alpha=.8, lw=lw,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.xlabel('PC2')
plt.ylabel('PC4')
plt.title('PCA((x=PC2,y=PC4))')
plt.subplot(236)
for color, i, target_name in zip(colors, [0, 1,2], target_names):
plt.scatter(projection[y == i, 2], projection[y == i, 3], color=color, alpha=.8, lw=lw,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.xlabel('PC3')
plt.ylabel('PC4')
plt.title('PCA((x=PC3,y=PC4))')
plt.show()
Explanation: The following figure shows successively the projections on (x=PC1,y=PC2), (x=PC1,y=PC3),(x=PC1,y=PC4),(x=PC2,y=PC3),(x=PC2,y=PC4) and (x=PC3,y=PC4). Data is best separated with the components with largest eigenvalues (highest variance).
End of explanation |
3,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimizing Python Code: Numba vs Cython
Step1: Pairwise Distance Estimation
Step2: The timing for the results in Jake's post (2013) and the results from this post (2017) are summarized below.
Step3: The timings and speedup numbers for the 2013 and 2017 runs are very different due to differences in library versions and perhaps even the Python version. This post uses Python 3.5 running on Windows. The takeaway here is that NumPy is at least two orders of magnitude faster than pure Python, and the Numba and Cython snippets are about an order of magnitude faster than NumPy in both benchmarks.
I will not rush to make any claims on Numba vs Cython. It is unclear what kinds of optimizations are used in the Cython magic. I would expect the Cython code to be as fast as C, and perhaps some tweaking will get us there. It is really interesting how easy it is to get a performance boost from Numba. From an ease-of-use point of view, Numba is the hands-down winner in this simple example.
Amortizing Payments
Here let's look at one more example. This is an amortizing payment calculation, such as in mortgage payments.
Step4: Here is the equivalent Cython function.
Step5: Here is the Numba version
Step6: Let's compare the performance of the three function types.
Step7: Python
Step8: Numba
Step9: Cython | Python Code:
import numpy as np
import numba
import cython
%load_ext cython
import pandas as pd
numba.__version__, cython.__version__, np.__version__
Explanation: Optimizing Python Code: Numba vs Cython
Goutham Balaraman
I came across an old post by jakevdp on Numba vs Cython. I thought I will revisit this topic because both Numba and Cython has matured significantly over this time period. In this post I am going to do two examples:
1. Pairwise distance estimation example that Jake discusses. The intention is to see how the maturity of these projects has contributed to improvements.
2. A simple cashflow payment calculation of an amortizing bond or mortgage payments. This is a calculation that cannot be vectorized in a numpy sense. So the speedups would have to come from optimizing loops using tools like Numba or Cython.
End of explanation
X = np.random.random((1000, 3))
def pairwise_python(X):
M = X.shape[0]
N = X.shape[1]
D = np.empty((M, M), dtype=np.float)
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = np.sqrt(d)
return D
%timeit -n10 pairwise_python(X)
def pairwise_numpy(X):
return np.sqrt(((X[:, None, :] - X) ** 2).sum(-1))
%timeit -n10 pairwise_numpy(X)
pairwise_numba = numba.jit(pairwise_python)
%timeit -n10 pairwise_numba(X)
%%cython
import numpy as np
cimport cython
from libc.math cimport sqrt
@cython.boundscheck(False)
@cython.wraparound(False)
def pairwise_cython(double[:, ::1] X):
cdef int M = X.shape[0]
cdef int N = X.shape[1]
cdef double tmp, d
cdef double[:, ::1] D = np.empty((M, M), dtype=np.float64)
for i in range(M):
for j in range(M):
d = 0.0
for k in range(N):
tmp = X[i, k] - X[j, k]
d += tmp * tmp
D[i, j] = sqrt(d)
return np.asarray(D)
%timeit -n10 pairwise_cython(X)
Explanation: Pairwise Distance Estimation
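For reference, all four implementations above compute the same Euclidean distance matrix
\begin{equation}
D_{ij} = \sqrt{\sum_{k=1}^{3} (X_{ik} - X_{jk})^{2}}
\end{equation}
for the 1000 random 3-dimensional points in X; only the execution strategy differs.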
End of explanation
df1 = pd.DataFrame({"Time (ms)": [13400,111, 9.12, 9.87], "Speedup": [1, 121, 1469, 1357]},
index=["Python", "Numpy", "Numba", "Cython"])
df2 = pd.DataFrame({"Time (ms)": [2470, 38.3, 4.04, 6.6], "Speedup": [1, 65, 611, 374]},
index=["Python", "Numpy", "Numba", "Cython"])
df = pd.concat([df1, df2], axis = 1, keys=(["2013", "2017"]))
df
Explanation: The timing for the results in Jake's post (2013) and the results from this post (2017) are summarized below.
End of explanation
def amortize_payments_py(B0, R, term, cpr=0.0):
smm = 1. - pow(1 - cpr/100., 1/12.)
r = R/1200.
S = np.zeros(term)
P = np.zeros(term)
I = np.zeros(term)
B = np.zeros(term)
Pr = np.zeros(term)
Bt = B0
pow_term = pow(1+r, term)
A = Bt*r*pow_term/(pow_term - 1)
for i in range(term):
n = term-i
I[i] = Bt * r
Pr[i] = smm*Bt
S[i] = A-I[i] if Bt>1e-2 else 0.
P[i] = S[i] + Pr[i]
Bt = max(Bt - P[i], 0.0)
B[i] = Bt
return S,I, Pr,P, B
Explanation: The timings and speedup numbers for the 2013 and 2017 runs are very different due to differences in library versions and perhaps even the Python version. This post uses Python 3.5 running on Windows. The takeaway here is that NumPy is at least two orders of magnitude faster than pure Python, and the Numba and Cython snippets are about an order of magnitude faster than NumPy in both benchmarks.
I will not rush to make any claims on Numba vs Cython. It is unclear what kinds of optimizations are used in the Cython magic. I would expect the Cython code to be as fast as C, and perhaps some tweaking will get us there. It is really interesting how easy it is to get a performance boost from Numba. From an ease-of-use point of view, Numba is the hands-down winner in this simple example.
Amortizing Payments
Here let's look at one more example. This is an amortizing payment calculation, such as in mortgage payments.
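For reference, the level payment A used in amortize_payments_py above is the standard annuity formula
\begin{equation}
A = B_0 \, \frac{r\,(1+r)^{n}}{(1+r)^{n}-1}, \qquad r = \frac{R}{1200}, \quad n = \mathrm{term},
\end{equation}
and smm = 1 - (1 - CPR/100)^{1/12} converts the annual prepayment rate (cpr) into a single monthly mortality, exactly as written in the function body.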
End of explanation
%%cython
cimport cython
import numpy as np
from libc.math cimport pow
@cython.boundscheck(False)
@cython.wraparound(False)
def amortize_payments_cy(double B0,double R,int term,double cpr=0.0):
cdef double smm = 1. - pow(1 - cpr/100., 1/12.)
cdef double r = R/1200.
cdef double[:] D = np.empty(term, dtype=np.float64)
cdef double[:] S = np.empty(term, dtype=np.float64)
cdef double[:] P = np.empty(term, dtype=np.float64)
cdef double[:] I = np.empty(term, dtype=np.float64)
cdef double[:] B = np.empty(term, dtype=np.float64)
cdef double[:] Pr = np.empty(term, dtype=np.float64)
cdef double Bt = B0
cdef double pow_term = pow(1+r, term)
cdef double A = Bt*r*pow_term/(pow_term - 1.)
cdef double n = term
cdef int i=0
for i in range(term):
n = term-i
I[i] = Bt * r
Pr[i] = smm*Bt
S[i] = A-I[i] if Bt>1e-2 else 0.
P[i] = S[i] + Pr[i]
Bt = max(Bt - P[i], 0.0)
B[i] = Bt
return np.asarray(S),np.asarray(I), np.asarray(Pr),np.asarray(P), np.asarray(B)
Explanation: Here is the equivalent Cython function.
End of explanation
amortize_payments_nb = numba.njit(cache=True)(amortize_payments_py)
Explanation: Here is the Numba version
End of explanation
B0 = 500000.
R = 4.0
term = 360
Explanation: Let's compare the performance of the three function types.
End of explanation
%timeit -n1000 S,I, Pr,P, B = amortize_payments_py(B0, R, term, cpr=10)
Explanation: Python
End of explanation
%timeit -n1000 S,I, Pr,P, B = amortize_payments_nb(B0, R, term, cpr=10)
Explanation: Numba
End of explanation
%timeit -n1000 S,I, Pr,P, B = amortize_payments_cy(B0, R, term, cpr=10)
Explanation: Cython
End of explanation |
3,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rotate Array
Step1: Simple Solution
The simplest solution splits the array at the point to rotate, and constructs a new array using the two parts.
Step2: This solution has both time and space complexity of O(n).
Bubble Rotate
However, if we want to rotate a large array in place (without creating a new array), the solution above is inefficient. The time complexity is O(n), dependent only on the size of the input array, but the space complexity is also O(n) since we need to create a new array with the rotated elements. By applying an algorithm similar to bubble sort we can perform the rotation in place.
Step3: However, although the space complexity is now O(1), the time complexity is O(n * k). It would be good to find a solution that has O(1) space complexity and O(n) time complexity.
Reverse Rotate
Another way of rotating the array is to split the array into two sub arrays at the point of rotation. Each subarray is reversed, before rotating the entire array. This solution achieves O(1) space complexity and O(n). | Python Code:
import sys; sys.path.append('../..')
from puzzles import leet_puzzle
leet_puzzle('rotate-array')
n, k = 7, 4
example_array = list(range(n))
example_array
Explanation: Rotate Array
End of explanation
def rotate_simple(input_array, order):
order %= len(input_array)
return input_array[order:] + input_array[:order]
rotate_simple(example_array, k)
%%timeit
rotate_simple(example_array, k)
Explanation: Simple Solution
The simplest solution splits the array at the point to rotate, and constructs a new array using the two parts.
End of explanation
def rotate_bubble_inplace(input_array, order):
order %= len(input_array)
for i in range(order):
for j in range(len(input_array)):
input_array[j], input_array[j - 1] = input_array[j - 1], input_array[j]
return input_array
example_array2 = list(example_array)
rotate_bubble_inplace(example_array2, k)
%%timeit
rotate_bubble_inplace(example_array, k)
Explanation: This solution has both time and space complexity of O(n).
Bubble Rotate
However, if we want to rotate a large array in place (without creating a new array), the solution above is inefficient. The time complexity is O(n), dependent only on the size of the input array, but the space complexity is also O(n) since we need to create a new array with the rotated elements. By applying an algorithm similar to bubble sort we can perform the rotation in place.
End of explanation
def rotate_reverse_inplace(input_array, order):
length = len(input_array)
order = -order % length
split_location = length - order
input_array[:split_location] = reversed(input_array[:split_location])
input_array[split_location:] = reversed(input_array[split_location:])
input_array.reverse()
return input_array
example_array2 = list(example_array)
rotate_reverse_inplace(example_array2, k)
%%timeit
rotate_reverse_inplace(example_array, k)
Explanation: However, although the space complexity is now O(1), the time complexity is O(n * k). It would be good to find a solution that has O(1) space complexity and O(n) time complexity.
Reverse Rotate
Another way of rotating the array is to split the array into two subarrays at the point of rotation. Each subarray is reversed before the entire array is reversed. This solution achieves O(1) space complexity and O(n) time complexity.
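To make the three reversals concrete, here is a short trace for the same n=7, k=4 example used above (a small sketch, reusing the functions defined in this notebook):
example = list(range(7))  # [0, 1, 2, 3, 4, 5, 6]
# order = -4 % 7 = 3, so split_location = 7 - 3 = 4
# reverse the first 4 elements -> [3, 2, 1, 0, 4, 5, 6]
# reverse the last 3 elements  -> [3, 2, 1, 0, 6, 5, 4]
# reverse the whole list       -> [4, 5, 6, 0, 1, 2, 3]
assert rotate_reverse_inplace(example, 4) == rotate_simple(list(range(7)), 4)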
End of explanation |
3,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AveragePooling2D
[pooling.AveragePooling2D.0] input 6x6x3, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last'
Step1: [pooling.AveragePooling2D.1] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last'
Step2: [pooling.AveragePooling2D.2] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last'
Step3: [pooling.AveragePooling2D.3] input 6x6x3, pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last'
Step4: [pooling.AveragePooling2D.4] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last'
Step5: [pooling.AveragePooling2D.5] input 6x6x3, pool_size=(2, 2), strides=None, padding='same', data_format='channels_last'
Step6: [pooling.AveragePooling2D.6] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last'
Step7: [pooling.AveragePooling2D.7] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last'
Step8: [pooling.AveragePooling2D.8] input 6x6x3, pool_size=(3, 3), strides=None, padding='same', data_format='channels_last'
Step9: [pooling.AveragePooling2D.9] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last'
Step10: [pooling.AveragePooling2D.10] input 5x6x3, pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first'
Step11: [pooling.AveragePooling2D.11] input 5x6x3, pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first'
Step12: [pooling.AveragePooling2D.12] input 4x6x4, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first'
Step13: export for Keras.js tests | Python Code:
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: AveragePooling2D
[pooling.AveragePooling2D.0] input 6x6x3, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last'
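As a general note (the standard Keras/TensorFlow convention, stated here for reference rather than taken from the original notebook), the output size of each pooled spatial dimension is
\begin{equation}
H_{out} = \left\lfloor \frac{H_{in} - p}{s} \right\rfloor + 1 \;\; (\text{padding='valid'}), \qquad H_{out} = \left\lceil \frac{H_{in}}{s} \right\rceil \;\; (\text{padding='same'}),
\end{equation}
with pool size p and stride s, which explains the data_out_shape values printed for each configuration.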
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.1] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 7, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.2] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(273)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.3] input 6x6x3, pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(274)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.4] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(275)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.5] input 6x6x3, pool_size=(2, 2), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(276)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.6] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 7, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(277)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.7] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(278)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.8] input 6x6x3, pool_size=(3, 3), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(279)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.9] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (5, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(280)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.10] input 5x6x3, pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first'
End of explanation
data_in_shape = (5, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(281)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.11] input 5x6x3, pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first'
End of explanation
data_in_shape = (4, 6, 4)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling2D.12] input 4x6x4, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first'
End of explanation
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
3,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Demo - Storing information in EEX
Step8: Storing force field information
So far, only the topology and coordinates of the system are specified, and we are not able to calculate an energy.
To calculate the energy, we need to define the functional form of bond, angle, dihedral, and nonbonded interactions and the associated constants.
In this demo, we store the parameters for the TraPPE United Atom forcefield with harmonic bonds.
\begin{equation}
\ U_{total} = \sum_{bonds}{k_{b}(r-r_{0})^2} + \sum_{angles}{k_{\theta} (\theta - \theta_{eq} )^{2}} + \sum_{dihedrals}{c_{1}[1 + cos(\phi)] + c_{2}[1 - cos(2\phi)] + c_{3}[1 + cos(3\phi)]} + \sum_{i=1}^{N-1}{\sum_{j=i+1}^{N}{ 4\epsilon_{ij}[(\frac{\sigma_{ij}}{r_{ij}})^{12} - (\frac{\sigma_{ij}}{r_{ij}})^6] }}
\end{equation}
Step9: Alternatively, these could have been set directly as pairs without a mixing rule.
# Add NB parameters with pairs
dl.add_nb_parameter(atom_type=1, atom_type2=1, nb_name="LJ", nb_model="AB", nb_parameters=[1.0, 1.0])
dl.add_nb_parameter(atom_type=1, atom_type2=2, nb_name="LJ", nb_model="epsilon/sigma", nb_parameters=[1.0, 1.0])
dl.add_nb_parameter(atom_type=2, atom_type2=2, nb_name="LJ", nb_model="epsilon/sigma", nb_parameters=[1.0, 1.0])
Step10: Reading from MD input files
Typically, this storage would not be done by hand as shown above. Instead, readers and writers for specific softwares are used.
Below, we use the AMBER plugin for EEX to read in an amber file with information for a butane molecule which is equivalent to the one created in the first datalayer. The EEX translator uses the functions displayed above to store all information from the amber prmtop and inpcrd files in the datalayer.
Step11: Comparing the two datalayers
The summary shows a system with 4 atoms, 3 bonds, 2 angles and 4 dihedrals. This differs from the first datalayer in the number of dihedrals and the number of dihedral parameters. However, the evaluated system energy is equivalent.
This is because AMBER stores dihedral angles using a different functional form. Instead of a single equation, dihedrals with multiple terms are built from multiple harmonic equations. The equations are equivalent when evaluated.
\begin{equation}
\sum_{dihedrals}{(V_{0}[1 + cos(0\phi)])} + \sum_{dihedrals}{(V_{1}[1 + cos(n\phi)])} + \sum_{dihedrals}{(V_{2}[1 + cos(2\phi - \pi)])} + \sum_{dihedrals}{(V_{3}[1 + cos(3\phi )])} = \sum_{dihedrals}{c_{0} + c_{1}[1 + cos(\phi)] + c_{2}[1 - cos(2\phi)] + c_{3}[1 + cos(3\phi)]}
\end{equation}
Although not implemented yet, EEX should eventually be able to identify and perform a translation between equivalent functional forms.
Step12: Writing output files
Step13: Translating small peptide structure
Here, we demonstrate translating a small solvated peptide structure (from http | Python Code:
import eex
import os
import pandas as pd
import numpy as np
# Create empty data layer
dl = eex.datalayer.DataLayer("butane", backend="Memory")
dl.summary()
First, we add atoms to the system. Atoms have associated metadata. The possible atom metadata is listed here.
dl.list_valid_atom_properties()
TOPOLOGY:
Information can be added to the datalayer in the form of pandas dataframes. Here, we add atom metadata.
The name of the column corresponds to the atom property.
Populate empty dataframe with relevant information and add to EEX datalayer
# Create empty dataframe
atom_df = pd.DataFrame()
# Create atomic system using pandas dataframe
atom_df["atom_index"] = np.arange(0,4)
atom_df["molecule_index"] = [int(x) for x in np.zeros(4)]
atom_df["residue_index"] = [int(x) for x in np.zeros(4)]
atom_df["atom_name"] = ["C1", "C2", "C3", "C4"]
atom_df["charge"] = np.zeros(4)
atom_df["atom_type"] = [1, 2, 2, 1]
atom_df["X"] = [0, 0, 0, -1.474]
atom_df["Y"] = [-0.4597, 0, 1.598, 1.573]
atom_df["Z"] = [-1.5302, 0, 0, -0.6167]
atom_df["mass"] = [15.0452, 14.02658, 14.02658, 15.0452]
# Add atoms to datalayer
dl.add_atoms(atom_df, by_value=True)
# Print datalayer information
dl.summary()
# Print stored atom properties
dl.get_atoms(properties=None, by_value=True)
TOPOLOGY:
The EEX datalayer now contains four nonbonded atoms. To create butane, atoms must be bonded
to one another.
Add bonds to system
# Create empty dataframes for bonds
bond_df = pd.DataFrame()
# Create column names. Here, "term_index" refers to the bond type index.
# i.e. - if all bonds are the same type, they will have the same term index
bond_column_names = ["atom1", "atom2", "term_index"]
# Create corresponding data. The first row specifies that atom0 is bonded
# to atom 1 and has bond_type id 0
bond_data = np.array([[0, 1, 0,],
[1, 2, 0],
[2, 3, 0]])
for num, name in enumerate(bond_column_names):
bond_df[name] = bond_data[:,num]
dl.add_bonds(bond_df)
dl.summary()
TOPOLOGY:
Add angles and dihedrals to system.
# Follow similar procedure as for bonds
angle_df = pd.DataFrame()
dihedral_df = pd.DataFrame()
angle_column_names = ["atom1", "atom2", "atom3", "term_index"]
dihedral_column_names = ["atom1", "atom2", "atom3", "atom4", "term_index"]
angle_data = np.array([[0, 1, 2, 0,],
[1, 2, 3, 0],])
dihedral_data = np.array([[0, 1, 2, 3, 0,]])
for num, name in enumerate(angle_column_names):
angle_df[name] = angle_data[:,num]
dl.add_angles(angle_df)
for num, name in enumerate(dihedral_column_names):
dihedral_df[name] = dihedral_data[:,num]
dl.add_dihedrals(dihedral_df)
dl.summary()
Explanation: Demo - Storing information in EEX
End of explanation
EEX FORCE FIELD PARAMETERS
A main component of EEX is internally stored metadata which defines the details of functional forms, including form, constants,
unit types, and default units (if the user does not overrride this option).
This metadata is stored as human-readable dictionaries which can easily be added to or manipulated.
# Here, we examine the metadata present in the bond metadata for a harmonic bond
bond_metadata = eex.metadata.two_body_terms.two_body_metadata
for k, v in bond_metadata["forms"]["harmonic"].items():
print(k, v)
FORCE FIELD PARAMETERS
To add bonds (or other parameters) using this metadata, the user specifies the form using a keyword ("harmonic") that
matches to EEX's metadata.
Values for the contstants are passed using a dictionary with the 'parameters' defined in the metadata as keys.
Each bond type is given a uid, and default dimensions may be overwritten by the user using a dictionary
and the 'utype' argument
# Here, in add_term_parameter, the first argument is the term order. '2'
# corresponds to bonded atoms.
dl.add_term_parameter(2, "harmonic", {'K': 300.9, 'R0': 1.540}, uid=0, utype={'K':"kcal * mol **-1 * angstrom ** -2",
'R0': "angstrom"})
# If units or parameters are not compatible with the metadata, the datalayer will not allow storage of the parameter.
# Here, we have changed 'K' to simply "kcal". This will fail (uncomment to test)
#dl.add_term_parameter(2, "harmonic", {'K': 300.9, 'R0': 1.540}, uid=0, utype={'K':"kcal",'R0': "angstrom"})
## Add harmonic angle parameters
dl.add_term_parameter(3, "harmonic", {'K': 62.100, 'theta0': 114}, uid=0, utype={'K':'kcal * mol ** -1 * radian ** -2',
'theta0': 'degree'})
# Add OPLS dihedral parameter
dl.add_term_parameter(4, "opls", {'K_1': 1.41103414, 'K_2': -0.27101489,
'K_3': 3.14502869, 'K_4': 0}, uid=0, utype={'K_1': 'kcal * mol ** -1',
'K_2': 'kcal * mol ** -1',
'K_3': 'kcal * mol ** -1',
'K_4': 'kcal * mol ** -1'})
NONBOND PARAMETERS
For nonbond parametets, we currently provide support for Lennard Jones and Buckingham potentials
Most programs use pair-wise Lennard Jones potentials for nonbond interactions. Our internal metadata stores these as A
and B parameters. However, users may specify other forms such as epsilon/sigma, epsilon, Rmin, etc.
Lennard Jones parameters can be added as a pair (atom_type1, atom_type2) or for a single atom type with a mixing rule.
dl.add_nb_parameter(atom_type=1, nb_name="LJ",
nb_model="epsilon/sigma", nb_parameters={'sigma': 3.75, 'epsilon': 0.1947460018},
utype={'sigma': 'angstrom', 'epsilon': 'kcal * mol ** -1'})
dl.add_nb_parameter(atom_type=2, nb_name="LJ",
nb_model="epsilon/sigma", nb_parameters={'sigma': 3.95, 'epsilon': 0.0914112887},
utype={'sigma': 'angstrom', 'epsilon': 'kcal * mol ** -1'})
dl.set_mixing_rule('lorentz-berthelot')
# Retrieve stored parameters
print("All stored parameters\n", dl.list_nb_parameters("LJ"), "\n\n")
# To apply the mixing rule:
dl.build_LJ_mixing_table()
print("All stored parameters\n", dl.list_nb_parameters("LJ"), "\n\n")
# These can also be retrieved for only single atoms, or for atom pairs by using itype='single' or itype='pairs'
pair_interactions = dl.list_nb_parameters("LJ", itype="pair")
print("Pair parameters\n", pair_interactions)
Explanation: Storing force field information
So far, only the topology and coordinates of the system are specified, and we are not able to calculate an energy.
To calculate the energy, we need to define the functional form of bond, angle, dihedral, and nonbonded interactions and the associated constants.
In this demo, we store the parameters for the TraPPE United Atom forcefield with harmonic bonds.
\begin{equation}
\ U_{total} = \sum_{bonds}{k_{b}(r-r_{0})^2} + \sum_{angles}{k_{\theta} (\theta - \theta_{eq} )^{2}} + \sum_{dihedrals}{c_{1}[1 + cos(\phi)] + c_{2}[1 - cos(2\phi)] + c_{3}[1 + cos(3\phi)]} + \sum_{i=1}^{N-1}{\sum_{j=i+1}^{N}{ 4\epsilon_{ij}[(\frac{\sigma_{ij}}{r_{ij}})^{12} - (\frac{\sigma_{ij}}{r_{ij}})^6] }}
\end{equation}
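For completeness, the 'lorentz-berthelot' rule requested with set_mixing_rule above is the standard combining rule (quoted here as the textbook definition, not as a statement about EEX internals):
\begin{equation}
\sigma_{ij} = \frac{\sigma_{ii} + \sigma_{jj}}{2}, \qquad \epsilon_{ij} = \sqrt{\epsilon_{ii}\,\epsilon_{jj}}
\end{equation}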
End of explanation
dl.summary()
# Evaluate system energy
energy_system1 = dl.evaluate(utype="kcal * mol ** -1")
print(energy_system1)
Explanation: Alternatively, these could have been set directly as pairs without a mixing rule.
# Add NB parameters with pairs
dl.add_nb_parameter(atom_type=1, atom_type2=1, nb_name="LJ", nb_model="AB", nb_parameters=[1.0, 1.0])
dl.add_nb_parameter(atom_type=1, atom_type2=2, nb_name="LJ", nb_model="epsilon/sigma", nb_parameters=[1.0, 1.0])
dl.add_nb_parameter(atom_type=2, atom_type2=2, nb_name="LJ", nb_model="epsilon/sigma", nb_parameters=[1.0, 1.0])
End of explanation
# Preview an amber prmtop (parameter-topology file) for Amber.
butane_file = os.path.join("..", "examples", "amber","alkanes", "trappe_butane_single_molecule.prmtop")
f = open(butane_file)
print(f.read())
f.close()
# Create new datalayer and populate using amber reader
dl_amber = eex.datalayer.DataLayer("butane_amber")
eex.translators.amber.read_amber_file(dl_amber, butane_file)
dl_amber.summary()
Explanation: Reading from MD input files
Typically, this storage would not be done by hand as shown above. Instead, readers and writers for specific softwares are used.
Below, we use the AMBER plugin for EEX to read in an amber file with information for a butane molecule which is equivalent to the one created in the first datalayer. The EEX translator uses the functions displayed above to store all information from the amber prmtop and inpcrd files in the datalayer.
End of explanation
energy_system2 = dl_amber.evaluate(utype="kcal * mol ** -1")
for k in energy_system1:
energy_difference = energy_system1[k] - energy_system2[k]
print(k," difference:\t %.3f" % energy_difference)
# Compare stored NB parameters
eex.testing.dict_compare(dl_amber.list_nb_parameters("LJ"), dl.list_nb_parameters("LJ", itype="pair"))
Explanation: Comparing the two datalayers
The summary shows a system with 4 atoms, 3 bonds, 2 angles and 4 dihedrals. This differs from the first datalayer in the number of dihedrals and the number of dihedral parameters. However, the evaluated system energy is equivalent.
This is because AMBER stores dihedral angles using a different functional form. Instead of a single equation, dihedrals with multiple terms are built from multiple harmonic equations. The equations are equivalent when evaluated.
\begin{equation}
\sum_{dihedrals}{(V_{0}[1 + cos(0\phi)])} + \sum_{dihedrals}{(V_{1}[1 + cos(n\phi)])} + \sum_{dihedrals}{(V_{2}[1 + cos(2\phi - \pi)])} + \sum_{dihedrals}{(V_{3}[1 + cos(3\phi )])} = \sum_{dihedrals}{c_{0} + c_{1}[1 + cos(\phi)] + c_{2}[1 - cos(2\phi)] + c_{3}[1 + cos(3\phi)]}
\end{equation}
Although not implemented yet, EEX should eventually be able to identify and perform a translation between equivalent functional forms.
End of explanation
# We can now write the amber file we read for lammps.
eex.translators.lammps.write_lammps_file(dl_amber, "output_lammps.data", "output_lammps.in")
# Write a local copy of the amber datalayer for amber.
eex.translators.amber.write_amber_file(dl_amber, "amber_output.prmtop")
## Read the written file into a datalayer ##
dl_lammps = eex.datalayer.DataLayer("butane_lammps")
eex.translators.lammps.read_lammps_input_file(dl_lammps, "output_lammps.in")
f = open("output_lammps.data")
print(f.read())
f.close()
lammps_energy = dl_lammps.evaluate(utype="kcal * mol ** -1")
# Compare energies
for k in energy_system1:
energy_difference = lammps_energy[k] - energy_system2[k]
print(k," difference:\t %.3f" % energy_difference)
Explanation: Writing output files
End of explanation
dl_dna = eex.datalayer.DataLayer("DNA_amber")
DNA_file = os.path.join("..", "examples", "amber","peptides", "alanine_dipeptide.prmtop")
eex.translators.amber.read_amber_file(dl_dna, DNA_file)
dl_dna.summary()
eex.translators.lammps.write_lammps_file(dl_dna,"lammps_ala.data", "lammps_ala.in")
f = open("lammps_ala.data")
print(f.read())
f.close()
Explanation: Translating small peptide structure
Here, we demonstrate translating a small solvated peptide structure (from http://ambermd.org/tutorials/basic/tutorial0/index.htm) to LAMMPS using EEX.
End of explanation |
3,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 06
Data preparation and model evaluation exercise with Titanic data
We'll be working with a dataset from Kaggle's Titanic competition
Step1: Exercise 6.1
Impute the missing values of the age and Embarked
Step2: Exercise 6.3
Convert the Sex and Embarked to categorical features
Step3: Exercise 6.3 (2 points)
From the set of features ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
*Note, use the created categorical features for Sex and Embarked
Select the features that maximize the accuracy the model using K-Fold cross-validation | Python Code:
import pandas as pd
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv'
titanic = pd.read_csv(url, index_col='PassengerId')
titanic.head()
Explanation: Exercise 06
Data preparation and model evaluation exercise with Titanic data
We'll be working with a dataset from Kaggle's Titanic competition: data, data dictionary
Goal: Predict survival based on passenger characteristics
The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
Read the data into Pandas
End of explanation
titanic.Age.fillna(titanic.Age.median(), inplace=True)
titanic.isnull().sum()
titanic.Embarked.mode()
titanic.Embarked.fillna('S', inplace=True)
titanic.isnull().sum()
Explanation: Exercise 6.1
Impute the missing values of the Age and Embarked columns
End of explanation
titanic['Sex_Female'] = titanic.Sex.map({'male':0, 'female':1})
titanic.head()
embarkedummy = pd.get_dummies(titanic.Embarked, prefix='Embarked')
embarkedummy.drop(embarkedummy.columns[0], axis=1, inplace=True)
titanic = pd.concat([titanic, embarkedummy], axis=1)
titanic.head()
Explanation: Exercise 6.2
Convert the Sex and Embarked to categorical features
End of explanation
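As an aside (not part of the original exercise), the same encoding can be produced in one step with pandas get_dummies; this sketch assumes the raw Sex and Embarked columns are still present in the dataframe:
# Replaces Sex and Embarked with 0/1 indicator columns, dropping one level of each
titanic_encoded = pd.get_dummies(titanic, columns=['Sex', 'Embarked'], drop_first=True)
titanic_encoded.head()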
y = titanic['Survived']
features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare','Sex_Female', 'Embarked_Q', 'Embarked_S']
import numpy as np
def comb(n,k) :
return np.math.factorial(n) / (np.math.factorial(n-k) * np.math.factorial(k))
np.sum([comb(8,i) for i in range(0,8)])
import itertools
possible_models = []
for i in range(1,len(features)+1):
possible_models.extend(list(itertools.combinations(features,i)))
possible_models
import itertools
possible_models = []
for i in range(1,len(features)+1):
possible_models.extend(list(itertools.combinations(features,i)))
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score
Y = titanic.Survived
resultado = pd.DataFrame(index=possible_models,columns=['presicion'])
for i in range(len(possible_models)):
X = titanic[list(possible_models[i])]
reglogistica = LogisticRegression(C=1e9)
resultado.iloc[i] = cross_val_score(reglogistica, X, Y, cv=10, scoring='accuracy').mean()
resultado.head()
resultado.sort_values('presicion',ascending=False).head(1)
Explanation: Exercise 6.3 (2 points)
From the set of features ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
*Note, use the created categorical features for Sex and Embarked
Select the features that maximize the accuracy of the model using K-Fold cross-validation
End of explanation |
3,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <i class="fa fa-diamond"></i> First, pimp out your notebook!
Step2: A bit of statistics
Step3: We make two lists: the first will contain the ages of the Clubes de Ciencia students and the second the number of people of each age
Step4: Uniform distribution
Mexican lotería
Step5: Poisson distribution
Number of Facebook friend requests in a week
Step6: Normal distribution
Distribution of grades on an exam
Step7: One way to automate this is
Step8: Probability in a normal distribution
$1 \sigma$ = 68.26%
$2 \sigma$ = 95.44%
$3 \sigma$ = 99.74%
$4 \sigma$ = 99.995%
$5 \sigma$ = 99.99995%
Activities
Plot the following | Python Code:
from IPython.core.display import HTML
import os
def css_styling():
"""Load default custom.css file from ipython profile"""
base = os.getcwd()
styles = "<style>\n%s\n</style>" % (open(os.path.join(base,'files/custom.css'),'r').read())
return HTML(styles)
css_styling()
Explanation: <i class="fa fa-diamond"></i> First, pimp out your notebook!
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: A bit of statistics
End of explanation
Edades = np.array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24])
Frecuencia = np.array([10, 22, 39, 32, 26, 10, 7, 5, 8, 1])
print sum(Frecuencia)
plt.bar(Edades, Frecuencia)
plt.show()
Explanation: We make two lists: the first will contain the ages of the Clubes de Ciencia students and the second the number of people of each age
End of explanation
x1=np.random.rand(50)
plt.hist(x1)
plt.show()
Explanation: Uniform distribution
Mexican lotería
End of explanation
s = np.random.poisson(5,20)
plt.hist(s)
plt.show()
Explanation: Poisson distribution
Number of Facebook friend requests in a week
End of explanation
x=np.random.randn(50)
plt.hist(x)
plt.show()
x=np.random.randn(100)
plt.hist(x)
plt.show()
x=np.random.randn(200)
plt.hist(x)
plt.show()
Explanation: Normal distribution
Distribution of grades on an exam
End of explanation
tams = [1,2,3,4,5,6,7]
for tam in tams:
numeros = np.random.randn(10**tam)
plt.hist(numeros,bins=20 )
plt.title('%d' %tam)
plt.show()
numeros = np.random.normal(loc=2.0,scale=2.0,size=1000)
plt.hist(numeros)
plt.show()
Explanation: One way to automate this is:
End of explanation
x = np.random.normal(loc=2.0,scale=2.0,size=100)
y = np.random.normal(loc=2.0,scale=2.0,size=100)
plt.scatter(x,y)
plt.show()
Explanation: Probability in a normal distribution
$1 \sigma$ = 68.26%
$2 \sigma$ = 95.44%
$3 \sigma$ = 99.74%
$4 \sigma$ = 99.995%
$5 \sigma$ = 99.99995%
Activities
Plot the following:
Create 3 distributions varying the mean
Create 3 distributions varying the std
Create 2 distributions with some overlap
Gaussian bell curves in Nature
Exit exams in Polish high schools:
2D normal distribution
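A quick numerical check of the sigma percentages above (not part of the original notebook; it assumes scipy is available):
from scipy import stats
for k in range(1, 6):
    coverage = stats.norm.cdf(k) - stats.norm.cdf(-k)  # probability mass within +/- k sigma
    print('%d sigma = %.5f%%' % (k, 100 * coverage))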
End of explanation |
3,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spherical Harmonic Normalizations and Parseval's theorem
The variance of a single spherical harmonic
Here we will demonstrate the relationship between a function expressed in spherical harmonics and its variance. To make things simple, we will consider only a single harmonic, and note that the results are easily extended to more complicated functions given that the spherical harmonics are orthogonal.
We start by initializing a new coefficient class to zero and setting a single coefficient to 1.
Step1: Given that we will perform some numerical integrations with this function below, we expand it onto a grid appropriate for integration by Gauss-Legendre quadrature
Step2: Next, we would like to calculate the variance of this single spherical harmonic. Since each spherical harmonic has a zero mean, the variance is equal to the integral of the function squared (i.e., its norm N) divided by the surface area of the sphere (4 pi)
Step3: Alternatively, we could have done the integration with a 'DH' grid instead
Step4: Parseval's theorem
We have seen in the previous section that a single 4-pi normalized spherical harmonic has unit variance. In spectral analysis, the word power is often used to mean the value of the function squared divided by the area it spans, and if the function has zero mean, power is equivalent to variance. Since the spherical harmonics are orthogonal functions on the sphere, there exists a simple relationship between the power of the function and its spherical harmonic coefficients
Step5: If the coefficients of all spherical harmonics are independent, the distribution will become Gaussian as predicted by the central limit theorem. If the individual coefficients were Gaussian in the first place, the distribution would naturally be Gaussian as well. We illustrate this below.
First, we create a random realization of normally distributed coefficients whose power spectrum follows a power law
Step6: Next, we calculate a histogram of the data using the Gauss-Legendre quadrature points and weights
Step7: Finally, we compute the expected distribution and plot the two | Python Code:
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import matplotlib.pyplot as plt
import numpy as np
from pyshtools.shclasses import SHCoeffs, SHGrid, SHWindow
lmax = 100
coeffs = SHCoeffs.from_zeros(lmax)
coeffs.set_coeffs(values=[1], ls=[5], ms=[2])
Explanation: Spherical Harmonic Normalizations and Parseval's theorem
The variance of a single spherical harmonic
Here we will demonstrate the relationship between a function expressed in spherical harmonics and its variance. To make things simple, we will consider only a single harmonic, and note that the results are easily extended to more complicated functions given that the spherical harmonics are orthogonal.
We start by initializing a new coefficient class to zero and setting a single coefficient to 1.
End of explanation
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
Explanation: Given that we will perform some numerical integrations with this function below, we expand it onto a grid appropriate for integration by Gauss-Legendre quadrature:
End of explanation
N = ((grid.data**2) * grid.weights[np.newaxis,:].T).sum() * (2. * np.pi / grid.nlon)
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
Explanation: Next, we would like to calculate the variance of this single spherical harmonic. Since each spherical harmonic has a zero mean, the variance is equal to the integral of the function squared (i.e., its norm N) divided by the surface area of the sphere (4 pi):
$$N_{lm} = \int_\Omega Y^2_{lm}(\mathbf{\theta, \phi})~d\Omega$$
$$Var(Y_{lm}(\mathbf{\theta, \phi})) = \frac{N_{lm}}{4 \pi}$$
When the spherical harmonics are 4-pi normalized, N is equal to 4 pi for all values of l and m. Thus, by definition, the variance of each harmonic is 1 for 4-pi normalized harmonics.
We can verify the mathematical value of N by doing the integration manually. For this, we will perform a Gauss-Legendre quadrature, making use of the latitudinal weighting function that is stored in the SHGrid class instance.
End of explanation
from pyshtools.utils import DHaj
grid_dh = coeffs.expand('DH')
weights = DHaj(grid_dh.nlat)
N = ((grid_dh.data**2) * weights[np.newaxis,:].T).sum() * 2. * np.sqrt(2.) * np.pi / grid_dh.nlon
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
Explanation: Alternatively, we could have done the integration with a 'DH' grid instead:
End of explanation
power = coeffs.spectrum()
print('Total power is ', power.sum())
Explanation: Parseval's theorem
We have seen in the previous section that a single 4-pi normalized spherical harmonic has unit variance. In spectral analysis, the word power is often used to mean the value of the function squared divided by the area it spans, and if the function has zero mean, power is equivalent to variance. Since the spherical harmonics are orthogonal functions on the sphere, there exists a simple relationship between the power of the function and its spherical harmonic coefficients:
$$\frac{1}{4 \pi} \int_{\Omega} f^2(\mathbf{\theta, \phi})~d\Omega = \sum_{lm} C_{lm}^2 \frac{N_{lm}}{4 \pi}$$
This is Parseval's theorem for data on the sphere. For 4-pi normalized harmonics, the last fraction on the right hand side is unity, and the total variance (power) of the function is the sum of the coefficients squared. Knowing this, we can confirm the result of the previous section by showing that the total power of the l=5, m=2 harmonic is unity:
End of explanation
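As an extra cross-check (not part of the original notebook; it reuses only pyshtools calls that appear elsewhere here), the same identity can be verified for an arbitrary random coefficient set by comparing the Gauss-Legendre integral of the squared function with the sum of the squared coefficients:
degrees = np.arange(31, dtype=float)
check_coeffs = SHCoeffs.from_random(1. / (1. + degrees)**2)  # arbitrary illustrative power spectrum
check_grid = check_coeffs.expand('GLQ')
integral = ((check_grid.data**2) * check_grid.weights[np.newaxis, :].T).sum() * (2. * np.pi / check_grid.nlon)
print('Integral / (4 pi) = ', integral / (4. * np.pi))
print('Sum of squared coefficients = ', check_coeffs.spectrum().sum())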
lmax = 200
a = 30
ls = np.arange(lmax+1, dtype=float)
power = 1. / (1. + (ls / a) ** 2) ** 1
coeffs = SHCoeffs.from_random(power)
power_random = coeffs.spectrum()
total_power = power_random.sum()
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
Explanation: If the coefficients of all spherical harmonics are independent, the distribution will become Gaussian as predicted by the central limit theorem. If the individual coefficients were Gaussian in the first place, the distribution would naturally be Gaussian as well. We illustrate this below.
First, we create a random realization of normally distributed coefficients whose power spectrum follows a power law:
End of explanation
weights = (grid.weights[np.newaxis,:].T).repeat(grid.nlon, axis=1) * (2. * np.pi / grid.nlon)
bins = np.linspace(-50, 50, 30)
center = 0.5 * (bins[:-1] + bins[1:])
dbin = center[1] - center[0]
hist, bins = np.histogram(grid.data, bins=bins, weights=weights, density=True)
Explanation: Next, we calculate a histogram of the data using the Gauss-Legendre quadrature points and weights:
End of explanation
normal_distribution = np.exp( - center ** 2 / (2 * total_power))
normal_distribution /= dbin * normal_distribution.sum()
fig, ax = plt.subplots(1, 1)
ax.plot(center, hist, '-x', c='blue', label='computed distribution')
ax.plot(center, normal_distribution, c='red', label='predicted distribution')
ax.legend(loc=3);
Explanation: Finally, we compute the expected distribution and plot the two:
End of explanation |
3,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test application
Author
Step1: Test data from the application loaded into a simple data container. One row contains the data of a click. If not changed the first file from files is loaded.
| circle_x | circle_y | click_x | click_y | timestamp | radius |
|----------|----------|---------|---------|---------------------------|--------|
| 391 | 207 | 426 | 232 | 2020-01-23 15:59:09.367584| 30 |
Step3: ID is the index of difficulty.
$ID = \log_2 \left(\dfrac{2D}{W}\right)$
D is the distance from the starting point to the center of the target.
W is the width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within $\pm \frac{W}{2}$ of the target's center.
MT is the average time to complete the movement.
a and b are constants that depend on the choice of input device and are usually determined empirically by regression analysis. a defines the intercept on the y-axis and is often interpreted as a delay. The b-parameter is a slope and describes an acceleration. Both parameters show the linear dependency in Fitts' Law.
$MT = a + b \cdot ID = a + b \cdot \log_2 \left(\dfrac{2D}{W}\right) $
Step4: Calculate all models that can be drawn on the graph later on.
Step5: All data is put into a pandas dataframe for easier selection and matplot drawing | Python Code:
files = ['clicks_2020-01-24 09:48:51_touchpad_14"_monitor.csv',
'clicks_2020-01-24 09:44:46_mouse_24"_monitor.csv',
'clicks_2020-01-23 16:00:32_mouse_24"_monitor.csv']
Explanation: Test application
Author: Nils Verheyen\
Matriculation number: 3043171
Mouse and touchpad input were tested on full hd screens with 24" and 14".
End of explanation
import csv
import numpy as np
import pandas as pd
from dataclasses import dataclass
from datetime import datetime, timedelta
@dataclass
class CircleClick():
circle_x: int
circle_y: int
click_x: int
click_y: int
radius: int
timestamp: datetime
clicks = []
with open(files[0]) as src:
reader = csv.reader(src)
for row in reader:
circle_click = CircleClick(circle_x=int(row[0]), circle_y=int(row[1]),
click_x=int(row[2]), click_y=int(row[3]),
timestamp=datetime.strptime(row[4], '%Y-%m-%d %H:%M:%S.%f'),
radius=int(row[5]))
clicks.append(circle_click)
clicks[0]
Explanation: Test data from the application loaded into a simple data container. One row contains the data of a click. If not changed the first file from files is loaded.
| circle_x | circle_y | click_x | click_y | timestamp | radius |
|----------|----------|---------|---------|---------------------------|--------|
| 391 | 207 | 426 | 232 | 2020-01-23 15:59:09.367584| 30 |
End of explanation
def distance(x1: int, x2: int, y1: int, y2: int):
a = np.power(x1 - x2, 2)
b = np.power(y1 - y2, 2)
distance = np.sqrt(a + b)
return distance
distance(0, 1, 0, 1)
@dataclass
class FittsModel:
D: float = 0
W: float = 0
ID: float = 0
MT: timedelta = timedelta(0)
def calculate(self, start: CircleClick, end: CircleClick):
"""The model calculates its values D, W, ID and MT
based on two clicks."""
self.D = distance(start.click_x,
end.circle_x + end.radius,
start.click_y,
end.circle_y + end.radius)
self.W = end.radius * 2
self.ID = np.log2(2 * self.D / self.W)
self.MT = end.timestamp - start.timestamp
@property
def MT_in_millis(self):
millis, micros = divmod(self.MT.microseconds, 1000)
return self.MT.total_seconds() * 1000 + millis + micros / 1000
Explanation: ID is the index of difficulty.
$ID = \log_2 \left(\dfrac{2D}{W}\right)$
D is the distance from the starting point to the center of the target.
W is the width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within $\pm \frac{W}{2}$ of the target's center.
MT is the average time to complete the movement.
a and b are constants that depend on the choice of input device and are usually determined empirically by regression analysis. a defines the intercept on the y-axis and is often interpreted as a delay. The b-parameter is a slope and describes an acceleration. Both parameters show the linear dependency in Fitts' Law.
$MT = a + b \cdot ID = a + b \cdot \log_2 \left(\dfrac{2D}{W}\right) $
End of explanation
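As a small worked example (not part of the original notebook; the numbers are invented), a target of width W = 60 px at distance D = 480 px gives ID = log2(2*480/60) = 4 bits, so with hypothetical constants a = 100 ms and b = 150 ms/bit the model predicts MT = 100 + 150*4 = 700 ms:
D_example, W_example = 480, 60
ID_example = np.log2(2 * D_example / W_example)  # 4.0 bits
a_example, b_example = 100, 150                  # hypothetical intercept (ms) and slope (ms/bit)
print(ID_example, a_example + b_example * ID_example)  # 4.0 700.0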
models = []
for i in range(1, len(clicks)):
model = FittsModel()
model.calculate(clicks[i - 1], clicks[i])
models.append(model)
models[0]
Explanation: Calculate all models that can be drawn on the graph later on.
End of explanation
data = {'D': [], 'W': [], 'ID': [], 'MT': []}
for m in models:
data['D'].append(m.D)
data['W'].append(m.W)
data['ID'].append(m.ID)
data['MT'].append(m.MT_in_millis)
df = pd.DataFrame(data=data)
df
widths = set([m.W for m in models])
widths
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['figure.figsize']
matplotlib.rcParams['figure.figsize'] = [12, 8]
df['ID'].mean()
df.groupby(['W']).mean()
df.groupby(['W']).median()
from sklearn.linear_model import LinearRegression
# uncomment the next line to select a specific circle width
# widths = [100]
for width in widths:
_df = df[df['W'] == width]
model = LinearRegression()
model.fit(_df[['ID']], _df[['MT']])
min_x = min(df['ID'])
max_x = max(df['ID'])
predicted = model.predict([[min_x], [max_x]])
plt.scatter(x=_df['ID'], y=_df['MT'])
plt.plot([min_x, max_x], predicted)
plt.legend(widths)
plt.show()
Explanation: All data is put into a pandas dataframe for easier selection and matplot drawing
End of explanation |
3,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preparations
Import libraries
Step1: Load data
Step2: Filtering
There are two goals
Step3: Save the hard-earned JDs and skills after all these filters
Step4: Sample job postings
Step5: Find stopword-like skills by TF-IDF
Step6: Setting the IDF threshold to 1 did not catch stop words like 'com' and 'can', so I increase the IDF threshold.
Step7: Filter out stopword skills
Step8: Handle reposted jobs
There are jobs reposted several times as shown below. Thus, job ids in job_posts are not unique.
Step9: Remove jobs without title
Step10: Clean employer data
Step11: Merge doc_index, posts and employers to get industry info
Note
Step12: Weird duplications in result of the first merge
The duplications were then detected as below
Step13: The problem is due to upper vs. lower case in employer names! That's why we need to standardize them.
Lesson learnt | Python Code:
import my_util as my_util; from my_util import *
Explanation: Preparations
Import libraries:
End of explanation
HOME_DIR = 'd:/larc_projects/job_analytics/'
DATA_DIR = HOME_DIR + 'data/clean/'
# job descriptions (JDs)
init_posts = pd.read_csv(DATA_DIR + 'jd_df.csv')
skill_df = pd.read_csv(DATA_DIR + 'skill_index.csv')
init_skills = skill_df['skill']
jd_docs = list(init_posts['clean_text'].apply(str.lower))
n_skill, n_jd = len(init_skills) , init_posts.shape[0]
print('Initial no. of skills: %d' %n_skill)
print('Initial no. of JDs: %d' %n_jd) # some garbage JDs with no text already removed
skill_df.head(3)
Explanation: Load data
End of explanation
n_iter, posts = 0, init_posts
n_post = posts.shape[0]
stop_cond, thres = False, .98
while not stop_cond:
n_iter = n_iter + 1
print('Iteration %d' %n_iter)
new_posts = extractJDs(posts, skills, min_n_skill=2)
n_new_post = new_posts.shape[0]
print('No. of posts after filtering: %d' %n_new_post)
skill_df = extractSkills(skills, new_posts, min_n_jd=2)
new_skills = skill_df['skill']
print('No. of skills after filtering: %d' %len(new_skills) )
stop_cond = (n_new_post >= thres*n_post) and (len(new_skills) >= thres*len(skills))
posts = new_posts
n_post = posts.shape[0]
skills = new_skills
# end
Explanation: Filtering
There are two goals: i) to remove JDs with too few skills, and ii) to remove skills occurring in too few JDs. Thus, we repeat the following process until the two goals are satisfied.
+ Count no. of unique skills in each JD
+ Remove JDs with $\leq 1$ skills
+ Count no. of JDs containing each skill
+ Remove skills occurring in $\leq 1$ JDs
End of explanation
# print min(posts['n_uniq_skill'])
# print min(skill_df['n_jd_with_skill'])
posts.to_csv(DATA_DIR + 'filtered/posts.csv', index=False)
skill_df.to_csv(DATA_DIR + 'filtered/skills.csv', index=False)
Explanation: Save the hard-earned JDs and skills after all these filters:
End of explanation
posts = posts.sort_values(by='n_uniq_skill', ascending=False)
posts.head()
# Sanity check by pulling up skills occurring in the JD with most skills
# post_with_most_skill = init_posts.query('job_id == {}'.format('JOB-2015-0196805') )
train_idx, test_idx = mkPartition(n_instance, p=80)
X_train, X_test = doc_skill_tfidf[train_idx, :], doc_skill_tfidf[test_idx, :]
n_train, n_test = X_train.shape[0], X_test.shape[0]
print('Train set has %d JDs and test set has %d JDs' %(n_train, n_test))
stats = pd.DataFrame({'n_train': n_train, 'n_test': n_test, 'n_jd (train & test)': n_post, 'n_skill': len(skills)}, index=[0])
stats.to_csv(RES_DIR + 'stats.csv', index=False)
Explanation: Sample job postings:
End of explanation
from ja_helpers import toIDF
idf = toIDF(terms=skills, doc_term_mat=doc_skill)
idf.sort_values('idf_log10', inplace=True)
idf.to_csv(SKILL_DIR + 'skill_idf.csv', index=False)
idf['idf_log10'] = idf['idf'] * np.log10(np.e)
quantile(idf['idf_log10'])
idf_log10 = idf['idf_log10']
n, bins, patches = plt.hist(idf_log10, bins=np.unique(idf_log10))
plt.xlabel('IDF of term (log-10 scale)')
plt.ylabel('# terms')
plt.grid(True)
plt.savefig(SKILL_DIR + 'idf_hist.pdf')
plt.show()
plt.close()
# terms which occur in at least 10% of docs
idf.query('idf_log10 <= 1')
Explanation: Find stopword-like skills by TF-IDF
End of explanation
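For reference (not part of the original notebook, and independent of the project-specific ja_helpers.toIDF helper), the quantity being thresholded is the usual inverse document frequency; a toy binary document-term matrix makes the scale concrete:
toy_doc_term = np.array([[1, 0, 1], [1, 1, 0], [1, 0, 0], [1, 1, 1]])  # 4 docs x 3 terms (made-up data)
doc_freq = (toy_doc_term > 0).sum(axis=0)
n_docs = float(toy_doc_term.shape[0])
idf_natural = np.log(n_docs / doc_freq)
print(idf_natural * np.log10(np.e))  # same natural-log to log10 conversion as above; ubiquitous terms get ~0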
idf.query('idf_log10 <= 1.35').to_csv(SKILL_DIR + 'stop_words.csv', index=False)
Explanation: Setting the IDF threshold to 1 did not catch stop words like 'com' and 'can', so I increase the IDF threshold.
End of explanation
df = pd.read_csv(SKILL_DIR + 'stop_words.csv')
stop_words = df['term']
skill_df = skill_df[- skill_df['skill'].isin(stop_words)]
print(skill_df.shape)
skill_df.to_csv(SKILL_DIR + 'skill_index.csv', index=False)
Explanation: Filter out stopword skills
End of explanation
job_posts = pd.read_csv(DATA_DIR + 'full_job_posts.csv')
job_posts.head(5)
by_job_id = job_posts[['job_id', 'job_posting_date_history']].groupby('job_id')
res = by_job_id.agg({'job_posting_date_history': lambda x:len(np.unique(x))})
res = res.rename(columns={'job_posting_date_history': 'n_post_date'}).reset_index()
res.sort_values('n_post_date', ascending=False, inplace=True)
res.head()
quantile(res['n_post_date'])
repost_jobs = res.query('n_post_date > 1')
print('# jobs reposted: %d' %repost_jobs.shape[0])
Explanation: Handle reposted jobs
There are jobs reposted several times as shown below. Thus, job ids in job_posts are not unique.
End of explanation
jobs = job_posts[['job_id', 'title', 'employer_name']].drop_duplicates()
print('# records in jobs bf merging: %d' %jobs.shape[0])
jobs = pd.merge(jobs, job_desc)
print('# records in jobs after merging: %d' %jobs.shape[0])
jobs_wo_title = job_posts[job_posts['title'].isnull()]
n_job_wo_title = jobs_wo_title.shape[0]
print('# job posts in WDA without title: %d' %n_job_wo_title)
jobs_wo_title
jobs.to_csv(DATA_DIR + 'jobs.csv', index=False)
jobs.head()
Explanation: Remove jobs without title
End of explanation
employers = pd.read_csv(DATA_DIR + 'employers.csv')
print employers.shape
employers.rename(columns={'company_registration_number_uen_ep': 'employer_id', 'organisation_name_ep': 'employer_name',
'ssic_group_ep': 'industry'}, inplace=True)
# Standardize employer names by uppercase (problem detected below)
employers['employer_name'] = map(str.upper, employers['employer_name'])
employers = employers.drop_duplicates()
employers.shape
# Handle the problem with PRIORITY CONSULTANTS (detected below)
employers.query('employer_name == "PRIORITY CONSULTANTS"')
employers = employers.drop(10278)
employers.query('employer_name == "PRIORITY CONSULTANTS"')
employers.to_csv(DATA_DIR + 'employers.csv', index=False)
Explanation: Clean employer data
End of explanation
posts = pd.read_csv(DATA_DIR + 'full_job_posts.csv')
posts.head()
df = mergeKeepLeftIndex(doc_index, posts[['job_id', 'employer_id']])
df = df.drop_duplicates()
df.shape
df = mergeKeepLeftIndex(df, employers[['employer_id', 'employer_name', 'industry']])
df = df.drop_duplicates()
df.shape[0]
df.to_csv(SKILL_DIR + 'doc_index.csv', index=False)
Explanation: Merge doc_index, posts and employers to get industry info
Note: need to maintain the index in doc_index as this index is required to retrieve the correct topic distribution for each document from the matrix doc_topic_distr.
End of explanation
# First, verify duplication exists
print len(df.index)
print len(df.index.unique())
# Then detect them
import collections
print [(item, count) for item, count in collections.Counter(df.index).items() if count > 1]
df.iloc[25569:25571, :]
Explanation: Weird duplications in result of the first merge
The duplications were then detected as below:
End of explanation
print [(item, count) for item, count in collections.Counter(tmp.index).items() if count > 1]
tmp.iloc[29403:29405, :]
Explanation: The problem is due to upper vs. lower case in employer names! That's why we need to standardize them.
Lesson learnt: Watch out for case-sensitivity problems in data.
After handling this, we repeat the above process and check for duplications again.
End of explanation |
3,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Solve each exercise in a separate notebook!
The name of the solution notebook must contain the number of the exercise!
Put the solutions in the MEGOLDASOK folder!<br> Only exercises placed in the MEGOLDASOK folder will be graded!
The solution must contain the text of the exercise in the first markdown cell of the solution notebook!
Explain with comments and markdown cells what each piece of code is doing!<br> Exercises submitted without explanation only count as half an exercise!
01-for
Decide for the three arrays below whether they form part of the Fibonacci sequence! The first two elements of each array are guaranteed to be consecutive members of the Fibonacci sequence!
- Write a code snippet that decides the question using for loop(s).
- Also describe your findings in words in a markdown cell.
- Which list is part of the Fibonacci sequence?
- If one of them is not, also discuss why it is not!
Step1: 02-if
Write a function that, based on the given values of the input variables nap (day), ora (hour) and fiulany (boy/girl), decides what the person in question is doing at that time. Decide the possible activities based on the following
Step2: 03-Geometric sequence
Write a function that builds a geometric sequence of length N from a starting value, a common ratio (quotient) and an integer N.
- Write a docstring!
- The function should return a list!
Step3: 04-Telephone exchange
Write a function that processes dictionaries containing names and telephone numbers!
- Use two input parameters. The first is a number, the second is a dictionary (dict).
- The function should print the names of the people who live in the area whose dialing code is given (the first three digits)!
- The return value of the function should be how many people live in the given area.
Here is an example database
Step4: 05-Variable number of arguments-I
Modify the geometric sequence function written in the third exercise so that
Step5: 07-Keyword function with a variable number of arguments ☠
Write a function that, for a given real value x, evaluates an arbitrary polynomial function or its reciprocal!
The polynomial coefficients are passed in a list of arbitrary length named args. If the function receives a third argument in the form of a keyword list, check what the 'fajta' (kind) keyword contains.
If the keyword is 'reciprok' (reciprocal), compute the reciprocal of the polynomial! Otherwise, return the value of the polynomial! | Python Code:
a=[12586269025, 20365011074, 32951280099, 53316291173, 86267571272, 139583862445, 225851433717,365435296162, 591286729879,
956722026041, 1548008755920, 2504730781961, 4052739537881, 6557470319842, 10610209857723, 17167680177565, 27777890035288,
44945570212853, 72723460248141, 117669030460994]
b=[832040, 1346269, 2175309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986]
c=[267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073, 4807526976,7778742049,
12586269025, 20365011074, 32951280099, 53316291173, 86267571272]
Explanation: Exercises
Solve each exercise in a separate notebook!
The name of the solution notebook must contain the number of the exercise!
Put the solutions in the MEGOLDASOK folder!<br> Only exercises placed in the MEGOLDASOK folder will be graded!
The solution must contain the text of the exercise in the first markdown cell of the solution notebook!
Explain with comments and markdown cells what each piece of code is doing!<br> Exercises submitted without explanation only count as half an exercise!
01-for
Decide for the three arrays below whether they form part of the Fibonacci sequence! The first two elements of each array are guaranteed to be consecutive members of the Fibonacci sequence!
- Write a code snippet that decides the question using for loop(s).
- Also describe your findings in words in a markdown cell.
- Which list is part of the Fibonacci sequence?
- If one of them is not, also discuss why it is not!
End of explanation
def kiholmit(nap,ora,fiulany):
"..." # ide jön a docstring
#
# ide jön a varázslat..
#
return # ide jön a visszatérési érték
Explanation: 02-if
Write a function that, based on the given values of the input variables nap (day), ora (hour) and fiulany (boy/girl), decides what the person in question is doing at that time. Decide the possible activities based on the following:
Both the boys and the girls study on weekday mornings.
The girls drink tea between 2 and 4 in the afternoon, otherwise they play with dolls.
The boys play football between 12 and 4, and play marbles from 4 on.
On weekends everyone goes on a trip. The boys go to the mountains and the girls to the sea on Saturday, and the other way around on Sunday.
Every day everyone goes to sleep at 8 and gets up at 8 in the morning.
The function should return a string whose possible values, according to the criteria above, are: 'tanul','teázik','babázik','focizik','golyózik','tengernél kirándul','hegyekben kirándul','alszik'
The possible values of the three input variables are the following:
nap : 'hétfő','kedd','szerda','csütörtök','péntek','szombat','vasárnap'
ora : an integer between 0 and 24
fiulany: 'fiú','lány'
End of explanation
def mertani(x0,q,N):
"..." # ide jön a docstring
#
# ide jön a varázslat..
#
Explanation: 03-Geometric sequence
Write a function that builds a geometric sequence of length N from a starting value, a common ratio (quotient) and an integer N.
- Write a docstring!
- The function should return a list!
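A quick illustrative check (not part of the original exercise sheet): a correct implementation is expected to behave like this.
# mertani(3, 2, 4) should return [3, 6, 12, 24]
# mertani(1, 0.5, 3) should return [1, 0.5, 0.25]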
End of explanation
adatok={'Alonzo Hinton': '(855) 278-2590',
'Cleo Hennings': '(844) 832-0585',
'Daine Ventura': '(833) 832-5081',
'Esther Leeson': '(855) 485-0624',
'Gene Connell': '(811) 973-2926',
'Lashaun Bottorff': '(822) 687-1735',
'Marx Hermann': '(844) 164-8116',
'Nicky Duprey': '(811) 032-6328',
'Piper Subia': '(844) 373-4228',
'Zackary Palomares': '(822) 647-3686'}
def telefon_kozpont(korzet,adatok):
"Ha megadod a körzetszámot (korzet) akkor kiírom ki lakik ott."
#
#ide jön a varázslat...
#
return # ide jön a visszatérési érték
Explanation: 04-Telephone exchange
Write a function that processes dictionaries containing names and telephone numbers!
- Use two input parameters. The first is a number, the second is a dictionary (dict).
- The function should print the names of the people who live in the area whose dialing code is given (the first three digits)!
- The return value of the function should be how many people live in the given area.
Here is an example database:
End of explanation
def poly(x,*a):
"Polinom függvény f(x)=\sum_i a_i x^i" #Ez csak a docstring
#
# ide jön a varázslat..
#
return # ide jön a visszatérési érték
Explanation: 05-Variable number of arguments-I
Modify the geometric sequence function written in the third exercise so that:
- if it receives one input value, it treats it as the starting value, the common ratio is 0.5 and N is 10.
- if there are two input values, the first is the starting value, the second is the common ratio, and N is 10
- if all three parameters are given, it behaves the same way as in the previous exercise.
06-Variable number of arguments-II ☠
Write a function that evaluates a polynomial of arbitrary degree at a given point x!
Determine the degree and the coefficients of the polynomial from the variable-length argument a! Use the len() function, which can be applied to lists!
End of explanation
def fuggveny(x,*args,**kwargs):
"Ha a kwargs nem rendelkezik másképp akkor kiértékelek egy polinomot"
#
#ide jön a varázslat
#
if kwargs['fajta']=='inverz':
#
#
else:
#
#
#
return #ide jön a visszatérési érték..
Explanation: 07-Keyword function with a variable number of arguments ☠
Write a function that, for a given real value x, evaluates an arbitrary polynomial function or its reciprocal!
The polynomial coefficients are passed in a list of arbitrary length named args. If the function receives a third argument in the form of a keyword list, check what the 'fajta' (kind) keyword contains.
If the keyword is 'reciprok' (reciprocal), compute the reciprocal of the polynomial! Otherwise, return the value of the polynomial!
End of explanation |
3,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Main Workflow
Step1: Load old colab notebook names
Step2: Parse script names from colab notebooks
Step3: Process the code
Ways to import modules in python
* import foo
* import foo as bar
* import foo.bar
* import foo.bar as bar
* from foo import bar
* from foo import *
* from foo.bar import baz
* from foo.bar import baz as qux
Step4: Corrected scripts
Step5: Convert
Step6: Save metadata
Step7: Appendix
Chapter wise figure number map with scripts | Python Code:
from time import time
init = time()
import re
import os
import sys
import json
import yaml
from functools import reduce
from collections import ChainMap
import subprocess
import pandas as pd
from glob import glob
import nbformat
import jax
Explanation: Main Workflow
End of explanation
old_nb_files = glob("../../pml-book/pml1/figure_notebooks/*")
old_nb_files[:2]
Explanation: Load old colab notebook names
End of explanation
new_nb_path = "../notebooks/book1/"
scripts_path = "../scripts/"
def get_fig_wise_scripts(cells):
prev_cell, cell = cells
scripts = re.findall("\[(\S*?\.py)\]\(http", cell["source"])
if scripts:
fig_num = re.findall("## Figure (.*?):", prev_cell["source"])[0]
fig_num = ".".join([fig_num.split(".")[0].zfill(2), fig_num.split(".")[1].zfill(2)])
return {fig_num: scripts}
def process_notebook(file_name):
chap_num, chap_name = file_name.split("/")[-1].split(".")[0].split("_", 1)
chap_num = chap_num.replace("chapter", "").zfill(2)
chap_name = chap_name.replace("_figures", "")
nb = nbformat.read(file_name, as_version=4)
scripts = map(get_fig_wise_scripts, zip(nb["cells"], nb["cells"][1:]))
scripts = filter(None, scripts)
# https://stackoverflow.com/a/15714097
scripts = reduce(lambda x, y: x.update(y) or x, scripts, {})
return {f"{chap_num}_{chap_name}": scripts}
master_metadata = map(process_notebook, old_nb_files)
master_metadata = reduce(lambda x, y: x.update(y) or x, master_metadata, {})
scripts = list(set(jax.tree_leaves(master_metadata)))
print(f"Found {len(set(scripts))} unique scripts")
# Check appendix to see full output mapping
Explanation: Parse script names from colab notebooks
End of explanation
def get_module(line):
line = line.rstrip()
import_kw = None
if line.lstrip().startswith("import "):
import_kw = "import "
elif line.lstrip().startswith("from "):
import_kw = "from "
if import_kw:
module = line.lstrip()[len(import_kw) :].split(" ")[0].split(".")[0]
return module, import_kw
return (None, None)
def get_modules_from_script(file_name):
try:
with open(os.path.join(scripts_path, file_name)) as f:
code = f.read()
codelines = code.split("\n")
modules = set(filter(None, map(lambda x: get_module(x)[0], codelines)))
return modules
except FileNotFoundError:
print(f"{file_name} not found")
INBUILT_MODULES = [
"__future__",
"collections",
"functools",
"io",
"itertools",
"math",
"os",
"pathlib",
"pprint",
"random",
"sys",
"time",
"timeit",
"warnings",
"mpl_toolkits",
]
REMOVE_MODULES = ["superimport"]
SCRIPT_MODULES = [
"rvm_regressor",
"gmm_lib",
"rvm_classifier",
"gauss_utils",
"prefit_voting_classifier",
"mix_bernoulli_lib",
"fisher_lda_fit",
]
TRANSFORM_MODULES = {"PIL": "pillow", "tensorflow_probability": "tensorflow-probability", "sklearn": "scikit-learn"}
with open("../requirements.txt") as f:
REQ_MODULES = f.read().strip().split("\n")
# TODO: Replace import pyprobml_utils with probml_utils
Explanation: Process the code
Ways to import modules in python
* import foo
* import foo as bar
* import foo.bar
* import foo.bar as bar
* from foo import bar
* from foo import *
* from foo.bar import baz
* from foo.bar import baz as qux
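As a quick usage illustration (not in the original notebook), the get_module helper defined above maps each of these forms to its top-level module:
print(get_module("import numpy as np"))       # ('numpy', 'import ')
print(get_module("from foo.bar import baz"))  # ('foo', 'from ')
print(get_module("x = 1"))                    # (None, None)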
End of explanation
all_modules = reduce(
lambda x, y: x.union(y) or x, filter(None, map(get_modules_from_script, jax.tree_leaves(master_metadata)))
)
check_modules = all_modules - set(INBUILT_MODULES) - set(SCRIPT_MODULES) - set(REQ_MODULES) - set(REMOVE_MODULES)
for module in check_modules:
try:
if module in TRANSFORM_MODULES:
module_install = TRANSFORM_MODULES[module]
else:
module_install = module
exec(f"import {module}")
except Exception as e:
print(e)
print(module, "failed")
def get_white_space(line):
space = 0
while line[0] == " ":
line = line[1:]
space += 1
return space * " "
def convert_py_to_ipynb(file_name, chapter, fig_num, prev=""):
chap_num, _ = chapter.split("_", 1)
current_modules = set()
new_lines = []
notebook = nbformat.v4.new_notebook()
with open(os.path.join(scripts_path, file_name)) as f:
code = f.read().strip()
codelines = code.split("\n")
for line in codelines:
# Ignore superimport
if line.strip().startswith("import superimport"):
continue
# consistently use savefig only
line = line.replace("save_fig", "savefig")
# change folder path
line = line.replace("../figures", "figures")
# Change pyprobml_utils to probml_utils
if "pyprobml_utils" in line:
line = line.replace("pyprobml_utils", "probml_utils")
current_modules.add("probml_utils")
# Check if the line is an import command
module, import_kw = get_module(line)
if module:
if module in SCRIPT_MODULES:
if import_kw == "import ":
if " as " in line:
line = line.replace(f"{module}", f"probml_utils.{module}", 1)
else:
line = line.replace(f"{module}", f"probml_utils.{module} as {module}", 1)
elif import_kw == "from ":
line = line.replace(f"{module}", f"probml_utils.{module}", 1)
else:
raise NameError()
elif module not in INBUILT_MODULES + REQ_MODULES + list(current_modules):
current_modules.add(module)
module_install = TRANSFORM_MODULES[module] if module in TRANSFORM_MODULES else module
space = get_white_space(line)
line = f"{space}try:\n {space}{line}\n{space}except ModuleNotFoundError:\n {space}%pip install {module_install}\n {space}{line}"
new_lines.append(line)
new_code = "\n".join(new_lines) + "\n"
if len(prev) == 0:
notebook["cells"].append(nbformat.v4.new_code_cell(new_code))
else:
notebook["cells"].append(nbformat.v4.new_markdown_cell(prev))
save_path = f"../notebooks/book1/{chap_num}"
if not os.path.exists(save_path):
os.makedirs(save_path)
nbformat.write(notebook, os.path.join(save_path, f"{file_name.replace('.py', '.ipynb')}"))
print(f"{file_name} saved")
Explanation: Corrected scripts:
Figure 2.5: typo_fix: changed anscobmes_quartet.py to anscombes_quartet.py
Figure 3.13: name_change: changed mix_ber_em_mnist.py to mix_bernoulli_em_mnist.py
Figure 4.17: missing: gaussInferParamsMean2d.py is not present in scripts folder (changed to gauss_infer_2d.py)
Figure 9.5: name_change: changed fisher_vowel_demo.py to fisher_discrim_vowel.py
End of explanation
global_store = []
global_chap = []
repo_path = "https://github.com/probml/pyprobml/tree/master/notebooks/book1"
for chapter in sorted(master_metadata):
local_store = []
for fig_num, script_names in master_metadata[chapter].items():
for script_name in script_names:
print(f"Processing: chapter {chapter}, figure {fig_num}, script_name {script_name}")
prev = ""
if script_name in global_store:
if script_name not in local_store:
idx = global_store.index(script_name)
chap_num = global_chap[idx]
prev = f"Source of this notebook is here: {repo_path}/{chap_num}/{script_name.replace('.py', '.ipynb')}"
print("##### PREV triggered. duplicate of", chap_num, script_name)
else:
global_store.append(script_name)
global_chap.append(chapter.split("_", 1)[0])
local_store.append(script_name)
convert_py_to_ipynb(script_name, chapter, fig_num, prev)
print("Total notebooks:", len(glob("../notebooks/book1/*/*.ipynb")))
Explanation: Convert
End of explanation
pd.to_pickle(master_metadata, "metadata_book1.pkl")
print("Everything is done in", time() - init, "seconds")
Explanation: Save metadata
End of explanation
def print_names(key):
print(f"Chapter_{key}")
print(yaml.dump(master_metadata[key]))
list(map(print_names, sorted(master_metadata)));
Explanation: Appendix
Chapter wise figure number map with scripts
End of explanation |
3,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook contains all of the code from the corresponding post on the One Codex Blog. These snippets are exactly what are in the blog post, and let you perfectly reproduce those figures.
This is meant to be a starting off point for you to get started analyzing your own samples. You can copy this notebook straight into your account using the button in the header. To "run" or execute a cell, just hit Shift + Enter. A few other resources you may find useful include
Step1: Question #1
Step2: Question #2
Step3: Question #3
Step4: Question #4
Step5: Question #5
Step6: Question #6
Step7: Question #7 | Python Code:
from onecodex import Api
ocx = Api()
project = ocx.Projects.get("d53ad03b010542e3") # get DIABIMMUNE project by ID
samples = ocx.Samples.where(project=project.id, public=True, limit=50)
samples.metadata[[
"gender",
"host_age",
"geo_loc_name",
"totalige",
"eggs",
"vegetables",
"milk",
"wheat",
"rice",
]]
Explanation: Introduction
This notebook contains all of the code from the corresponding post on the One Codex Blog. These snippets are exactly what are in the blog post, and let you perfectly reproduce those figures.
This is meant to be a starting off point for you to get started analyzing your own samples. You can copy this notebook straight into your account using the button in the header. To "run" or execute a cell, just hit Shift + Enter. A few other resources you may find useful include: notes on getting started with our One Codex library; the full documentation on our API (more technical); a cheat sheet on getting started with Pandas, a Python library for data manipulation; and reading a few of our blog posts (where we plan to have nice demos with these notebooks). As always, also feel free to send us questions or suggestions by clicking the chat icon in the bottom right!
Now we're going to dive right in and start crunching some numbers!
Fetching data
To get started, we create an instance of our API, grab the DIABIMMUNE project, and download 500 samples from the cohort.
End of explanation
observed_taxa = samples.plot_metadata(vaxis="observed_taxa", haxis="geo_loc_name", return_chart=True)
simpson = samples.plot_metadata(vaxis="simpson", haxis="geo_loc_name", return_chart=True)
shannon = samples.plot_metadata(vaxis="shannon", haxis="geo_loc_name", return_chart=True)
observed_taxa | simpson | shannon
from onecodex.notebooks.report import *
ref_text = 'Roo, et al. "How to Python." Nature, 2019.'
legend(f"Alpha diversity by location of birth{reference(text=ref_text, label='roo1')}")
Explanation: Question #1: How does alpha diversity vary by sample group?
Here, we display observed taxa, Simpson’s Index, and Shannon Entropy side-by-side, grouped by the region of birth. Each group includes samples taken across the entire three-year longitudinal study.
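For intuition (not part of the original notebook), these alpha-diversity metrics can be computed by hand for a toy set of relative abundances; one common convention for Simpson's index is shown:
import numpy as np
p = np.array([0.5, 0.3, 0.2])             # made-up relative abundances for three taxa
observed_taxa_count = (p > 0).sum()       # 3
shannon_entropy = -(p * np.log(p)).sum()  # ~1.03
simpson_index = 1 - (p ** 2).sum()        # ~0.62 (one minus the sum of squared abundances)
print(observed_taxa_count, shannon_entropy, simpson_index)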
End of explanation
samples.plot_metadata(haxis="host_age", vaxis="Bacteroides", plot_type="scatter")
Explanation: Question #2: How does the microbiome change over time?
The plot_metadata function can
search through all taxa in your samples and pull out read counts or relative abundances.
End of explanation
# generate a dataframe containing relative abundances
df_rel = samples.to_df(rank="genus")
# fetch all samples for subject P014839
subject_metadata = samples.metadata.loc[samples.metadata["host_subject_id"] == "P014839"]
subject_df = df_rel.loc[subject_metadata.index]
# put them in order of sample date
subject_df = subject_df.loc[subject_metadata["host_age"].sort_values().index]
# you can access our library using the ocx accessor on pandas dataframes!
subject_df.ocx.plot_bargraph(
rank="genus",
label=lambda metadata: str(metadata["host_age"]),
title="Subject P014839 Over Time",
xlabel="Host Age at Sampling Time (days)",
ylabel="Relative Abundance",
legend="Genus",
)
Explanation: Question #3: How does an individual subject's gut change over time?
Here, we're going to drop into a dataframe, slice it to fetch all the data points from a single subject of the study, and generate a stacked bar plot. It's nice to see the expected high abundance of Bifidobacterium early in life, giving way to Bacteroides near age three!
End of explanation
df_rel[:30].ocx.plot_heatmap(legend="Relative Abundance", tooltip="geo_loc_name")
Explanation: Question #4: Heatmaps?!
End of explanation
# generate a dataframe containing read counts
df_abs = samples.to_df()
df_abs[:30].ocx.plot_distance(metric="weighted_unifrac")
Explanation: Question #5: How do samples cluster?
First up, we'll plot a heatmap of weighted UniFrac distance between the first 30 samples in the dataset. This requires unnormalized read counts, so we'll generate a new, unnormalized dataframe.
End of explanation
samples.plot_pca(color="geo_loc_name", size="Bifidobacterium", title="My PCoA Plot")
Explanation: Question #6: Can I do PCA?
End of explanation
samples.plot_mds(
metric="weighted_unifrac", method="pcoa", color="geo_loc_name", title="My PCoA Plot"
)
page_break()
bibliography()
Explanation: Question #7: Can I do something better than PCA?
End of explanation |
3,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian processes (GP) are a cornerstone of modern machine learning. They are often used for non-parametric regression and classification, and are extended from the theory behind Gaussian distributions and Gaussian mixture models (GMM), with strong and interesting theoretical ties to kernels and neural networks. While Gaussian mixture models are used to represent a distribution over values, Gaussian processes are a distribution over functions. This is easy enough to say, but what does it really mean? Let's take a look.
<!-- TEASER_END -->
Step1: Here we have a set of values, $X$, and another set of values $y$. The values of $X$ are related to $y$ by a function $f(x)$, which is described by the equation $y = sin(C * X) + \varepsilon$, where $\varepsilon$ is some noise (in this case Gaussian noise with a variance of .05) and $C$ is some constant (in this case, 20, increasing the frequency of the oscillations so things look nice).
This means that for any value $x$ we put into the function, we get out some new value $y$. If we did not know $f(x)$, and were only given $X$ and $y$, we would be very interested in learning $f(x)$ from the data. If we learned it perfectly, then we would be able to accurately predict any new $y$ given an input $x$.
This may not seem exciting, because this particular $f(x)$ is boring. But imagine our $f(x)$ is something complicated, like a price on the stock market, or energy demand, or the probability of being struck by lightning at a given location... it becomes a lot more interesting to learn $f(x)$ from data! This is the general motivation behind many machine learning tasks, but this definition of learning the "most likely generating function" has a special importance for the Gaussian process.
In the plot above, the blue values represent data that has been measured, while the red value indicates the true generating function. We can see that the red values are the mean of this particular function, while the errors around the red line (where the blue points fall) represents the covariance of this particular function.
Step2: Now imagine a case like the above, where the red line values are unknown. We have points $X$, and measurements from those points $y$. We can also look at the graph and approximate the red line from the previous graph running through the center of the blue points. If we do this procedure in a mathematical way, we are learning $f(x)$ from the data!
This is basically how estimating the mean function as a Gaussian process works - given a set of existing points, we have mathematical tools for estimating the mean and covariance function for this particular set of data. We are also able to use our prior information (things like
Step3: Looking at the above plot, it is easy to see that generating the "red line" like above would be much more difficult, even though the generating function $sin()$ is the same. In a sense, you could say that the distribution of possible functions to generate those $y$ values from $X$ is very wide, and it is hard to find the "best guess" for $f(x)$.
Well Isn't That Special
This is exactly what is meant by saying that Gaussian processes are distributions over functions. Like a regular Gaussian distribution (or multivariate Gaussian) which is fully defined by its mean and covariance, a Gaussian process is fully defined by its mean function $m(x)$ and covariance function $K(x, x')$.
This covariance function (also called a kernel or correlation function in a bunch of other literature!) gives the pairwise distance between all combinations of points. I will use the name covariance function from here on, but it is important to know that covariance function, correlation function, and kernel function are used semi-interchangeably in the existing papers and examples! My thought is that a covariance function uses a kernel function to compute the variance in some kernel space - so you will see a function name covariance that takes a kernel argument later in the code. A great link on this (courtesy of mcoursen) is here. Lets walk through a simple example, modified from Christopher Fonnesbeck's code for Bios366.
We will need to start with some "initial guess" for both the mean function and the covariance function. The simplest guess is 0 mean, with some covariance defined by taking our kernel function at $0$. Though there are many different kernel functions, the exponential kernel is usually one of the first to try. I will be covering kernels in more detail in both this post and a followup, but to keep things simple we will gloss over the details.
Ultimately, a kernel is simply a function that takes in two matrices (or vectors) and compares the distances between every sample in some space. In a linear space, this is as simple as np.dot(x, x.T) if x is a rows-as-samples matrix. The exponential kernel measures distances in a non-linear space, defined by a Gaussian transformation. This sounds pretty complicated, but thinking of these kernels as black box functions that return distances between samples is good enough to get through this post.
Our initial mean function is simply $0$, and the correlation function gives an initial condition by calculating covariance(kernel, 0, 0). Using this as a starting place, we can visualize our initial function estimate, given no information besides a choice for the kernel.
Step4: Now that we have initialized the GP, we want to estimate a new $y$ given a new input $x$. Without any prior knowledge our guess will not be very good, which is represented by the wide blue line across the plot (our confidence bounds). Luckily, we have a set of $x$ values that are paired with $y$ values , called our training set, which we can use to learn a possible model. To make these updates, we will need a new tool
Step5: We can see from the above plots that we have a pretty good idea of the values we would get out of the function given $x = 1$. It is less clear what values we would get for $x = 3$, and only gets worse as we travel off the plot.
Our expected value for the function is simply the mean we get out of the conditional, and the returned variance measures our uncertainty in the answer.
Step6: The numerical results above agree with our intuition looking at the final plot.
It is clear that adding more measured points in a region increases our ability to predict new values in that region - this is the heart of the Gaussian process. Given enough data points, it is possible to have strong prediction ability for many different functions.
We also have the ability to encode prior knowledge about the function generating the data using different kernel functions. There are many, many, many kernel functions which are used in machine learning, and I plan to further cover kernels in general in a follow-up post. Just know that the exponential kernel is a good default choice, though that kernel also has many parameters to tune! This gets into model selection or hyperparameter optimization which is also a topic for another day.
This is all great, but the code is kind of a mess. Let's clean up this code and make a simple, scikit-learn style regression estimator, saving classification for another day.
Step7: Classy
Now we have a proper scikit-learn style class, and a plot helper to visualize things easily. We can now test the $sin()$ function from the start of this blog post quite easily.
Step8: To Boldly Go...
Though the results are not perfect, the SimpleGaussianProcessRegressor has done a good job approximating the low noise $sin()$ function. It could probably get a better fit if we changed the kernel function, but that is a story for another time. What if we feed it the extremely noisy data? | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.RandomState(1999)
n_samples = 1000
X = rng.rand(n_samples)
y = np.sin(20 * X) + .05 * rng.randn(X.shape[0])
X_t = np.linspace(0, 1, 100)
y_t = np.sin(20 * X_t)
plt.scatter(X, y, color='steelblue', label='measured y')
plt.plot(X_t, y_t, linestyle='-', color='darkred', label='true y')
plt.title('Noisy Example Function')
plt.legend(loc='lower left')
Explanation: Gaussian processes (GP) are a cornerstone of modern machine learning. They are often used for non-parametric regression and classification, and are extended from the theory behind Gaussian distributions and Gaussian mixture models (GMM), with strong and interesting theoretical ties to kernels and neural networks. While Gaussian mixture models are used to represent a distribution over values, Gaussian processes are a distribution over functions. This is easy enough to say, but what does it really mean? Let's take a look.
<!-- TEASER_END -->
End of explanation
rng = np.random.RandomState(1999)
n_samples = 1000
X = rng.rand(n_samples)
y = np.sin(20 * X) + .05 * rng.randn(X.shape[0])
plt.scatter(X, y, color='steelblue')
plt.title('Noisy Data')
Explanation: Here we have a set of values, $X$, and another set of values $y$. The values of $X$ are related to $y$ by a function $f(x)$, which is described by the equation $y = sin(C * X) + \varepsilon$, where $\varepsilon$ is some noise (in this case Gaussian noise with a variance of .05) and $C$ is some constant (in this case, 20, increasing the frequency of the oscillations so things look nice).
This means that for any value $x$ we put into the function, we get out some new value $y$. If we did not know $f(x)$, and were only given $X$ and $y$, we would be very interested in learning $f(x)$ from the data. If we learned it perfectly, then we would be able to accurately predict any new $y$ given an input $x$.
This may not seem exciting, because this particular $f(x)$ is boring. But imagine our $f(x)$ is something complicated, like a price on the stock market, or energy demand, or the probability of being struck by lightning at a given location... it becomes a lot more interesting to learn $f(x)$ from data! This is the general motivation behind many machine learning tasks, but this definition of learning the "most likely generating function" has a special importance for the Gaussian process.
In the plot above, the blue values represent data that has been measured, while the red line indicates the true generating function. We can see that the red values are the mean of this particular function, while the errors around the red line (where the blue points fall) represent the covariance of this particular function.
End of explanation
rng = np.random.RandomState(1999)
n_samples = 1000
X = rng.rand(n_samples)
y = np.sin(20 * X) + .95 * rng.randn(n_samples)
plt.scatter(X, y, color='steelblue')
plt.title('Really Noisy Data')
Explanation: Now imagine a case like the above, where the red line values are unknown. We have points $X$, and measurements from those points $y$. We can also look at the graph and approximate the red line from the previous graph running through the center of the blue points. If we do this procedure in a mathematical way, we are learning $f(x)$ from the data!
This is basically how estimating the mean function of a Gaussian process works - given a set of existing points, we have mathematical tools for estimating the mean and covariance function for this particular set of data. We are also able to use our prior information (things like: this function repeats, $X$ values near each other generate $y$ values near each other, etc.) by picking certain formulas to use for the covariance function during the estimation process.
However, there is a problem - if our measurements are very noisy it may be very difficult (or impossible!) to figure out $f(x)$.
End of explanation
# from mrmartin.ner/?p=223
def exponential_kernel(x1, x2):
# Broadcasting tricks to get every pairwise distance.
return np.exp(-(x1[np.newaxis, :, :] - x2[:, np.newaxis, :])[:, :, 0] ** 2).T
# Covariance calculation for a given kernel
def covariance(kernel, x1, x2):
return kernel(x1, x2)
rng = np.random.RandomState(1999)
# Initial guess
kernel = exponential_kernel
init = np.zeros((1, 1))
sigma = covariance(kernel, init, init)
xpts = np.arange(-3, 3, step=0.01).reshape((-1, 1))
plt.errorbar(xpts.squeeze(), np.zeros(len(xpts)), yerr=sigma.squeeze(),
capsize=0, color='steelblue')
plt.ylim(-3, 3)
plt.title("Initial guess")
Explanation: Looking at the above plot, it is easy to see that generating the "red line" like above would be much more difficult, even though the generating function $sin()$ is the same. In a sense, you could say that the distribution of possible functions to generate those $y$ values from $X$ is very wide, and it is hard to find the "best guess" for $f(x)$.
Well Isn't That Special
This is exactly what is meant by Gaussian processes are distributions over functions. Like a regular Gaussian distribution (or multivariate Gaussian) which is fully defined by its mean and covariance, a Gaussian process is fully defined by its mean function $m(x)$ and covariance function $K(x, x')$.
This covariance function (also called a kernel or correlation function in a bunch of other literature!) gives the pairwise distance between all combinations of points. I will use the name covariance function from here on, but it is important to know that covariance function, correlation function, and kernel function are used semi-interchangeably in the existing papers and examples! My thought is that a covariance function uses a kernel function to compute the variance in some kernel space - so you will see a function name covariance that takes a kernel argument later in the code. A great link on this (courtesy of mcoursen) is here. Lets walk through a simple example, modified from Christopher Fonnesbeck's code for Bios366.
We will need to start with some "initial guess" for both the mean function and the covariance function. The simplest guess is 0 mean, with some covariance defined by taking our kernel function at $0$. Though there are many different kernel functions, the exponential kernel is usually one of the first to try. I will be covering kernels in more detail in both this post and a followup, but to keep things simple we will gloss over the details.
Ultimately, a kernel is simply a function that takes in two matrices (or vectors) and compares the distances between every sample in some space. In a linear space, this is as simple as np.dot(x, x.T) if x is a rows-as-samples matrix. The exponential kernel measures distances in a non-linear space, defined by a Gaussian transformation. This sounds pretty complicated, but thinking of these kernels as black box functions that return distances between samples is good enough to get through this post.
Our initial mean function is simply $0$, and the correlation function gives an initial condition by calculating covariance(kernel, 0, 0). Using this as a starting place, we can visualize our initial function estimate, given no information besides a choice for the kernel.
End of explanation
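# A quick sanity check of the idea above: a kernel just scores pairwise similarity between
# samples. A plain linear kernel is a dot product, while the exponential kernel squashes
# squared distances through a Gaussian, so identical points score 1.0 and far-apart points
# score nearly 0. This reuses the exponential_kernel defined earlier; the points are arbitrary.
pts = np.array([[0.0], [0.1], [3.0]])
print("linear kernel:\n", pts.dot(pts.T))
print("exponential kernel:\n", exponential_kernel(pts, pts))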
def conditional(x_new, x, y, kernel):
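    # x_new: points we want predictions for; x, y: observed training inputs and values.
    # Returns the conditional (posterior) mean and covariance of the function at x_new
    # given (x, y), assuming a zero prior mean as in the simplified formula discussed below.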
cov_xxn = covariance(kernel, x_new, x)
cov_x = covariance(kernel, x, x)
cov_xn = covariance(kernel, x_new, x_new)
mean = cov_xxn.dot(np.linalg.pinv(cov_x)).dot(y)
variance = cov_xn - cov_xxn.dot(np.linalg.pinv(cov_x)).dot(cov_xxn.T)
return mean, variance
# First point estimate
x_new = np.atleast_2d(1.)
# No conditional, this is the first value!
y_new = np.atleast_2d(0 + rng.randn())
x = x_new
y = y_new
# Plotting
y_pred, sigma_pred = conditional(xpts, x, y, kernel=kernel)
plt.errorbar(xpts.squeeze(), y_pred.squeeze(), yerr=np.diag(sigma_pred),
capsize=0, color='steelblue')
plt.plot(x, y, color='darkred', marker='o', linestyle='')
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.figure()
# Second point estimate
x_new = np.atleast_2d(-0.7)
mu, s = conditional(x_new, x, y, kernel=kernel)
y_new = np.atleast_2d(mu + np.diag(s)[:, np.newaxis] * rng.randn(*x_new.shape))
x = np.vstack((x, x_new))
y = np.vstack((y, y_new))
# Plotting
y_pred, sigma_pred = conditional(xpts, x, y, kernel=kernel)
plt.errorbar(xpts.squeeze(), y_pred.squeeze(), yerr=np.diag(sigma_pred),
capsize=0, color='steelblue')
plt.plot(x, y, color='darkred', marker='o', linestyle='')
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.figure()
# Multipoint estimate
x_new = rng.rand(3, 1)
mu, s = conditional(x_new, x, y, kernel=kernel)
y_new = mu + np.diag(s)[:, np.newaxis] * rng.randn(*x_new.shape)
x = np.vstack((x, x_new))
y = np.vstack((y, y_new))
# Plotting
y_pred, sigma_pred = conditional(xpts, x, y, kernel=kernel)
plt.errorbar(xpts.squeeze(), y_pred.squeeze(), yerr=np.diag(sigma_pred),
capsize=0, color='steelblue')
plt.plot(x, y, color='darkred', marker='o', linestyle='')
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.show()
Explanation: Now that we have initialized the GP, we want to estimate a new $y$ given a new input $x$. Without any prior knowledge our guess will not be very good, which is represented by the wide blue line across the plot (our confidence bounds). Luckily, we have a set of $x$ values that are paired with $y$ values, called our training set, which we can use to learn a possible model. To make these updates, we will need a new tool: the conditional distribution.
The conditional formula is fairly straightforward mathematically, and is seen in many other works. For a full derivation, see the slides here or the tutorial here. I will simply state the key mathematics, and show code to compute it.
Conditionals of My Parole
One of the key formulas for the Gaussian process is the conditional function for multivariate Gaussian distributions. This is quite a mouthful, but the idea boils down to "Given my old x, and the y values for those x, what do I expect a new y to be?".
If we have no data, we have no idea what y can be. With a lot of data in a given region, we start to have a pretty strong intuition about y when given an x.
x = 3, what is y?
I have no idea, and this is a terrible question
We are then given the following information:
x = 1, y = 2
x = 2, y = 4
x = 4, y = 8
x = 5, y = 10
Now, if asked again, what is your best guess for $y$?
x = 3, what is y?
My best guess would be $y = 6$
Technically, $y$ could be anything but judging by the past results, $y = 6$ seems to be a reasonable guess.
The mathematical formula for this conditional distribution, with some Gaussian assumptions (this is assumed to be a Gaussian process after all) is shown below.
$p(\hat{x}|x,y) = \mathcal{N}(\mu_\hat{x} + \Sigma_{x\hat{x}}\Sigma_{x}^{-1}(y - \mu_{y}),
\Sigma_\hat{x}-\Sigma_{x\hat{x}}\Sigma_x^{-1}\Sigma_{x\hat{x}}^T)$
The new input value is $\hat{x}$, with the previous x and y values being $x$ and $y$. Since we typically assume $\mu_x$ and $\mu_y$ are both $0$, this equation can be simplified.
$p(\hat{x}|x,y) = \mathcal{N}( \Sigma_{x\hat{x}}\Sigma_{x}^{-1}y,
\Sigma_\hat{x}-\Sigma_{x\hat{x}}\Sigma_x^{-1}\Sigma_{x\hat{x}}^T)$
The conditional function below is the coded representation of this. Let's use it to make some plots of Gaussian process learning.
End of explanation
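# The toy example above (x = 1, 2, 4, 5 with y = 2x) can be checked numerically: with a
# plain linear kernel the conditional mean at x = 3 lands on the intuitive answer of 6.
# This is only an illustration; the plots in this post all use the exponential kernel.
def linear_kernel(x1, x2):
    return x1.dot(x2.T)

x_toy = np.array([[1.], [2.], [4.], [5.]])
y_toy = 2 * x_toy
mean_toy, var_toy = conditional(np.array([[3.]]), x_toy, y_toy, kernel=linear_kernel)
print("Best guess for y at x = 3:", mean_toy.ravel()[0])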
mean, var = conditional(np.array([[1]]), x, y, kernel=kernel)
print("Expected value for x = %i, %.4f" % (1, mean))
print("Uncertainty %.4f" % var)
print()
mean, var = conditional(np.array([[3]]), x, y, kernel=kernel)
print("Expected value for x = %i, %.4f" % (3, mean))
print("Uncertainty %.4f" % var)
print()
mean, var = conditional(np.array([[1E6]]), x, y, kernel=kernel)
print("Expected value for x = %i, %.4f" % (1E6, mean))
print("Uncertainty %.4f" % var)
print()
Explanation: We can see from the above plots that we have a pretty good idea of the values we would get out of the function given $x = 1$. It is less clear what values we would get for $x = 3$, and only gets worse as we travel off the plot.
Our expected value for the function is simply the mean we get out of the conditional, and the returned variance measures our uncertainty in the answer.
End of explanation
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from scipy import linalg
from sklearn.utils import check_array
import matplotlib.pyplot as plt
def plot_gp_confidence(gp, show_gp_points=True, X_low=-1, X_high=1,
X_step=.01, xlim=None, ylim=None):
xpts = np.arange(X_low, X_high, step=X_step).reshape((-1, 1))
try:
y_pred = gp.predict(xpts)
mean = gp.predicted_mean_
var = gp.predicted_var_
if gp.predicted_mean_.shape[1] > 1:
raise ValueError("plot_gp_confidence only works for 1 dimensional Gaussian processes!")
rng = np.random.RandomState(1999)
y_new = mean + np.diag(var)[:, np.newaxis] * rng.randn(*xpts.shape)
except TypeError:
y_pred = xpts * 0
var = gp.predicted_var_ * np.ones((xpts.shape[0], xpts.shape[0]))
plt.errorbar(xpts.squeeze(), y_pred.squeeze(), yerr=np.diag(var),
capsize=0, color='steelblue')
if show_gp_points:
plt.plot(gp._X, gp._y, color='darkred', marker='o', linestyle='')
if xlim is not None:
plt.xlim(xlim)
if ylim is not None:
plt.ylim(ylim)
plt.show()
# from mrmartin.ner/?p=223
def exponential_kernel(x1, x2):
# Broadcasting tricks to get every pairwise distance.
return np.exp(-(x1[np.newaxis, :, :] - x2[:, np.newaxis, :])[:, :, 0] ** 2).T
class SimpleGaussianProcessRegressor(BaseEstimator, RegressorMixin):
def __init__(self, kernel_function, copy=True):
self.kernel_function = kernel_function
self.copy = copy
self.predicted_mean_ = 0
self.predicted_var_ = self._covariance(np.zeros((1, 1)), np.zeros((1, 1)))
self._X = None
self._y = None
def _covariance(self, x1, x2):
return self.kernel_function(x1, x2)
def fit(self, X, y):
self._X = None
self._y = None
return self.partial_fit(X, y)
def partial_fit(self, X, y):
X = check_array(X, copy=self.copy)
y = check_array(y, copy=self.copy)
if self._X is None:
self._X = X
self._y = y
else:
self._X = np.vstack((self._X, X))
            self._y = np.vstack((self._y, y))
        # returning self lets fit() and partial_fit() be chained, scikit-learn style
        return self
def predict(self, X, y=None):
X = check_array(X, copy=self.copy)
cov_xxn = self._covariance(X, self._X)
cov_x = self._covariance(self._X, self._X)
cov_xn = self._covariance(X, X)
cov_x_inv = linalg.pinv(cov_x)
mean = cov_xxn.dot(cov_x_inv).dot(self._y)
var = cov_xn - cov_xxn.dot(cov_x_inv).dot(cov_xxn.T)
self.predicted_mean_ = mean
self.predicted_var_ = var
return mean
Explanation: The numerical results above agree with our intuition looking at the final plot.
It is clear that adding more measured points in a region increases our ability to predict new values in that region - this is the heart of the Gaussian process. Given enough data points, it is possible to have strong prediction ability for many different functions.
We also have the ability to encode prior knowledge about the function generating the data using different kernel functions. There are many, many, many kernel functions which are used in machine learning, and I plan to further cover kernels in general in a follow-up post. Just know that the exponential kernel is a good default choice, though that kernel also has many parameters to tune! This gets into model selection or hyperparameter optimization which is also a topic for another day.
This is all great, but the code is kind of a mess. Let's clean up this code and make a simple, scikit-learn style regression estimator, saving classification for another day.
End of explanation
gp = SimpleGaussianProcessRegressor(exponential_kernel)
plt.title('Initial GP Confidence')
plot_gp_confidence(gp, X_low=-3, X_high=3, X_step=.01,
xlim=(-3, 3), ylim=(-3, 3))
rng = np.random.RandomState(1999)
n_samples = 200
X = rng.rand(n_samples, 1)
y = np.sin(20 * X) + .05 * rng.randn(X.shape[0], 1)
plt.title('Noisy Data')
plt.scatter(X, y, color='steelblue')
plt.show()
gp.fit(X, y)
X_new = rng.rand(5, 1)
gp.predict(X_new)
plt.title('Final GP Confidence')
plot_gp_confidence(gp, show_gp_points=False, X_low=0, X_high=1, X_step=.01)
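# The plot gives a qualitative picture; a rough quantitative check is to compare the
# fitted GP's predictions against the known generating curve y = sin(20x) on a grid.
X_check = np.linspace(0, 1, 50).reshape((-1, 1))
y_check = gp.predict(X_check)
mse = np.mean((y_check.ravel() - np.sin(20 * X_check).ravel()) ** 2)
print("Mean squared error against the true sin curve: %.4f" % mse)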
Explanation: Classy
Now we have a proper scikit-learn style class, and a plot helper to visualize things easily. We can now test the $sin()$ function from the start of this blog post quite easily.
End of explanation
gp = SimpleGaussianProcessRegressor(exponential_kernel)
plt.title('Initial GP Confidence')
plot_gp_confidence(gp, X_low=-3, X_high=3, X_step=.01,
xlim=(-3, 3), ylim=(-3, 3))
rng = np.random.RandomState(1999)
n_samples = 200
X = rng.rand(n_samples, 1)
y = np.sin(20 * X) + .95 * rng.randn(X.shape[0], 1)
plt.title('Noisy Data')
plt.scatter(X, y, color='steelblue')
plt.show()
gp.fit(X, y)
X_new = rng.rand(5, 1)
gp.predict(X_new)
plt.title('Final GP Confidence')
plot_gp_confidence(gp, show_gp_points=False, X_low=0, X_high=1, X_step=.01)
Explanation: To Boldly Go...
Though the results are not perfect, the SimpleGaussianProcessRegressor has done a good job approximating the low noise $sin()$ function. It could probably get a better fit if we changed the kernel function, but that is a story for another time. What if we feed it the extremely noisy data?
End of explanation |
3,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize source leakage among labels using a circular graph
This example computes all-to-all pairwise leakage among 68 regions in
source space based on MNE inverse solutions and a FreeSurfer cortical
parcellation. Label-to-label leakage is estimated as the correlation among the
labels' point-spread functions (PSFs). It is visualized using a circular graph
which is ordered based on the locations of the regions in the axial plane.
Step1: Load forward solution and inverse operator
We need a matching forward solution and inverse operator to compute
resolution matrices for different methods.
Step2: Read and organise labels for cortical parcellation
Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
Step3: Compute point-spread function summaries (PCA) for all labels
We summarise the PSFs per label by their first five principal components, and
use the first component to evaluate label-to-label leakage below.
Step4: We can show the explained variances of principal components per label. Note
how they differ across labels, most likely due to their varying spatial
extent.
Step5: The output shows the summed variance explained by the first five principal
components as well as the explained variances of the individual components.
Evaluate leakage based on label-to-label PSF correlations
Note that correlations ignore the overall amplitude of PSFs, i.e. they do
not show which region will potentially be the bigger "leaker".
Step6: Most leakage occurs for neighbouring regions, but also for deeper regions
across hemispheres.
Save the figure (optional)
Matplotlib controls figure facecolor separately for interactive display
versus for saved figures. Thus when saving you must specify facecolor,
else your labels, title, etc will not be visible
Step7: Point-spread function for the lateral occipital label in the left hemisphere
Step8: and in the right hemisphere. | Python Code:
# Authors: Olaf Hauk <[email protected]>
# Martin Luessi <[email protected]>
# Alexandre Gramfort <[email protected]>
# Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import (read_inverse_operator,
make_inverse_resolution_matrix,
get_point_spread)
from mne.viz import circular_layout, plot_connectivity_circle
print(__doc__)
Explanation: Visualize source leakage among labels using a circular graph
This example computes all-to-all pairwise leakage among 68 regions in
source space based on MNE inverse solutions and a FreeSurfer cortical
parcellation. Label-to-label leakage is estimated as the correlation among the
labels' point-spread functions (PSFs). It is visualized using a circular graph
which is ordered based on the locations of the regions in the axial plane.
End of explanation
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif'
forward = mne.read_forward_solution(fname_fwd)
# Convert forward solution to fixed source orientations
mne.convert_forward_solution(
forward, surf_ori=True, force_fixed=True, copy=False)
inverse_operator = read_inverse_operator(fname_inv)
# Compute resolution matrices for MNE
rm_mne = make_inverse_resolution_matrix(forward, inverse_operator,
method='MNE', lambda2=1. / 3.**2)
src = inverse_operator['src']
del forward, inverse_operator # save memory
Explanation: Load forward solution and inverse operator
We need a matching forward solution and inverse operator to compute
resolution matrices for different methods.
End of explanation
labels = mne.read_labels_from_annot('sample', parc='aparc',
subjects_dir=subjects_dir)
n_labels = len(labels)
label_colors = [label.color for label in labels]
# First, we reorder the labels based on their location in the left hemi
label_names = [label.name for label in labels]
lh_labels = [name for name in label_names if name.endswith('lh')]
# Get the y-location of the label
label_ypos = list()
for name in lh_labels:
idx = label_names.index(name)
ypos = np.mean(labels[idx].pos[:, 1])
label_ypos.append(ypos)
# Reorder the labels based on their location
lh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))]
# For the right hemi
rh_labels = [label[:-2] + 'rh' for label in lh_labels]
Explanation: Read and organise labels for cortical parcellation
Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
End of explanation
# Compute first PCA component across PSFs within labels.
# Note the differences in explained variance, probably due to different
# spatial extents of labels.
n_comp = 5
stcs_psf_mne, pca_vars_mne = get_point_spread(
rm_mne, src, labels, mode='pca', n_comp=n_comp, norm=None,
return_pca_vars=True)
n_verts = rm_mne.shape[0]
del rm_mne
Explanation: Compute point-spread function summaries (PCA) for all labels
We summarise the PSFs per label by their first five principal components, and
use the first component to evaluate label-to-label leakage below.
End of explanation
with np.printoptions(precision=1):
for [name, var] in zip(label_names, pca_vars_mne):
print(f'{name}: {var.sum():.1f}% {var}')
Explanation: We can show the explained variances of principal components per label. Note
how they differ across labels, most likely due to their varying spatial
extent.
End of explanation
# get PSFs from Source Estimate objects into matrix
psfs_mat = np.zeros([n_labels, n_verts])
# Leakage matrix for MNE, get first principal component per label
for [i, s] in enumerate(stcs_psf_mne):
psfs_mat[i, :] = s.data[:, 0]
# Compute label-to-label leakage as Pearson correlation of PSFs
# Sign of correlation is arbitrary, so take absolute values
leakage_mne = np.abs(np.corrcoef(psfs_mat))
# Save the plot order and create a circular layout
node_order = lh_labels[::-1] + rh_labels # mirror label order across hemis
node_angles = circular_layout(label_names, node_order, start_pos=90,
group_boundaries=[0, len(label_names) / 2])
# Plot the graph using node colors from the FreeSurfer parcellation. We only
# show the 200 strongest connections.
fig = plt.figure(num=None, figsize=(8, 8), facecolor='black')
plot_connectivity_circle(leakage_mne, label_names, n_lines=200,
node_angles=node_angles, node_colors=label_colors,
title='MNE Leakage', fig=fig)
Explanation: The output shows the summed variance explained by the first five principal
components as well as the explained variances of the individual components.
Evaluate leakage based on label-to-label PSF correlations
Note that correlations ignore the overall amplitude of PSFs, i.e. they do
not show which region will potentially be the bigger "leaker".
End of explanation
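# The remark above (correlations ignore overall PSF amplitude) can be illustrated with a
# toy check: scaling a signal leaves its correlation with another signal unchanged, so a
# region whose PSF is simply larger does not show up as a stronger "leaker" in this metric.
toy_a = np.random.RandomState(0).randn(100)
toy_b = 0.3 * toy_a + 0.1 * np.random.RandomState(1).randn(100)
print(np.corrcoef(toy_a, toy_b)[0, 1])          # correlation r
print(np.corrcoef(10.0 * toy_a, toy_b)[0, 1])   # same r despite the 10x amplitude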
# left and right lateral occipital
idx = [22, 23]
stc_lh = stcs_psf_mne[idx[0]]
stc_rh = stcs_psf_mne[idx[1]]
# Maximum for scaling across plots
max_val = np.max([stc_lh.data, stc_rh.data])
Explanation: Most leakage occurs for neighbouring regions, but also for deeper regions
across hemispheres.
Save the figure (optional)
Matplotlib controls figure facecolor separately for interactive display
versus for saved figures. Thus when saving you must specify facecolor,
else your labels, title, etc will not be visible::
>>> fname_fig = data_path + '/MEG/sample/plot_label_leakage.png'
>>> fig.savefig(fname_fig, facecolor='black')
Plot PSFs for individual labels
Let us confirm for left and right lateral occipital lobes that there is
indeed no leakage between them, as indicated by the correlation graph.
We can plot the summary PSFs for both labels to examine the spatial extent of
their leakage.
End of explanation
brain_lh = stc_lh.plot(subjects_dir=subjects_dir, subject='sample',
hemi='both', views='caudal',
clim=dict(kind='value',
pos_lims=(0, max_val / 2., max_val)))
brain_lh.add_text(0.1, 0.9, label_names[idx[0]], 'title', font_size=16)
Explanation: Point-spread function for the lateral occipital label in the left hemisphere
End of explanation
brain_rh = stc_rh.plot(subjects_dir=subjects_dir, subject='sample',
hemi='both', views='caudal',
clim=dict(kind='value',
pos_lims=(0, max_val / 2., max_val)))
brain_rh.add_text(0.1, 0.9, label_names[idx[1]], 'title', font_size=16)
Explanation: and in the right hemisphere.
End of explanation |
3,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VIX as a measure of Market Uncertainty
by Brandon Wang (bw1115)
Data Bootcamp Final Project (NYU Stern Spring 2017)
Abstract
The VIX index, calculated and published by the Chicago Board Options Exchange, is known to be a "fear gauge" of the stock market. Specifically designed to move in the opposite direction of the S&P, the volatility index seeks to somehow quantify the Street's anxiety and risk appetite. Also priced into the index are the expected price swings of the broader market, as the VIX's underlying are S&P options and futures.
Objective
This project aims to examine the relationship between the VIX index and several other popular instruments or financial metrics. While the market can be entirely a random-walk, market participants still create narratives to explain movements and trends. For investors, the VIX is an important gauge of the possibility of these narratives. As such, the assumption is that the VIX is a robust indicator of market trends.
Data Sources
This analysis will draw on 2 financial data sources for numerous datasets.
Quandl
CBOE Volatility Index (VIX)
S&P 500 Index
Bloomberg Terminal
S&P Foward Price-Earnings Ratio, 12 months trailing
Global Economic Policy Uncertainty Index
Federal Funds Futures, short-term rates of 30-days
Merrill Lynch Move Index, measuring bond volatility
JP Morgan Currency Volatility Index
S&P E-Mini Futures Bid-Ask Spread (ES1 Index)
Quandl is a financial data company that pools data from many sources into its API. There are also unconvential data sets for purchase, collected by independent companies involved in the market. Luckily, the Chicago Board Options Exchange uploads its data to Quandl for free.
Elsewhere, the data is not-so-public, requiring access to Bloomberg's suite of data. While Bloomberg has its own API for programming languages like Python, the terminal and an account have to be tied to the computer used. Thus, I took the less fancy approach of extracting the data via Bloomberg's excel add-on and storing it locally.
The Bloomberg excel spreadsheets are available here.
These two sources have an underappreciated advantage
Step1: The index shows the relative calm that the stock market has enjoyed, especially in the first few months of 2017. Just recently, the index has hit its lowest closing level since December of 1993. However, long troughs in VIX with long periods of low volatility is troubling to some investors. Blankfein, CEO of Goldman Sachs, has cautioned against the current norm of calmness and the potential hubris of thinking everything is under control.
While many investors use VIX as a metric in their bets, it is worth noting that depending on VIX as a measurement of "fear" can cause ripple effects if it is inaccurate. In late 2006 and early 2007, leading up to the large financial crisis, the VIX was also hovering at a low level, reflecting a period of calm much like the one we have today.
VIX Movement with S&P 500
Step2: S&P Valuations and VIX - a rare relationship
Step3: With the absence of sharp-moves in the VIX, the S&P 500 index has reached record highs. However, it is difficult to ignore the rarity of how low the VIX is, while stocks enjoy lofty valuations. The orange circles represent the data points for the last 30 trading sessions, nearing the highest P/E multiples for the lowest instances of volatility.
Outliers include the batch of high PE multiples nearing 25, which occurred at the height of the real-estate bubble. Instances with incredibly high volatility represent days with large swings in prices.
Step4: The density graph above better shows the rarity of the recent S&P valuations paried with the levels of VIX. More commonly, stocks are valued around the 17-18 mark, with a VIX level around the mid teens.
Investors can interpret this in two ways
Step5: The index for global political uncertainty has, to some degree, tracked the VIX and its yearly trends. However, starting from 2016, we observe a divergence, perhaps showing a decline in the VIX's ability to gauge political uncertainty.
Step6: As orthodoxy and populism extend from our own White House to politics across the world, the VIX remains suprisingly low for something representing uncertainty. President Trump's election has spurred a rally in financials, industrials, and the broader market, but struggles to codify his agenda in healthcare and tax reform. As investors pull back from their high expectations, VIX should have taken off. Despite the very volatility of the President himself, the VIX remains at its lowests.
Many investors explain away this divergence by citing strong U.S. corporate earnings, low unemployment, and rising inflation. Key macro indicators that are showing strength are essentially pushing away any concern for the policies of the current administration, or elsewhere in the world.
The Federal Reserve's suppression of VIX
Step7: Investors commonly use implied volatility as shown in the VIX to measure uncertainty about interest rates, and specifically in this case, the implied federal funds target rate. Typically, when the implied federal funds target rate is rising, signaling strong inflation and growth, the VIX remains at its lows.
In monetary policy, the Fed has, since 2008, kept rates low to encourage investment. However, its recent support of higher benchmark rates has increased the implied fed fund rate, as many Fed officials believe the U.S. economy is on a growth path despite signs of weakness in consumer spending and wage growth. That message has had the effect of subduing levels of uncertainty in VIX, towards the latter half of 2016 to today.
VIX beyond Equities
Step8: What is concerning is the inconsistency between the uncertainty we observe on social media or news sites and the low levels of uncertainty in recent months, expressed by the volatility indexes above. The Fed even took this into consideration in their meeting from April, expressing their confusion as to why implied volatility has reached decade-long lows, despite the inaction we see from policy makers on key legislation such as Trump's tax reform and infrastructure program.
VIX Reliability Concerns
Investors commonly debate over whether VIX is a proper metric for volatility. In this section, we'll examine one of the main concerns about VIX's reliability | Python Code:
# Setup
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import seaborn as sns # seaborn graphics module
import os # OS interface module
import quandl # financial data
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Seaborn version: ', sns.__version__)
print('quandl version: ', quandl.version.VERSION)
print('Today: ', dt.date.today())
# Time parameters used in analysis
start = dt.datetime(2005, 1,1)
end= dt.datetime(2017,5,11)
quandl.ApiConfig.api_key = "7W3a2MNz8r4uQebgVb5g"
vix = quandl.get("CBOE/VIX",start_date="2005-01-01",end_date="2017-12-09")
vix.info()
# cleaning dataset
vix = vix.drop(['VIX Open', 'VIX High', 'VIX Low'], axis=1)
vix.columns = ['Close']
vix.head()
# plotting dataframe
fig, ax = plt.subplots(figsize=(8,5))
sns.set_style('whitegrid')
vix['Close'].plot(color='orange')
fig.suptitle('CBOE Volatility Index (VIX)')
plt.show()
Explanation: VIX as a measure of Market Uncertainty
by Brandon Wang (bw1115)
Data Bootcamp Final Project (NYU Stern Spring 2017)
Abstract
The VIX index, calculated and published by the Chicago Board Options Exchange, is known to be a "fear gauge" of the stock market. Specifically designed to move in the opposite direction of the S&P, the volatility index seeks to somehow quantify the Street's anxiety and risk appetite. Also priced into the index are the expected price swings of the broader market, as the VIX's underlying are S&P options and futures.
Objective
This project aims to examine the relationship between the VIX index and several other popular instruments or financial metrics. While the market can be entirely a random-walk, market participants still create narratives to explain movements and trends. For investors, the VIX is an important gauge of the possibility of these narratives. As such, the assumption is that the VIX is a robust indicator of market trends.
Data Sources
This analysis will draw on 2 financial data sources for numerous datasets.
Quandl
CBOE Volatility Index (VIX)
S&P 500 Index
Bloomberg Terminal
S&P Forward Price-Earnings Ratio, 12 months trailing
Global Economic Policy Uncertainty Index
Federal Funds Futures, short-term rates of 30-days
Merrill Lynch Move Index, measuring bond volatility
JP Morgan Currency Volatility Index
S&P E-Mini Futures Bid-Ask Spread (ES1 Index)
Quandl is a financial data company that pools data from many sources into its API. There are also unconventional data sets for purchase, collected by independent companies involved in the market. Luckily, the Chicago Board Options Exchange uploads its data to Quandl for free.
Elsewhere, the data is not-so-public, requiring access to Bloomberg's suite of data. While Bloomberg has its own API for programming languages like Python, the terminal and an account have to be tied to the computer used. Thus, I took the less fancy approach of extracting the data via Bloomberg's excel add-on and storing it locally.
The Bloomberg excel spreadsheets are available here.
These two sources have an underappreciated advantage: they are neat and tailored for data analysis, without too many unnecessary parameters. This removes the trouble of having to create a datetime index and format individual values.
The Current State of VIX
End of explanation
sp500 = quandl.get("YAHOO/INDEX_GSPC",start_date="2005-01-03",end_date="2017-05-11")
sp500 = sp500.drop(['Open','High','Low','Volume','Adjusted Close'], axis=1)
# creating fig and ax, plotting objects
fig,ax1 = plt.subplots(figsize=(8,5))
sns.set_style('whitegrid')
ax2 = ax1.twinx()
a = ax1.plot(vix['Close'], color='orange', label='VIX')
b = ax2.plot(sp500['Close'], label='S&P 500')
# titling and formating
ax1.set_ylabel('VIX', color='orange')
ax2.set_ylabel('S&P 500', color='blue')
fig.suptitle('S&P gains as VIX remains subdued')
ax2.grid(False)
# adding lines on different axes into one legend
line = a + b
label = [l.get_label() for l in line]
ax1.legend(line, label, loc='upper left')
plt.show()
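# The usual inverse relationship between the VIX and the S&P 500 can be quantified directly:
# the correlation of their daily percentage changes should come out clearly negative
# (assuming both downloads above succeeded and the two series align on trading days).
daily_moves = pd.concat([vix['Close'].pct_change(), sp500['Close'].pct_change()],
                        axis=1, join='inner')
daily_moves.columns = ['VIX change', 'S&P 500 change']
print(daily_moves.corr())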
Explanation: The index shows the relative calm that the stock market has enjoyed, especially in the first few months of 2017. Just recently, the index hit its lowest closing level since December of 1993. However, long troughs in the VIX, with long periods of low volatility, are troubling to some investors. Blankfein, CEO of Goldman Sachs, has cautioned against the current norm of calmness and the potential hubris of thinking everything is under control.
While many investors use VIX as a metric in their bets, it is worth noting that depending on VIX as a measurement of "fear" can cause ripple effects if it is inaccurate. In late 2006 and early 2007, leading up to the large financial crisis, the VIX was also hovering at a low level, reflecting a period of calm much like the one we have today.
VIX Movement with S&P 500
End of explanation
# changing directory to where .csv file is downloaded
os.chdir('C:/Users/Brandon/Downloads')
sp_pe = pd.read_excel('SPX PE.xlsx')
# cleaning dataset
sp_pe.columns = sp_pe.iloc[0]
sp_pe = sp_pe.set_index(['Date'])
sp_pe = sp_pe[1:]
sp_pe = sp_pe.rename(columns={'PE_RATIO': 'S&P P/E'})
# merging vix dataset with S&P PE ratios
vix_sppe = pd.merge(vix, sp_pe,
how='left',
right_index=True,
left_index=True,
)
# changing index for scatterplot
vix_sppe = vix_sppe.rename(columns={'Close': 'VIX'})
vix_sppe.index = range(len(vix_sppe))
# array of last 30 days
vix_sppe_30 = vix_sppe.iloc[-30:]
vix_sppe_30 = vix_sppe_30.values
vix_sppe.head()
fig, ax = plt.subplots()
sns.set(style='whitegrid')
sns.regplot('VIX', 'S&P P/E', data=vix_sppe)
fig.suptitle('Historical PE Ratios and Volatility')
ax.set_xlabel('VIX Volatility Level')
ax.set_ylabel('PE Multiple Level')
ax.set_ylim([10, 25])
for item in vix_sppe_30:
item.flatten()
ax.plot(item[0], item[1], 'o',
color='orange', markersize=10)
plt.show()
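# The fitted regression line above can be summarised with a single number: the linear
# correlation between the VIX level and the trailing P/E multiple (rows with missing
# Bloomberg values are ignored pairwise by .corr()).
print(vix_sppe[['VIX', 'S&P P/E']].corr())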
Explanation: S&P Valuations and VIX - a rare relationship
End of explanation
fig, ax = plt.subplots()
sns.kdeplot(vix_sppe, shade=True, cmap='Blues')
ax.set_xlabel('VIX Volatility Level')
ax.set_ylabel('PE Multiple Level')
ax.set_ylim([10, 25])
for item in vix_sppe_30:
item.flatten()
ax.plot(item[0], item[1], 'o',
color='orange', markersize=8)
plt.show()
Explanation: With the absence of sharp-moves in the VIX, the S&P 500 index has reached record highs. However, it is difficult to ignore the rarity of how low the VIX is, while stocks enjoy lofty valuations. The orange circles represent the data points for the last 30 trading sessions, nearing the highest P/E multiples for the lowest instances of volatility.
Outliers include the batch of high PE multiples nearing 25, which occurred at the height of the real-estate bubble. Instances with incredibly high volatility represent days with large swings in prices.
End of explanation
gpu = pd.read_excel('EPUCGLCP.xlsx')
# cleaning dataset
gpu.columns = gpu.iloc[0]
gpu = gpu.set_index(['Date'])
gpu = gpu[1:]
gpu = gpu.rename(columns={'PX_LAST': 'GPU Index'})
# merging with vix
vix_gpu = pd.merge(vix, gpu,
how='left',
right_index=True,
left_index=True,
)
vix_gpu.head()
# removing rows with NaN values
vix_gpu = vix_gpu.rename(columns={'Close': 'VIX'})  # match the 'VIX' column name used in the plots below
vix_gpu = vix_gpu[pd.notnull(vix_gpu['GPU Index'])]
vix_gpu.head()
# creating fig and ax, plotting objects
fig,ax1 = plt.subplots(figsize=(8,5))
sns.set_style('whitegrid')
ax2 = ax1.twinx()
a = ax1.plot(vix_gpu['VIX'], color='orange', label='VIX')
b = ax2.plot(vix_gpu['GPU Index'], color='red', label='GPU Index')
# titling and formating
ax1.set_ylabel('VIX', color='orange')
ax2.set_ylabel('GPU Index', color='red')
fig.suptitle('Global Political Uncertainty grows as VIX suppresed')
ax2.grid(False)
# adding lines on different axes into one legend
line = a + b
label = [l.get_label() for l in line]
ax1.legend(line, label, loc='upper left')
plt.show()
Explanation: The density graph above better shows the rarity of the recent S&P valuations paired with the levels of VIX. More commonly, stocks are valued around the 17-18 mark, with a VIX level around the mid-teens.
Investors can interpret this in two ways: either the market is complacent towards high market valuations and a potential equity bubble, or the VIX is inaccurate in measuring investor uncertainty as the S&P crawls towards unexplained high stock valuations.
VIX and the Macro Environment
Global Political Uncertainty
End of explanation
# narrowing the data to this year
today = dt.date.today()
vix_gpu2015 = vix_gpu.loc['2015-01-01':today,
['VIX', 'GPU Index',]
]
# creating fig and ax, plotting objects
fig,ax1 = plt.subplots(figsize=(8,5))
sns.set_style('whitegrid')
ax2 = ax1.twinx()
a = ax1.plot(vix_gpu2015['VIX'], color='orange', label='VIX')
b = ax2.plot(vix_gpu2015['GPU Index'], color='red', label='GPU Index')
# titling and formating
ax1.set_ylabel('VIX', color='orange')
ax2.set_ylabel('GPU Index', color='red')
ax1.set_ylim([8,62.5]) #match limits in previous graph
ax2.set_ylim([47,310])
fig.suptitle('Divergence in recent years')
ax2.grid(False)
# adding lines on different axes into one legend
line = a + b
label = [l.get_label() for l in line]
ax1.legend(line, label, loc='upper left')
plt.show()
Explanation: The index for global political uncertainty has, to some degree, tracked the VIX and its yearly trends. However, starting from 2016, we observe a divergence, perhaps showing a decline in the VIX's ability to gauge political uncertainty.
End of explanation
ffr = pd.read_excel('Short-Term Fed Funds Rate (30 Day).xlsx')
# cleaning dataset
ffr.columns = ffr.iloc[0]
ffr = ffr.set_index(['Date'])
ffr = ffr[1:]
ffr = ffr.rename(columns={'PX_LAST': 'Fed Funds Rate'})
# merging with vix
vix_ffr = pd.merge(vix, ffr,
how='left',
right_index=True,
left_index=True,
)
vix_ffr.head()
# removing rows with NaN values
vix_ffr = vix_ffr.rename(columns={'Close': 'VIX'})  # match the 'VIX' column name used in the plots below
vix_ffr = vix_ffr[pd.notnull(vix_ffr['Fed Funds Rate'])]
vix_ffr.head()
# building out the implied Federal Funds Rate from the index's data
vix_ffr['Fed Funds Rate'] = 100 - vix_ffr['Fed Funds Rate']
vix_ffr.head()
# creating fig and ax, plotting objects
fig,ax1 = plt.subplots(figsize=(8,5))
sns.set_style('whitegrid')
ax2 = ax1.twinx()
a = ax1.plot(vix_ffr['VIX'], color='orange', label='VIX')
b = ax2.plot(vix_ffr['Fed Funds Rate'], color='green',
label='Fed Funds Rate')
# titling and formating
ax1.set_ylabel('VIX', color='orange')
ax2.set_ylabel('Fed Funds Rate (implied)', color='green')
fig.suptitle('VIX remains low as the Fed predicts growth')
ax2.grid(False)
# adding lines on different axes into one legend
line = a + b
label = [l.get_label() for l in line]
ax1.legend(line, label, loc='upper right')
plt.show()
Explanation: As orthodoxy and populism extend from our own White House to politics across the world, the VIX remains surprisingly low for something representing uncertainty. President Trump's election has spurred a rally in financials, industrials, and the broader market, but the administration has struggled to codify its agenda in healthcare and tax reform. As investors pull back from their high expectations, the VIX should have taken off. Despite the very volatility of the President himself, the VIX remains at its lows.
Many investors explain away this divergence by citing strong U.S. corporate earnings, low unemployment, and rising inflation. Key macro indicators that are showing strength are essentially pushing away any concern for the policies of the current administration, or elsewhere in the world.
The Federal Reserve's suppression of VIX
End of explanation
bondvol = pd.read_excel('MOVE Index.xlsx')
currvol = pd.read_excel('TYVIX Index.xlsx')
# cleaning dataset
bondvol.columns = bondvol.iloc[0]
bondvol = bondvol.set_index(['Date'])
bondvol = bondvol[1:]
bondvol = bondvol.rename(columns={'PX_LAST': 'Treasury Vol Index'})
currvol.columns = currvol.iloc[0]
currvol = currvol.set_index(['Date'])
currvol = currvol[1:]
currvol = currvol.rename(columns={'PX_LAST': 'Currency Vol Index'})
# merging with vix (equity vol)
vix = vix.rename(columns={'Close': 'VIX'})
marketvol = pd.merge(vix, currvol,
how='left',
right_index=True,
left_index=True,
)
marketvol = pd.merge(marketvol, bondvol,
how='left',
right_index=True,
left_index=True,
)
marketvol.head()
# narrowing the data to this year
today = dt.date.today()
marketvol = marketvol.loc['2017-01-01':today,
['VIX', 'Currency Vol Index',
'Treasury Vol Index']
]
marketvol.head()
# creating fig and ax, plotting objects
fig,ax1 = plt.subplots(figsize=(8,5))
sns.set_style('whitegrid')
ax2 = ax1.twinx()
a = ax1.plot(marketvol['VIX'], color='orange', label='VIX')
b = ax2.plot(marketvol['Treasury Vol Index'],
color='purple',
label='Treasury Vol Index')
c = ax1.plot(marketvol['Currency Vol Index'],
color='cyan',
label='Currency Vol Index')
# titling and formating
ax1.set_ylabel('VIX & Currency Vol Index')
ax2.set_ylabel('Treasury Vol Index', color='purple')
fig.suptitle('Volatility falling across all assets')
ax2.grid(False)
ax1.tick_params(axis='x', labelsize=8)
# adding lines on different axes into one legend
line = a + b + c
label = [l.get_label() for l in line]
ax1.legend(line, label, loc='upper center')
plt.show()
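# A compact way to summarise the co-movement of the three volatility gauges over the
# window plotted above is their pairwise correlation matrix.
print(marketvol.corr())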
Explanation: Investors commonly use implied volatility as shown in the VIX to measure uncertainty about interest rates, and specifically in this case, the implied federal funds target rate. Typically, when the implied federal funds target rate is rising, signaling strong inflation and growth, the VIX remains at its lows.
In monetary policy, the Fed has, since 2008, kept rates low to encourage investment. However, its recent support of higher benchmark rates has increased the implied fed funds rate, as many Fed officials believe the U.S. economy is on a growth path despite signs of weakness in consumer spending and wage growth. That message has had the effect of subduing levels of uncertainty in the VIX from the latter half of 2016 to today.
VIX beyond Equities
End of explanation
sp_fut = pd.read_excel('S&P E-Mini Futures.xlsx')
# cleaning dataset
sp_fut.columns = sp_fut.iloc[0]
sp_fut = sp_fut.set_index(['Date'])
sp_fut = sp_fut[1:]
sp_fut = sp_fut.rename(columns={'PX_BID': 'E-Mini Bid',
'PX_ASK': 'E-Mini Ask'})
# new column - bid-ask spread
title = 'S&P500 E-Mini Fut Bid-Ask Spread'
sp_fut[title] = sp_fut['E-Mini Ask'] - sp_fut['E-Mini Bid']
sp_fut.head()
# resampling by month and taking the average
sp_fut.index = pd.to_datetime(sp_fut.index)
sp_fut_resample = sp_fut.resample('MS').sum()
sp_fut_count = sp_fut.resample('MS').count()
sp_fut_resample[title] = sp_fut_resample[title] / sp_fut_count[title] # mean
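# The same monthly average can be obtained in one step with .mean(); computing it both
# ways is a cheap consistency check on the sum/count approach used above.
sp_fut_check = sp_fut[[title]].resample('MS').mean()
print((sp_fut_resample[title] - sp_fut_check[title]).abs().max())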
# narrowing the data to this year
today = dt.date.today()
vix2 = vix.loc['2007-01-01':today, ['VIX']]
sp_fut_resample = sp_fut_resample.loc['2007-01-01':today, [title]]
sp_fut_resample.head()
# creating fig and ax, plotting objects
fig,ax1 = plt.subplots(figsize=(8,5))
sns.set_style('whitegrid')
ax2 = ax1.twinx()
a = ax1.plot(vix2['VIX'], color='orange', label='VIX')
b = ax2.plot(sp_fut_resample[title],
color='blue',
label=title)
# titling and formating
ax1.set_ylabel('VIX', color='orange')
ax2.set_ylabel(title, color='blue')
fig.suptitle('Market Depth reaching Recession levels')
ax2.grid(False)
# adding lines on different axes into one legend
line = a + b
label = [l.get_label() for l in line]
ax1.legend(line, label, loc='upper center')
plt.show()
Explanation: What is concerning is the inconsistency between the uncertainty we observe on social media or news sites and the low levels of uncertainty expressed in recent months by the volatility indexes above. The Fed even took this into consideration at its April meeting, expressing confusion as to why implied volatility has reached decade-long lows despite the inaction we see from policy makers on key legislation such as Trump's tax reform and infrastructure program.
VIX Reliability Concerns
Investors commonly debate whether VIX is a proper metric for volatility. In this section, we'll examine one of the main concerns about VIX's reliability: the erosion of demand for S&P 500 options as insurance against instability.
End of explanation |
3,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Science Summer School - Split '17
5. Generating images of digits with Generative Adversarial Networks
Step1: Goals
Step2: What are we going to do with the data?
We have $70000$ images of hand-written digits generated from some distribution $X \sim P_{real}$
We have $70000$ labels $y_i \in {0,..., 9}$ indicating which digit is written on the image $x_i$
Problem
Step6: 5.3 The generator network
Step9: 5.4 The basic network for the discriminator
Step10: Intermezzo
Step11: 5.6 Check the implementation of the classes
Step12: Drawing samples from the latent space
Step13: 5.5 Define the model loss -- Vanilla GAN
The objective for the vanilla version of the GAN was defined as follows
Step14: Intermezzo | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os, util
Explanation: Data Science Summer School - Split '17
5. Generating images of digits with Generative Adversarial Networks
End of explanation
data_folder = 'data'; dataset = 'mnist' # the folder in which the dataset is going to be stored
download_folder = util.download_mnist(data_folder, dataset)
images, labels = util.load_mnist(download_folder)
print("Folder:", download_folder)
print("Image shape:", images.shape) # greyscale, so the last dimension (color channel) = 1
print("Label shape:", labels.shape) # one-hot encoded
show_n_images = 25
sample_images, mode = util.get_sample_images(images, n=show_n_images)
mnist_sample = util.images_square_grid(sample_images, mode)
plt.imshow(mnist_sample, cmap='gray')
sample = images[3]*50 #
sample = sample.reshape((28, 28))
print(np.array2string(sample.astype(int), max_line_width=100, separator=',', precision=0))
plt.imshow(sample, cmap='gray')
Explanation: Goals:
Implement the model from "Generative Adversarial Networks" by Goodfellow et al. (1284 citations since 2014.)
Understand how the model learns to generate realistic images
In ~two hours.
5.1 Downloading the datasets and previewing data
End of explanation
# the mnist dataset is stored in the variable 'images', and the labels are stored in 'labels'
images = images.reshape(-1, 28*28) # 70000 x 784
print (images.shape, labels.shape)
mnist = util.Dataset(images, labels)
print ("Number of samples:", mnist.n)
Explanation: What are we going to do with the data?
We have $70000$ images of hand-written digits generated from some distribution $X \sim P_{real}$
We have $70000$ labels $y_i \in {0,..., 9}$ indicating which digit is written on the image $x_i$
Problem: Imagine that the number of images we have is not enough - a common issue in computer vision and machine learning.
We can pay experts to create new images
Expensive
Slow
Reliable
We can generate new images ourselves
Cheap
Fast
Unreliable?
Problem: Not every image that we generate is going to be perfect (or even close to perfect). Therefore, we need some method to determine which images are realistic.
We can pay experts to determine which images are good enough
Expensive
Slow
Reliable
We can train a model to determine which images are good enough
Cheap
Fast
Unreliable?
Formalization
$X \sim P_{real}$ : existing images of shape $s$
$Z \sim P_z$ : a $k$-dimensional random vector
$G(z; \theta_G): Z \to \hat{X}$ : the generator, a function that transforms the random vector $z$ into an image of shape $s$
$D(x, \theta_D): X \to (Real, Fake)$ : the discriminator, a function that, given an image of shape $s$, decides if the image is real or fake
Details
The existing images $X$ in our setup are images from the mnist dataset. We will arbitrarily decide that vectors $z$ will be sampled from a uniform distribution, and $G$ and $D$ will both be 'deep' neural networks.
For simplicity, and since we are using the mnist dataset, both $G$ and $D$ will be multi-layer perceptrons (and not deep convolutional networks) with one hidden layer. The generated images $G(z) \sim P_{fake}$ as well as real images $x \sim P_{real}$ will be passed on to the discriminator, which will classify them into $(Real, Fake)$.
<center>
<img src="data/img/gan_general_layout.png">
<strong>Figure 1. </strong> General adversarial network architecture
</center>
Discriminator
The goal of the discriminator is to successfully recognize which image is sampled from the true distribution, and which image is sampled from the generator.
<center>
<img src="data/img/discriminator.png">
<strong>Figure 2.</strong> Discriminator network sketch
</center>
Generator
The goal of the generator is that the discriminator misclassifies the images that the generator generated as if they were generated by the true distribution.
<center>
<img src="data/img/generator.png">
<strong>Figure 3.</strong> Generator network sketch
</center>
5.2 Data transformation
Since we are going to use a fully connected network (we are not going to use local convolutional filters), we are going to flatten the input images for simplicity. Also, the pixel values are scaled to the interval $[0,1]$ (this was already done beforehand).
We will also use a pre-made Dataset class to iterate over the dataset in batches. The class is defined in util.py, and only consists of a constructor and a method next_batch.
Question: Having seen the architecture of the network, why are the pixels scaled to $[0,1]$ and not, for example, $[-1, 1]$, or left at $[0, 255]$?
Answer:
End of explanation
class Generator:
The generator network
the generator network takes as input a vector z of dimension input_dim, and transforms it
to a vector of size output_dim. The network has one hidden layer of size hidden_dim.
We will define the following methods:
__init__: initializes all variables by using tf.get_variable(...)
and stores them to the class, as well a list in self.theta
forward: defines the forward pass of the network - how do the variables
interact with respect to the inputs
def __init__(self, input_dim, hidden_dim, output_dim):
Constructor for the generator network. In the constructor, we will
just initialize all the variables in the network.
Args:
input_dim: The dimension of the input data vector (z).
hidden_dim: The dimension of the hidden layer of the neural network (h)
output_dim: The dimension of the output layer (equivalent to the size of the image)
with tf.variable_scope("generator"):
self.W1 = tf.get_variable(name="W1",
shape=[input_dim, hidden_dim],
initializer=tf.contrib.layers.xavier_initializer())
self.b1 = tf.get_variable(name="b1",
shape=[hidden_dim],
initializer=tf.zeros_initializer())
self.W2 = tf.get_variable(name="W2",
shape=[hidden_dim, output_dim],
initializer=tf.contrib.layers.xavier_initializer())
self.b2 = tf.get_variable(name="b2",
shape=[output_dim],
initializer=tf.zeros_initializer())
self.theta = [self.W1, self.W2, self.b1, self.b2]
def forward(self, z):
The forward pass of the network -- here we will define the logic of how we combine
the variables through multiplication and activation functions in order to get the
output.
h1 = tf.nn.relu(tf.matmul(z, self.W1) + self.b1)
log_prob = tf.matmul(h1, self.W2) + self.b2
prob = tf.nn.sigmoid(log_prob)
return prob
Explanation: 5.3 The generator network
End of explanation
class Discriminator:
The discriminator network
the discriminator network takes as input a vector x of dimension input_dim, and transforms it
to a vector of size output_dim. The network has one hidden layer of size hidden_dim.
You will define the following methods:
__init__: initializes all variables by using tf.get_variable(...)
and stores them to the class, as well a list in self.theta
forward: defines the forward pass of the network - how do the variables
interact with respect to the inputs
def __init__(self, input_dim, hidden_dim, output_dim):
with tf.variable_scope("discriminator"):
self.W1 = tf.get_variable(name="W1",
shape=[input_dim, hidden_dim],
initializer=tf.contrib.layers.xavier_initializer())
self.b1 = tf.get_variable(name="b1", shape=[hidden_dim],
initializer=tf.zeros_initializer())
self.W2 = tf.get_variable(name="W2",
shape=[hidden_dim, output_dim],
initializer=tf.contrib.layers.xavier_initializer())
self.b2 = tf.get_variable(name="b2",
shape=[output_dim],
initializer=tf.zeros_initializer())
self.theta = [self.W1, self.W2, self.b1, self.b2]
def forward(self, x):
The forward pass of the network -- here we will define the logic of how we combine
the variables through multiplication and activation functions in order to get the
output.
h1 = tf.nn.relu(tf.matmul(x, self.W1) + self.b1)
logit = tf.matmul(h1, self.W2) + self.b2
prob = tf.nn.sigmoid(logit)
return prob, logit
Explanation: 5.4 The basic network for the discriminator
End of explanation
image_dim = 784 # The dimension of the input image vector to the discriminator
discriminator_hidden_dim = 128 # The dimension of the hidden layer of the discriminator
discriminator_output_dim = 1 # The dimension of the output layer of the discriminator
random_sample_dim = 100 # The dimension of the random noise vector z
generator_hidden_dim = 128 # The dimension of the hidden layer of the generator
generator_output_dim = 784 # The dimension of the output layer of the generator
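# Note: 784 = 28 x 28, the flattened image size, for both the discriminator input and the
# generator output; the single discriminator output is the probability that its input is real.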
Explanation: Intermezzo: Xavier initialization of weights
Glorot, X., & Bengio, Y. (2010, March). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 249-256).
Implemented in tensorflow, as part of the standard library: https://www.tensorflow.org/api_docs/python/tf/contrib/layers/xavier_initializer
1. Idea:
If the weights in a network are initialized to too small values, then the signal shrinks as it passes through each layer until it’s too tiny to be useful.
If the weights in a network are initialized to too large, then the signal grows as it passes through each layer until it’s too massive to be useful.
2. Goal:
We need initial weight values that are just right for the signal not to explode or vanish during the forward pass
3. Math
Trivial
4. Solution
$v = \frac{2}{n_{in} + n_{out}}$
In the case of a Gaussian distribution, we set the variance to $v$.
In the case of a uniform distribution, we draw from the interval $\pm\sqrt{3v}$ so that the variance works out to $v$ (the default distribution in tensorflow is the uniform).
<sub>http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization</sub>
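As a quick illustration (a sketch, not part of the assignment code), the same initialization can be written out by hand with numpy:
import numpy as np
def xavier_uniform(n_in, n_out):
    # target variance v = 2 / (n_in + n_out); a uniform distribution on (-a, a)
    # has variance a^2 / 3, so a = sqrt(3 * v) = sqrt(6 / (n_in + n_out))
    limit = np.sqrt(6.0 / (n_in + n_out))
    return np.random.uniform(-limit, limit, size=(n_in, n_out))
W1_manual = xavier_uniform(random_sample_dim, generator_hidden_dim)  # same shape as the generator's first weight matrix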
5.5 Define the model parameters
We will take a brief break to set the values for the parameters of the model. Since we know the dataset we are working with, as well as the shape of the generator and discriminator networks, your task is to fill in the values of the following variables.
End of explanation
d = Discriminator(image_dim, discriminator_hidden_dim, discriminator_output_dim)
for param in d.theta:
print (param)
g = Generator(random_sample_dim, generator_hidden_dim, generator_output_dim)
for param in g.theta:
print (param)
Explanation: 5.6 Check the implementation of the classes
End of explanation
def sample_Z(m, n):
return np.random.uniform(-1., 1., size=[m, n])
plt.imshow(sample_Z(16, 100), cmap='gray')
Explanation: Drawing samples from the latent space
End of explanation
def gan_model_loss(X, Z, discriminator, generator):
G_sample = generator.forward(Z)
D_real, D_logit_real = discriminator.forward(X)
D_fake, D_logit_fake = discriminator.forward(G_sample)
D_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_logit_real, labels=tf.ones_like(D_logit_real)))
D_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_logit_fake, labels=tf.zeros_like(D_logit_fake)))
D_loss = D_loss_real + D_loss_fake
G_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_logit_fake, labels=tf.ones_like(D_logit_fake)))
return G_sample, D_loss, G_loss
Explanation: 5.5 Define the model loss -- Vanilla GAN
The objective for the vanilla version of the GAN was defined as follows:
<center>
$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{real}} [\log(D(x))] + \mathbb{E}_{z \sim p_{z}} [\log(1 - D(G(z)))]$
</center>
The function contains a minimax formulation, and cannot be directly optimized. However, if we freeze $D$, we can derive the loss for $G$ and vice versa.
Discriminator loss:
<center>
$p_{fake} = G(p_z)$
</center>
<center>
$D_{loss} = \mathbb{E}_{x \sim p_{real}} [\log(D(x))] + \mathbb{E}_{\hat{x} \sim p_{fake}} [\log(1 - D(\hat{x}))]$
</center>
We estimate the expectation over each minibatch and arrive at the following formulation:
<center>
$D_{loss} = \frac{1}{m}\sum_{i=0}^{m} log(D(x_i)) + \frac{1}{m}\sum_{i=0}^{m} log(1 -D(\hat{x_i}))$
</center>
Generator loss:
<center>
$G_{loss} = - \mathbb{E}_{z \sim p_{z}} [\log(1 - D(G(z)))]$
</center>
<center>
$G_{loss} = \frac{1}{m}\sum_{i=0}^{m} [log(D(G(z)))]$
</center>
Model loss, translated from math
The discriminator wants to:
- maximize the (log) probability of a real image being classified as real,
- minimize the (log) probability of a fake image being classified as real.
The generator wants to:
- maximize the (log) probability of a fake image being classified as real.
Model loss, translated to practical machine learning
The output of the discriminator is a scalar, $p$, which we interpret as the probability that an input image is real ($1-p$ is the probability that the image is fake).
The discriminator takes as input:
a minibatch of images from our training set with a vector of ones for class labels: $D_{loss_real}$.
a minibatch of images from the generator with a vector of zeros for class labels: $D_{loss_fake}$.
a minibatch of images from the generator with a vector of ones for class labels: $G_{loss}$.
The generator takes as input:
a minibatch of vectors sampled from the latent space and transforms them to a minibatch of generated images
End of explanation
X = tf.placeholder(tf.float32, name="input", shape=[None, image_dim])
Z = tf.placeholder(tf.float32, name="latent_sample", shape=[None, random_sample_dim])
G_sample, D_loss, G_loss = gan_model_loss(X, Z, d, g)
with tf.variable_scope('optim'):
D_solver = tf.train.AdamOptimizer(name='discriminator').minimize(D_loss, var_list=d.theta)
G_solver = tf.train.AdamOptimizer(name='generator').minimize(G_loss, var_list=g.theta)
saver = tf.train.Saver()
# Some runtime parameters predefined for you
minibatch_size = 128 # The size of the minibatch
num_epoch = 500 # For how many epochs do we run the training
plot_every_epochs = 5 # After this many epochs we will save & display samples of generated images
print_every_batches = 1000 # After this many minibatches we will print the losses
restore = False
checkpoint = 'fc_2layer_e100_2.170.ckpt'
model = 'gan'
model_save_folder = os.path.join('data', 'chkp', model)
print ("Model checkpoints will be saved to:", model_save_folder)
image_save_folder = os.path.join('data', 'model_output', model)
print ("Image samples will be saved to:", image_save_folder)
minibatch_counter = 0
epoch_counter = 0
d_losses = []
g_losses = []
with tf.device("/gpu:0"), tf.Session() as sess:
sess.run(tf.global_variables_initializer())
if restore:
saver.restore(sess, os.path.join(model_save_folder, checkpoint))
print("Restored model:", checkpoint, "from:", model_save_folder)
while epoch_counter < num_epoch:
new_epoch, X_mb = mnist.next_batch(minibatch_size)
_, D_loss_curr = sess.run([D_solver, D_loss],
feed_dict={
X: X_mb,
Z: sample_Z(minibatch_size, random_sample_dim)
})
_, G_loss_curr = sess.run([G_solver, G_loss],
feed_dict={
Z: sample_Z(minibatch_size, random_sample_dim)
})
# Plotting and saving images and the model
if new_epoch and epoch_counter % plot_every_epochs == 0:
samples = sess.run(G_sample, feed_dict={Z: sample_Z(16, random_sample_dim)})
fig = util.plot(samples)
figname = '{}.png'.format(str(minibatch_counter).zfill(3))
plt.savefig(os.path.join(image_save_folder, figname), bbox_inches='tight')
plt.show()
plt.close(fig)
im = util.plot_single(samples[0], epoch_counter)
plt.savefig(os.path.join(image_save_folder, 'single_' + figname), bbox_inches='tight')
plt.show()
chkpname = "fc_2layer_e{}_{:.3f}.ckpt".format(epoch_counter, G_loss_curr)
saver.save(sess, os.path.join(model_save_folder, chkpname))
# Printing runtime statistics
if minibatch_counter % print_every_batches == 0:
print('Epoch: {}/{}'.format(epoch_counter, num_epoch))
print('Iter: {}/{}'.format(mnist.position_in_epoch, mnist.n))
print('Discriminator loss: {:.4}'. format(D_loss_curr))
print('Generator loss: {:.4}'.format(G_loss_curr))
print()
# Bookkeeping
minibatch_counter += 1
if new_epoch:
epoch_counter += 1
d_losses.append(D_loss_curr)
g_losses.append(G_loss_curr)
chkpname = "fc_2layer_e{}_{:.3f}.ckpt".format(epoch_counter, G_loss_curr)
saver.save(sess, os.path.join(model_save_folder, chkpname))
disc_line, = plt.plot(range(len(d_losses[:10000])), d_losses[:10000], c='b', label="Discriminator loss")
gen_line, = plt.plot(range(len(d_losses[:10000])), g_losses[:10000], c='r', label="Generator loss")
plt.legend([disc_line, gen_line], ["Discriminator loss", "Generator loss"])
Explanation: Intermezzo: sigmoid cross entropy with logits
We defined the loss of the model as the log of the probability, but we are not using a log function or the model probabilities anywhere?
Enter sigmoid cross entropy with logits: https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
<center>
<img src="data/img/logitce.png">
From the tensorflow documentation
</center>
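In short, for a logit $l$ and label $y$ the op computes the numerically stable expression $\max(l, 0) - l\,y + \log(1 + e^{-|l|})$, which equals $-\log(\sigma(l))$ when $y=1$ and $-\log(1-\sigma(l))$ when $y=0$, i.e. the negative log-probabilities we need. A small numpy sanity check (a sketch, independent of the tensorflow session):
import numpy as np
def sigmoid(l):
    return 1.0 / (1.0 + np.exp(-l))
def sigmoid_ce_with_logits(l, y):
    # stable formulation described in the tensorflow documentation
    return np.maximum(l, 0) - l * y + np.log1p(np.exp(-np.abs(l)))
l = np.array([-2.0, 0.5, 3.0])
print(np.allclose(sigmoid_ce_with_logits(l, 1.0), -np.log(sigmoid(l))))        # labels of ones
print(np.allclose(sigmoid_ce_with_logits(l, 0.0), -np.log(1.0 - sigmoid(l))))  # labels of zeros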
Putting it all together
End of explanation |
3,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IST256 Lesson 04
Iterations
Zybook Ch 4
P4E Ch5
Links
Participation
Step1: A. 4
B. 5
C. 6
D. 7
Vote Now
Step2: The sequence of code that repeats is known as the Body.
The Boolean expression which is tested is known as the Test Condition or Exit Condition.
Variables which are part of the Test condition are called Loop Control Variables or Iteration Variables.
Our goal is to make the Test condition False so that the loop stops. This is accomplished through changing the loop control variable within the body of the loop.
Watch Me Code 1
## Say My Name
Step3: A. 1
B. 2
C. 3
D. 4
Vote Now
Step4: Range
The range() function returns an iterable.
range(n) returns n iterations from 0 to n-1
Step5: Watch Me Code 2
Say My Name
Step6: A. 0
B. 10
C. 5
D. Unknown
Vote Now
Step7: A. 0
B. 10
C. 5
D. 15
Vote Now | Python Code:
i,j,k = 1, 20, 1
while (i<j):
n = k*(j-i)
print(n)
i = i + 1
j = j - 1
k = k * 5
Explanation: IST256 Lesson 04
Iterations
Zybook Ch 4
P4E Ch5
Links
Participation: https://poll.ist256.com
In-Class Questions: Zoom Chat!
Agenda
Exam 1 this week
Go over HW 03
Iterations
Make our code execute in a non linear fashion.
Definite loops (for loops) and iterators.
Indefinite looping, infinite loops, and the break and continue statements
How to build complex loops easily.
Connect Activity
Select the line number where the increment occurs:
End of explanation
x = 1
while x<=200:
print(x)
x = x + 1
#print(x)
Explanation: A. 4
B. 5
C. 6
D. 7
Vote Now: https://poll.ist256.com
Increment and Decrement
Increment means to add a value to a variable.
X = X + 1
Decrement means to subtract a value from a variable.
X = X - 1
These are common patterns in iteration statements which you will see today.
Anatomy of a loop
A Loop is a sequence of code that repeats as long as a Boolean expression is <font color="green">True </font>.
End of explanation
x = 1
while x<5:
print(x, end=" ")
x = x + 1
print(x)
Explanation: The sequence of code that repeats is known as the Body.
The Boolean expression which is tested is known as the Test Condition or Exit Condition.
Variables which are part of the Test condition are called Loop Control Variables or Iteration Variables.
Our goal is to make the Test condition False so that the loop stops. This is accomplished through changing the loop control variable within the body of the loop.
Watch Me Code 1
## Say My Name:
- This program will say you name a number of times.
- This is an example of a Definite loop because the number of iterations are pre-determined.
Check Yourself: Loop
on which line is the test / exit condition?
End of explanation
for i in range(3):
print("Hello ")
for char in "mike":
print (char)
Explanation: A. 1
B. 2
C. 3
D. 4
Vote Now: https://poll.ist256.com
For Loop
The For Loop iterates over a python list, string, or range of numbers.
It is the preferred statement for Definite loops, where the number of iterations is pre-determined. Definite loops do not require an exit condition.
The for loop uses an iterator to select each item from the list or range and take action in the loop body.
The range() function is useful for getting an iterator of numbers.
The for loop can iterate over any iterable.
End of explanation
print("range(10) =>", list(range(10)) )
print("range(1,10) =>", list(range(1,10)) )
print("range(1,10,2) =>",list(range(1,10,2)) )
Explanation: Range
The range() function returns an iterable.
range(n) returns n iterations from 0 to n-1
End of explanation
k = 0
j = 10
for j in range(5):
k = k + j
print(k)
Explanation: Watch Me Code 2
Say My Name:
Range() function
Refactored as a For Loop.
Check Yourself: For Range 1
How many iterations are in this loop?
End of explanation
k = 0
for j in range(5):
print(f"k={k},j={j},k+j={k+j}")
k = k + j
Explanation: A. 0
B. 10
C. 5
D. Unknown
Vote Now: https://poll.ist256.com
Check Yourself: For Range 2
What is the value of k on line 4?
End of explanation
for x in 'mike':
if x == 'k':
print('x', end="")
else:
print('o', end="")
Explanation: A. 0
B. 10
C. 5
D. 15
Vote Now: https://poll.ist256.com
Watch Me Code 3
Count the "i"'s
Definite Loop
Indefinite, Infinite Loops and Break
The Indefinite Loop has no pre-determined exit condition. There are no guarantees an indefinite loop will end, as it is typically based on user input.
Infinite Loops are loops which can never reach their exit condition. These should be avoided at all costs.
The break statement is used to exit a loop immediately. It is often used to force an exit condition in the body of the loop.
Indefinite Loops The Easy Way
Determine the code to repeat
Determine the loop control variables & exit conditions
Write exit conditions as if statements with break
Wrap the code in a while True: loop!
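For example, here is a minimal sketch of that recipe (the prompt text and the stopping word are just placeholders):
while True:
    text = input("Enter a number, or 'quit' to stop: ")
    if text == 'quit':          # exit condition written as an if + break
        break
    print(int(text) * 2)        # the code we want to repeat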
Watch Me Code 4
Guess My Name:
This program will execute until you guess my name.
Uses the indefinite loop approach.
Check Yourself: Loop Matching 1
A loop where the test condition is never false is known as which kind of loop?
A. Break
B. Infinite
C. Definite
D. Indefinite
Vote Now: https://poll.ist256.com
Check Yourself: Loop Matching 2
The Python keyword to exit a loop is?
A. break
B. exit
C. quit
D. while
Vote Now: https://poll.ist256.com
End-To-End Example
Password Program:
5 attempts for the password
On correct password, print: “Access Granted”, then end the program
On incorrect password “Invalid Password Attempt #” and give the user another try
After 5 attempts, print “You are locked out”. Then end the program.
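One possible sketch of this program, using the while True recipe from earlier (the password value is just a placeholder, not the one used in class):
PASSWORD = "secret"             # placeholder value
attempts = 0
while True:
    guess = input("Enter the password: ")
    attempts = attempts + 1
    if guess == PASSWORD:
        print("Access Granted")
        break
    print("Invalid Password Attempt", attempts)
    if attempts == 5:
        print("You are locked out")
        break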
Conclusion Activity Exit Ticket
This program will output?
End of explanation |
3,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hitting and Cold Weather in Baseball
A project by Nathan Ding ([email protected]) on the effects of temperature on major league batters
Spring 2016 Semester
Introduction
The Ideal Gas Law (PV = nRT) tells us that as the temperature of a gas rises in a rigid container, the pressure of the gas will steadily increase as well due to a rise in the average speed of gas molecules. In essence, the amount of energy contained within the system rises, as heat is nothing more than thermal (kinetic) energy. While the Ideal Gas Law holds for gases, a similar increase in molecular vibrations - and therefore energy - is seen in solid objects as well. When the temperature rises, the amount of energy contained within a solid increases. The purpose of this project is to examine the effects of temperature on the game of baseball, with specific regard to the hitting aspect of the game.
Hitting in Baseball
The art of hitting an MLB fastball combines an incredible amount of luck, lightning-fast reflexes, and skill. Hitters often have less than half a second to determine whether to swing at a ball or not. However, when sharp contact is made with a fastball screaming towards the plate at over 90 miles/hour, the sheer velocity and energy the ball carries with it helps it fly off of the bat at an even faster speed. The higher the pitch velocity, the more energy a ball contains, and the faster its "exit velocity" (the speed of the ball when it is hit). This project looks to examine whether or not the extra energy provided by the ball's temperature plays a significant factor in MLB hitters' abilities to hit the ball harder. By analyzing the rates of extra base hits (doubles, triples, and home runs, which generally require a ball to be hit much harder and further than a single) at different temperature ranges, I hope to discover a significant correlation between temperature and hitting rates.
Packages Used
Step1: Pandas
Step2: Data Cleansing
Because of the nature of the baseball-reference.com Play Index, there were some repeated games in the CSV files, and after every 25 games the headers would reappear. In order to clean the data, I removed each row of data where the 'Temp' value was 'Temp', because those rows were the header rows. I removed the unnecessary columns by iterating through the column values with a for loop and removing the ones that were "unimportant" (as opposed to the "important" ones in the important list) Finally, I removed the duplicate entries from the datafram using the df.drop_duplicates() method.
Step3: Data Plots
I made a couple bar graphs to compare average extra base hits per game by temperature range and to compare home runs per game as well because home runs are the furthest hit balls, and in theory should see the largest temperature impact if there is in fact a measureable impact on the baseballs. I then made a couple of scatterplots to compare the complete data results, and look for some sort of trendline. Unfortunately, because of the limited amount of possible results, the scatterplots did not come out as I had hoped.
Step4: Statistical Analysis
I ran a linear regression of the total extra base hits and teperatures for the master data set to see if there was a correletion. Although the r-squared value is so small, due to the fact that there are a limited amount of possible home runs per game (realistically) and the sample size is so large (see the scatterplots above), the regressions for extra base hits and temperature, as well as home runs and temperature, both show a miniscule correlation between temperature and hits. Because the slope values are so small (a 100 degree increase in temperature correleates to a 1 extra base hit increase and a .7 home run increase), there is basically no correlation. After all, a 100 degree increase is basically the entire range of this project. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
%matplotlib inline
Explanation: Hitting and Cold Weather in Baseball
A project by Nathan Ding ([email protected]) on the effects of temperature on major league batters
Spring 2016 Semester
Introduction
The Ideal Gas Law (PV = nRT) tells us that as the temperature of a gas rises in a rigid container, the pressure of the gas will steadily increase as well due to a rise in the average speed of gas molecules. In essence, the amount of energy contained within the system rises, as heat is nothing more than thermal (kinetic) energy. While the Ideal Gas Law holds for gases, a similar increase in molecular vibrations - and therefore energy - is seen in solid objects as well. When the temperature rises, the amount of energy contained within a solid increases. The purpose of this project is to examine the effects of temperature on the game of baseball, with specific regard to the hitting aspect of the game.
Hitting in Baseball
The art of hitting an MLB fastball combines an incredible amount of luck, lightning-fast reflexes, and skill. Hitters often have less than half a second to determine whether to swing at a ball or not. However, when sharp contact is made with a fastball screaming towards the plate at over 90 miles/hour, the sheer velocity and energy the ball carries with it helps it fly off of the bat at an even faster speed. The higher the pitch velocity, the more energy a ball contains, and the faster its "exit velocity" (the speed of the ball when it is hit). This project looks to examine whether or not the extra energy provided by the ball's temperature plays a significant factor in MLB hitters' abilities to hit the ball harder. By analyzing the rates of extra base hits (doubles, triples, and home runs, which generally require a ball to be hit much harder and further than a single) at different temperature ranges, I hope to discover a significant correlation between temperature and hitting rates.
Packages Used
End of explanation
#import data from CSVs
total = pd.read_csv('C:/Users/Nathan/Desktop/BaseBall/data/play-index_game_finder.cgi_ajax_result_table.csv')
for i in range(1,32):
file = 'C:/Users/Nathan/Desktop/Baseball/data/play-index_game_finder.cgi_ajax_result_table (' + str(i) +').csv'
data = pd.read_csv(file)
total = total.append(data)
total.head(30)
Explanation: Pandas: I imported pandas for use in reading my many .csv files and because the pandas module contains dataframes, which are much easier to use for data analysis than lists or dictionaries.
matplotlib.pyplot: matplotlib.pyplot was used to create graphs and scatterplots of the data, and because the creation of figure and axis objects with matplotlib allows for easier manipulation of the physical aspects of a plot.
statsmodels.formula.api was imported for the linear regression models at the end of this project.
Data Inputting
The data for this project was collected from baseball-reference.com's Play Index, which allows users to sort and search for baseball games based on a multitude of criteria including team, player, and weather conditions (temperature, wind speed/direction, and precipitation). Unfortunately, the Play Index only allows registered users to access and export a list of 300 games at a time. As a result, I had to download 33 separate CSV files from the website to gather all 9-inning MLB games from the 2013 - 2015 seasons. The total number of games used in this data set was 8805. Because the filenames were all 'C:/Users/Nathan/Desktop/BaseBall/data/play-index_game_finder.cgi_ajax_result_table.csv' followed by a number in parentheses, I was able to use a for loop to combine all the data into one large dataframe.
An online version of these files is available at this link
End of explanation
#Clean data to remove duplicates, unwanted stats
important = ['Date','H', '2B', '3B', 'HR', 'Temp']
for i in total:
if i in important:
continue
del total[i]
#remove headers
total = total[total.Temp != 'Temp']
#remove duplicates
total = total.drop_duplicates()
#remove date -> cannot remove before because there are items that are identical except for date
del total['Date']
# remove date from important list
important.remove('Date')
total.head(5)
#change dtypes to int
total[['Temp', 'HR', '2B', '3B', 'H']] = total[['Temp', 'HR', '2B', '3B', 'H']].astype(int)
total.dtypes
#calculte extra-base-hits (XBH) (doubles, triples, home runs) for each game
#by creating a new column in the dataframe
total['XBH'] = total['2B'] + total['3B'] + total['HR']
#append XBH to important list
important.append('XBH')
#seperate data into new dataframes based on temperature ranges
#below 50
minus50 = total[total.Temp <= 50]
#50-60
t50 = total[total.Temp <= 60]
t50 = t50[t50.Temp > 50]
#60-70
t60 = total[total.Temp <= 70]
t60 = t60[t60.Temp > 60]
#70-80
t70 = total[total.Temp <= 80]
t70 = t70[t70.Temp > 70]
#80-90
t80 = total[total.Temp <= 90]
t80 = t80[t80.Temp > 80]
#90-100
t90 = total[total.Temp <= 100]
t90 = t90[t90.Temp > 90]
#over 100
over100= total[total.Temp > 100]
minus50.head(5)
#New dataframe organized by temperature
rangelist = [minus50, t60, t70, t80, t90, over100]
data_by_temp = pd.DataFrame()
data_by_temp['ranges']=['<50', "60's","70's","80's","90's",">100"]
#calculate per-game averages by temperature range
for i in important:
data_by_temp[i+'/Game'] = [sum(x[i])/len(x) for x in rangelist]
#set index to temperature ranges
data_by_temp = data_by_temp.set_index('ranges')
data_by_temp.head(10)
Explanation: Data Cleansing
Because of the nature of the baseball-reference.com Play Index, there were some repeated games in the CSV files, and after every 25 games the headers would reappear. In order to clean the data, I removed each row of data where the 'Temp' value was 'Temp', because those rows were the header rows. I removed the unnecessary columns by iterating through the column values with a for loop and removing the ones that were "unimportant" (as opposed to the "important" ones in the important list). Finally, I removed the duplicate entries from the dataframe using the df.drop_duplicates() method.
End of explanation
#plots
fig, ax=plt.subplots()
data_by_temp['XBH/Game'].plot(ax=ax,kind='bar',color='blue', figsize=(10,6))
ax.set_title("Extra Base Hits Per Game by Temp Range", fontsize=18)
ax.set_ylim(2,3.6)
ax.set_ylabel("XBH/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
#plots
fig, ax=plt.subplots()
data_by_temp['HR/Game'].plot(ax=ax,kind='bar',color='red', figsize=(10,6))
ax.set_title("Home Runs Per Game by Temp Range", fontsize=18)
ax.set_ylim(0,1.2)
ax.set_ylabel("HR/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
#scatterplot
x = data_by_temp.index
fig, ax = plt.subplots()
ax.scatter(total['Temp'],total['XBH'])
ax.set_title("Temp vs Total Extra Base Hits", fontsize = 18)
ax.set_ylabel("XBH/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
ax.set_ylim(-1,14)
#scatterplot
x = data_by_temp.index
fig, ax = plt.subplots()
ax.scatter(total['Temp'],total['HR'])
ax.set_title("Temp vs Total Home Runs", fontsize = 18)
ax.set_ylabel("HR/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
ax.set_ylim(-1,10)
Explanation: Data Plots
I made a couple of bar graphs to compare average extra base hits per game by temperature range, and to compare home runs per game as well, because home runs are the furthest hit balls and in theory should see the largest temperature impact if there is in fact a measurable impact on the baseballs. I then made a couple of scatterplots to compare the complete data results and look for some sort of trendline. Unfortunately, because of the limited amount of possible results, the scatterplots did not come out as I had hoped.
End of explanation
regression= smf.ols(formula="total['XBH'] ~ total['Temp']", data = total).fit()
regression.params
regression.summary()
regression2 = smf.ols(formula="total['HR'] ~ total['Temp']", data = total).fit()
regression2.params
regression2.summary()
Explanation: Statistical Analysis
I ran a linear regression of the total extra base hits and temperatures for the master data set to see if there was a correlation. Although the r-squared value is very small (due to the fact that there is a limited number of possible home runs per game, realistically, and the sample size is so large; see the scatterplots above), the regressions for extra base hits and temperature, as well as home runs and temperature, both show a minuscule correlation between temperature and hits. Because the slope values are so small (a 100 degree increase in temperature correlates to a 1 extra base hit increase and a .7 home run increase), there is basically no correlation. After all, a 100 degree increase is basically the entire range of this project.
End of explanation |
3,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variational Autoencoders
Introduction
The variational autoencoder (VAE) is arguably the simplest setup that realizes deep probabilistic modeling. Note that we're being careful in our choice of language here. The VAE isn't a model as such—rather the VAE is a particular setup for doing variational inference for a certain class of models. The class of models is quite broad
Step1: The main thing to draw attention to here is that we use transforms.ToTensor() to normalize the pixel intensities to the range $[0.0, 1.0]$.
Next we define a PyTorch module that encapsulates our decoder network
Step2: Given a latent code $z$, the forward call of Decoder returns the parameters for a Bernoulli distribution in image space. Since each image is of size
$28\times28=784$, loc_img is of size batch_size x 784.
Next we define a PyTorch module that encapsulates our encoder network
Step3: Given an image $\bf x$ the forward call of Encoder returns a mean and covariance that together parameterize a (diagonal) Gaussian distribution in latent space.
With our encoder and decoder networks in hand, we can now write down the stochastic functions that represent our model and guide. First the model
Step4: Note that model() is a callable that takes in a mini-batch of images x as input. This is a torch.Tensor of size batch_size x 784.
The first thing we do inside of model() is register the (previously instantiated) decoder module with Pyro. Note that we give it an appropriate (and unique) name. This call to pyro.module lets Pyro know about all the parameters inside of the decoder network.
Next we setup the hyperparameters for our prior, which is just a unit normal gaussian distribution. Note that
Step5: Just like in the model, we first register the PyTorch module we're using (namely encoder) with Pyro. We take the mini-batch of images x and pass it through the encoder. Then using the parameters output by the encoder network we use the normal distribution to sample a value of the latent for each image in the mini-batch. Crucially, we use the same name for the latent random variable as we did in the model
Step6: The point we'd like to make here is that the two Modules encoder and decoder are attributes of VAE (which itself inherits from nn.Module). This has the consequence they are both automatically registered as belonging to the VAE module. So, for example, when we call parameters() on an instance of VAE, PyTorch will know to return all the relevant parameters. It also means that if we're running on a GPU, the call to cuda() will move all the parameters of all the (sub)modules into GPU memory.
Inference
We're now ready for inference. Refer to the full code in the next section.
First we instantiate an instance of the VAE module.
Step7: Then we setup an instance of the Adam optimizer.
Step8: Then we setup our inference algorithm, which is going to learn good parameters for the model and guide by maximizing the ELBO
Step9: That's all there is to it. Now we just have to define our training loop
Step10: Note that all the mini-batch logic is handled by the data loader. The meat of the training loop is svi.step(x). There are two things we should draw attention to here
Step11: Basically the only change we need to make is that we call evaluate_loss instead of step. This function will compute an estimate of the ELBO but won't take any gradient steps.
The final piece of code we'd like to highlight is the helper method reconstruct_img in the VAE class | Python Code:
import os
import numpy as np
import torch
from pyro.contrib.examples.util import MNIST
import torch.nn as nn
import torchvision.transforms as transforms
import pyro
import pyro.distributions as dist
import pyro.contrib.examples.util # patches torchvision
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
assert pyro.__version__.startswith('1.7.0')
pyro.distributions.enable_validation(False)
pyro.set_rng_seed(0)
# Enable smoke test - run the notebook cells on CI.
smoke_test = 'CI' in os.environ
# for loading and batching MNIST dataset
def setup_data_loaders(batch_size=128, use_cuda=False):
root = './data'
download = True
trans = transforms.ToTensor()
train_set = MNIST(root=root, train=True, transform=trans,
download=download)
test_set = MNIST(root=root, train=False, transform=trans)
kwargs = {'num_workers': 1, 'pin_memory': use_cuda}
train_loader = torch.utils.data.DataLoader(dataset=train_set,
batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(dataset=test_set,
batch_size=batch_size, shuffle=False, **kwargs)
return train_loader, test_loader
Explanation: Variational Autoencoders
Introduction
The variational autoencoder (VAE) is arguably the simplest setup that realizes deep probabilistic modeling. Note that we're being careful in our choice of language here. The VAE isn't a model as such—rather the VAE is a particular setup for doing variational inference for a certain class of models. The class of models is quite broad: basically
any (unsupervised) density estimator with latent random variables. The basic structure of such a model is simple, almost deceptively so (see Fig. 1).
Here we've depicted the structure of the kind of model we're interested in as a graphical model. We have $N$ observed datapoints ${ \bf x_i }$. Each datapoint is generated by a (local) latent random variable $\bf z_i$. There is also a parameter $\theta$, which is global in the sense that all the datapoints depend on it (which is why it's drawn outside the rectangle). Note that since $\theta$ is a parameter, it's not something we're being Bayesian about. Finally, what's of particular importance here is that we allow for each $\bf x_i$ to depend on $\bf z_i$ in a complex, non-linear way. In practice this dependency will be parameterized by a (deep) neural network with parameters $\theta$. It's this non-linearity that makes inference for this class of models particularly challenging.
Of course this non-linear structure is also one reason why this class of models offers a very flexible approach to modeling complex data. Indeed it's worth emphasizing that each of the components of the model can be 'reconfigured' in a variety of different ways. For example:
the neural network in $p_\theta({\bf x} | {\bf z})$ can be varied in all the usual ways (number of layers, type of non-linearities, number of hidden units, etc.)
we can choose observation likelihoods that suit the dataset at hand: gaussian, bernoulli, categorical, etc.
we can choose the number of dimensions in the latent space
The graphical model representation is a useful way to think about the structure of the model, but it can also be fruitful to look at an explicit factorization of the joint probability density:
$$ p({\bf x}, {\bf z}) = \prod_{i=1}^N p_\theta({\bf x}_i | {\bf z}_i) p({\bf z}_i) $$
The fact that $p({\bf x}, {\bf z})$ breaks up into a product of terms like this makes it clear what we mean when we call $\bf z_i$ a local random variable. For any particular $i$, only the single datapoint $\bf x_i$ depends on $\bf z_i$. As such the ${\bf z_i}$ describe local structure, i.e. structure that is private to each data point. This factorized structure also means that we can do subsampling during the course of learning. As such this sort of model is amenable to the large data setting. (For more discussion on this and related topics see SVI Part II.)
That's all there is to the model. Since the observations depend on the latent random variables in a complicated, non-linear way, we expect the posterior over the latents to have a complex structure. Consequently in order to do inference in this model we need to specify a flexibly family of guides (i.e. variational distributions). Since we want to be able to scale to large datasets, our guide is going to make use of amortization to keep the number of variational parameters under control (see SVI Part II for a somewhat more general discussion of amortization).
Recall that the job of the guide is to 'guess' good values for the latent random variables—good in the sense that they're true to the model prior and true to the data. If we weren't making use of amortization, we would introduce variational parameters
${ \lambda_i }$ for each datapoint $\bf x_i$. These variational parameters would represent our belief about 'good' values of $\bf z_i$; for example, they could encode the mean and variance of a gaussian distribution in ${\bf z}_i$ space. Amortization means that, rather than introducing variational parameters ${ \lambda_i }$, we instead learn a function that maps each $\bf x_i$ to an appropriate $\lambda_i$. Since we need this function to be flexible, we parameterize it as a neural network. We thus end up with a parameterized family of distributions over the latent $\bf z$ space that can be instantiated for all $N$ datapoints ${\bf x}_i$ (see Fig. 2).
Note that the guide $q_{\phi}({\bf z} | {\bf x})$ is parameterized by a global parameter $\phi$ shared by all the datapoints. The goal of inference will be to find 'good' values for $\theta$ and $\phi$ so that two conditions are satisfied:
the log evidence $\log p_\theta({\bf x})$ is large. This means our model is a good fit to the data
the guide $q_{\phi}({\bf z} | {\bf x})$ provides a good approximation to the posterior
(For an introduction to stochastic variational inference see SVI Part I.)
At this point we can zoom out and consider the high level structure of our setup. For concreteness, let's suppose the ${ \bf x_i }$ are images so that the model is a generative model of images. Once we've learned a good value of $\theta$ we can generate images from the model as follows:
sample $\bf z$ according to the prior $p({\bf z})$
sample $\bf x$ according to the likelihood $p_\theta({\bf x}|{\bf z})$
Each image is being represented by a latent code $\bf z$ and that code gets mapped to images using the likelihood, which depends on the $\theta$ we've learned. This is why the likelihood is often called the decoder in this context: its job is to decode $\bf z$ into $\bf x$. Note that since this is a probabilistic model, there is uncertainty about the $\bf z$ that encodes a given datapoint $\bf x$.
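As a minimal sketch of that generative direction (assuming a trained instance of the Decoder module defined below, called decoder here, and the 50-dimensional latent space used in this tutorial):
import torch
z_dim = 50
z = torch.randn(1, z_dim)        # sample z ~ p(z) = Normal(0, I)
loc_img = decoder(z)             # Bernoulli parameters for a 28x28 image
x = torch.bernoulli(loc_img)     # sample x ~ p_theta(x | z)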
Once we've learned good values for $\theta$ and $\phi$ we can also go through the following exercise.
we start with a given image $\bf x$
using our guide we encode it as $\bf z$
using the model likelihood we decode $\bf z$ and get a reconstructed image ${\bf x}_{\rm reco}$
If we've learned good values for $\theta$ and $\phi$, $\bf x$ and ${\bf x}_{\rm reco}$ should be similar. This should clarify how the word autoencoder ended up being used to describe this setup: the model is the decoder and the guide is the encoder. Together, they can be thought of as an autoencoder.
VAE in Pyro
Let's see how we implement a VAE in Pyro.
The dataset we're going to model is MNIST, a collection of images of handwritten digits.
Since this is a popular benchmark dataset, we can make use of PyTorch's convenient data loader functionalities to reduce the amount of boilerplate code we need to write:
End of explanation
class Decoder(nn.Module):
def __init__(self, z_dim, hidden_dim):
super().__init__()
# setup the two linear transformations used
self.fc1 = nn.Linear(z_dim, hidden_dim)
self.fc21 = nn.Linear(hidden_dim, 784)
# setup the non-linearities
self.softplus = nn.Softplus()
self.sigmoid = nn.Sigmoid()
def forward(self, z):
# define the forward computation on the latent z
# first compute the hidden units
hidden = self.softplus(self.fc1(z))
# return the parameter for the output Bernoulli
# each is of size batch_size x 784
loc_img = self.sigmoid(self.fc21(hidden))
return loc_img
Explanation: The main thing to draw attention to here is that we use transforms.ToTensor() to normalize the pixel intensities to the range $[0.0, 1.0]$.
Next we define a PyTorch module that encapsulates our decoder network:
End of explanation
class Encoder(nn.Module):
def __init__(self, z_dim, hidden_dim):
super().__init__()
# setup the three linear transformations used
self.fc1 = nn.Linear(784, hidden_dim)
self.fc21 = nn.Linear(hidden_dim, z_dim)
self.fc22 = nn.Linear(hidden_dim, z_dim)
# setup the non-linearities
self.softplus = nn.Softplus()
def forward(self, x):
# define the forward computation on the image x
# first shape the mini-batch to have pixels in the rightmost dimension
x = x.reshape(-1, 784)
# then compute the hidden units
hidden = self.softplus(self.fc1(x))
# then return a mean vector and a (positive) square root covariance
# each of size batch_size x z_dim
z_loc = self.fc21(hidden)
z_scale = torch.exp(self.fc22(hidden))
return z_loc, z_scale
Explanation: Given a latent code $z$, the forward call of Decoder returns the parameters for a Bernoulli distribution in image space. Since each image is of size
$28\times28=784$, loc_img is of size batch_size x 784.
Next we define a PyTorch module that encapsulates our encoder network:
End of explanation
# define the model p(x|z)p(z)
def model(self, x):
# register PyTorch module `decoder` with Pyro
pyro.module("decoder", self.decoder)
with pyro.plate("data", x.shape[0]):
# setup hyperparameters for prior p(z)
z_loc = x.new_zeros(torch.Size((x.shape[0], self.z_dim)))
z_scale = x.new_ones(torch.Size((x.shape[0], self.z_dim)))
# sample from prior (value will be sampled by guide when computing the ELBO)
z = pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
# decode the latent code z
loc_img = self.decoder(z)
# score against actual images
pyro.sample("obs", dist.Bernoulli(loc_img).to_event(1), obs=x.reshape(-1, 784))
Explanation: Given an image $\bf x$ the forward call of Encoder returns a mean and covariance that together parameterize a (diagonal) Gaussian distribution in latent space.
With our encoder and decoder networks in hand, we can now write down the stochastic functions that represent our model and guide. First the model:
End of explanation
# define the guide (i.e. variational distribution) q(z|x)
def guide(self, x):
# register PyTorch module `encoder` with Pyro
pyro.module("encoder", self.encoder)
with pyro.plate("data", x.shape[0]):
# use the encoder to get the parameters used to define q(z|x)
z_loc, z_scale = self.encoder(x)
# sample the latent code z
pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
Explanation: Note that model() is a callable that takes in a mini-batch of images x as input. This is a torch.Tensor of size batch_size x 784.
The first thing we do inside of model() is register the (previously instantiated) decoder module with Pyro. Note that we give it an appropriate (and unique) name. This call to pyro.module lets Pyro know about all the parameters inside of the decoder network.
Next we setup the hyperparameters for our prior, which is just a unit normal gaussian distribution. Note that:
- we specifically designate independence amongst the data in our mini-batch (i.e. the leftmost dimension) via pyro.plate. Also, note the use of .to_event(1) when sampling from the latent z - this ensures that instead of treating our sample as being generated from a univariate normal with batch_size = z_dim, we treat them as being generated from a multivariate normal distribution with diagonal covariance. As such, the log probabilities along each dimension is summed out when we evaluate .log_prob for a "latent" sample. Refer to the Tensor Shapes tutorial for more details.
- since we're processing an entire mini-batch of images, we need the leftmost dimension of z_loc and z_scale to equal the mini-batch size
- in case we're on GPU, we use new_zeros and new_ones to ensure that newly created tensors are on the same GPU device.
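To illustrate the .to_event(1) point, here is a quick shape check (a sketch, not part of the tutorial code):
import torch
import pyro.distributions as dist
d = dist.Normal(torch.zeros(128, 50), torch.ones(128, 50)).to_event(1)
print(d.batch_shape, d.event_shape)   # torch.Size([128]) torch.Size([50])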
Next we sample the latent z from the prior, making sure to give the random variable a unique Pyro name 'latent'.
Then we pass z through the decoder network, which returns loc_img. We then score the observed images in the mini-batch x against the Bernoulli likelihood parametrized by loc_img.
Note that we flatten x so that all the pixels are in the rightmost dimension.
That's all there is to it! Note how closely the flow of Pyro primitives in model follows the generative story of our model, e.g. as encapsulated by Figure 1. Now we move on to the guide:
End of explanation
class VAE(nn.Module):
# by default our latent space is 50-dimensional
# and we use 400 hidden units
def __init__(self, z_dim=50, hidden_dim=400, use_cuda=False):
super().__init__()
# create the encoder and decoder networks
self.encoder = Encoder(z_dim, hidden_dim)
self.decoder = Decoder(z_dim, hidden_dim)
if use_cuda:
# calling cuda() here will put all the parameters of
# the encoder and decoder networks into gpu memory
self.cuda()
self.use_cuda = use_cuda
self.z_dim = z_dim
# define the model p(x|z)p(z)
def model(self, x):
# register PyTorch module `decoder` with Pyro
pyro.module("decoder", self.decoder)
with pyro.plate("data", x.shape[0]):
# setup hyperparameters for prior p(z)
z_loc = x.new_zeros(torch.Size((x.shape[0], self.z_dim)))
z_scale = x.new_ones(torch.Size((x.shape[0], self.z_dim)))
# sample from prior (value will be sampled by guide when computing the ELBO)
z = pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
# decode the latent code z
loc_img = self.decoder(z)
# score against actual images
pyro.sample("obs", dist.Bernoulli(loc_img).to_event(1), obs=x.reshape(-1, 784))
# define the guide (i.e. variational distribution) q(z|x)
def guide(self, x):
# register PyTorch module `encoder` with Pyro
pyro.module("encoder", self.encoder)
with pyro.plate("data", x.shape[0]):
# use the encoder to get the parameters used to define q(z|x)
z_loc, z_scale = self.encoder(x)
# sample the latent code z
pyro.sample("latent", dist.Normal(z_loc, z_scale).to_event(1))
# define a helper function for reconstructing images
def reconstruct_img(self, x):
# encode image x
z_loc, z_scale = self.encoder(x)
# sample in latent space
z = dist.Normal(z_loc, z_scale).sample()
# decode the image (note we don't sample in image space)
loc_img = self.decoder(z)
return loc_img
Explanation: Just like in the model, we first register the PyTorch module we're using (namely encoder) with Pyro. We take the mini-batch of images x and pass it through the encoder. Then using the parameters output by the encoder network we use the normal distribution to sample a value of the latent for each image in the mini-batch. Crucially, we use the same name for the latent random variable as we did in the model: 'latent'. Also, note the use of pyro.plate to designate independence of the mini-batch dimension, and .to_event(1) to enforce dependence on z_dims, exactly as we did in the model.
Now that we've defined the full model and guide we can move on to inference. But before we do so let's see how we package the model and guide in a PyTorch module:
End of explanation
vae = VAE()
Explanation: The point we'd like to make here is that the two Modules encoder and decoder are attributes of VAE (which itself inherits from nn.Module). This has the consequence they are both automatically registered as belonging to the VAE module. So, for example, when we call parameters() on an instance of VAE, PyTorch will know to return all the relevant parameters. It also means that if we're running on a GPU, the call to cuda() will move all the parameters of all the (sub)modules into GPU memory.
Inference
We're now ready for inference. Refer to the full code in the next section.
First we instantiate an instance of the VAE module.
End of explanation
optimizer = Adam({"lr": 1.0e-3})
Explanation: Then we setup an instance of the Adam optimizer.
End of explanation
svi = SVI(vae.model, vae.guide, optimizer, loss=Trace_ELBO())
Explanation: Then we setup our inference algorithm, which is going to learn good parameters for the model and guide by maximizing the ELBO:
End of explanation
def train(svi, train_loader, use_cuda=False):
# initialize loss accumulator
epoch_loss = 0.
# do a training epoch over each mini-batch x returned
# by the data loader
for x, _ in train_loader:
# if on GPU put mini-batch into CUDA memory
if use_cuda:
x = x.cuda()
# do ELBO gradient and accumulate loss
epoch_loss += svi.step(x)
# return epoch loss
normalizer_train = len(train_loader.dataset)
total_epoch_loss_train = epoch_loss / normalizer_train
return total_epoch_loss_train
Explanation: That's all there is to it. Now we just have to define our training loop:
End of explanation
def evaluate(svi, test_loader, use_cuda=False):
# initialize loss accumulator
test_loss = 0.
# compute the loss over the entire test set
for x, _ in test_loader:
# if on GPU put mini-batch into CUDA memory
if use_cuda:
x = x.cuda()
# compute ELBO estimate and accumulate loss
test_loss += svi.evaluate_loss(x)
normalizer_test = len(test_loader.dataset)
total_epoch_loss_test = test_loss / normalizer_test
return total_epoch_loss_test
Explanation: Note that all the mini-batch logic is handled by the data loader. The meat of the training loop is svi.step(x). There are two things we should draw attention to here:
any arguments to step are passed to the model and the guide. consequently model and guide need to have the same call signature
step returns a noisy estimate of the loss (i.e. minus the ELBO). this estimate is not normalized in any way, so e.g. it scales with the size of the mini-batch
The logic for adding evaluation logic is analogous:
End of explanation
# Run options
LEARNING_RATE = 1.0e-3
USE_CUDA = False
# Run only for a single iteration for testing
NUM_EPOCHS = 1 if smoke_test else 100
TEST_FREQUENCY = 5
train_loader, test_loader = setup_data_loaders(batch_size=256, use_cuda=USE_CUDA)
# clear param store
pyro.clear_param_store()
# setup the VAE
vae = VAE(use_cuda=USE_CUDA)
# setup the optimizer
adam_args = {"lr": LEARNING_RATE}
optimizer = Adam(adam_args)
# setup the inference algorithm
svi = SVI(vae.model, vae.guide, optimizer, loss=Trace_ELBO())
train_elbo = []
test_elbo = []
# training loop
for epoch in range(NUM_EPOCHS):
total_epoch_loss_train = train(svi, train_loader, use_cuda=USE_CUDA)
train_elbo.append(-total_epoch_loss_train)
print("[epoch %03d] average training loss: %.4f" % (epoch, total_epoch_loss_train))
if epoch % TEST_FREQUENCY == 0:
# report test diagnostics
total_epoch_loss_test = evaluate(svi, test_loader, use_cuda=USE_CUDA)
test_elbo.append(-total_epoch_loss_test)
print("[epoch %03d] average test loss: %.4f" % (epoch, total_epoch_loss_test))
Explanation: Basically the only change we need to make is that we call evaluate_loss instead of step. This function will compute an estimate of the ELBO but won't take any gradient steps.
The final piece of code we'd like to highlight is the helper method reconstruct_img in the VAE class: This is just the image reconstruction experiment we described in the introduction translated into code. We take an image and pass it through the encoder. Then we sample in latent space using the gaussian distribution provided by the encoder. Finally we decode the latent code into an image: we return the mean vector loc_img instead of sampling with it. Note that since the sample() statement is stochastic, we'll get different draws of z every time we run the reconstruct_img function. If we've learned a good model and guide—in particular if we've learned a good latent representation—this plurality of z samples will correspond to different styles of digit writing, and the reconstructed images should exhibit an interesting variety of different styles.
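A quick usage sketch (assuming the trained vae instance and the test_loader defined above):
x_test, _ = next(iter(test_loader))
x_reco = vae.reconstruct_img(x_test[:1])   # reconstruct the first test image
print(x_reco.shape)                        # expected: torch.Size([1, 784])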
Code and Sample results
Training corresponds to maximizing the evidence lower bound (ELBO) over the training dataset. We train for 100 iterations and evaluate the ELBO for the test dataset, see Figure 3.
End of explanation |
3,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous
Step1: Import section specific modules | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.0 Introduction
Next: 1.2 Electromagnetic radiation and astronomical quantities
Section status: <span style="background-color:green"> </span>
Import standard modules:
End of explanation
pass
Explanation: Import section specific modules:
End of explanation |
3,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSE 6040, Fall 2015 [12]
Step1: Exercise. Write a snippet of code to verify that the vertex IDs are dense in some interval $[1, n]$. That is, there is a minimum value of $1$, some maximum value $n$, and no missing values between $1$ and $n$.
Step2: Exercise. Make sure every edge has its end points in the vertex table.
Step3: Exercise. Determine which vertices have no incident edges. Store the number of such vertices in a variable, num_solo_vertices.
Step4: Exercise. Compute a view called Outdegrees, which contains the following columns
Step5: Exercise. Query the database to extract a report of which URLs point to which URLs. Also include the source vertex out-degree and order the rows in descending order by it.
Part 2
Step8: Exercise. Implement a function to multiply a sparse matrix by a dense vector, assuming a dense vector defined as follows.
Step11: Exercise. Complete the PageRank implementation for this dataset.
Step12: Exercise. Check your result by first inserting the final computed PageRank vector back into the database, and then using a SQL query to see the ranked URLs. In your query output, also include both the in-degrees and out-degrees of each vertex. | Python Code:
import sqlite3 as db
import pandas as pd
def get_table_names (conn):
assert type (conn) == db.Connection # Only works for sqlite3 DBs
query = "SELECT name FROM sqlite_master WHERE type='table'"
return pd.read_sql_query (query, conn)
def print_schemas (conn, table_names=None, limit=0):
assert type (conn) == db.Connection # Only works for sqlite3 DBs
if table_names is None:
table_names = get_table_names (conn)
c = conn.cursor ()
query = "PRAGMA TABLE_INFO ({table})"
for name in table_names:
c.execute (query.format (table=name))
columns = c.fetchall ()
print ("=== {table} ===".format (table=name))
col_string = "[{id}] {name} : {type}"
for col in columns:
print (col_string.format (id=col[0],
name=col[1],
type=col[2]))
print ("\n")
conn = db.connect ('poliblogs.db')
for name in get_table_names (conn)['name']:
print_schemas (conn, [name])
query = '''SELECT * FROM %s LIMIT 5''' % name
print (pd.read_sql_query (query, conn))
print ("\n")
Explanation: CSE 6040, Fall 2015 [12]: PageRank
In this notebook, you'll implement the PageRank algorithm summarized in class. You'll test it on a real dataset (circa 2005) that consists of political blogs and their links among one another.
Note that the presentation in class follows the matrix view of the algorithm. Cleve Moler (inventor of MATLAB) has a nice set of notes here.
For today's notebook, you'll need to download the following additional materials:
* A cse6040utils module, which is a Python module containing some handy routines from previous classes: link (Note: This module is already part of the git repo for our notebooks if you are pulling from there.)
* A SQLite version of the political blogs dataset: http://cse6040.gatech.edu/fa15/poliblogs.db (~ 611 KiB)
Part 1: Explore the Dataset
Let's start by looking at the dataset, to get a feel for what it contains.
Incidentally, one of you asked recently how to get the schema for a SQLite database when using Python. Here is some code adapted from a few ideas floating around on the web. Let's use these to inspect the tables available in the political blogs dataset.
End of explanation
# Insert your code here
Explanation: Exercise. Write a snippet of code to verify that the vertex IDs are dense in some interval $[1, n]$. That is, there is a minimum value of $1$, some maximum value $n$, and no missing values between $1$ and $n$.
End of explanation
# Insert your code here
Explanation: Exercise. Make sure every edge has its end points in the vertex table.
End of explanation
# Insert your code here:
# Our testing code follows, assuming your `num_solo_vertices` variable:
print ("==> %d vertices have no incident edges." % num_solo_vertices)
assert num_solo_vertices == 266
Explanation: Exercise. Determine which vertices have no incident edges. Store the number of such vertices in a variable, num_solo_vertices.
End of explanation
# Complete this query:
query = '''
CREATE VIEW IF NOT EXISTS Outdegrees AS
...
'''
c = conn.cursor ()
c.execute (query)
from IPython.display import display
query = '''
SELECT Outdegrees.Id, Degree, Url
FROM Outdegrees, Vertices
WHERE Outdegrees.Id = Vertices.Id
ORDER BY -Degree
'''
df_outdegrees = pd.read_sql_query (query, conn)
print "==> A few entries with large out-degrees:"
display (df_outdegrees.head ())
print "\n==> A few entries with small out-degrees:"
display (df_outdegrees.tail ())
Explanation: Exercise. Compute a view called Outdegrees, which contains the following columns:
Id: vertex ID
Degree: the out-degree of this vertex.
To help you test your view, the following snippet includes a second query that selects from your view but adds a Url field and orders the results in descending order of degree. It also prints first few and last few rows of this query, so you can inspect the URLs as a sanity check. (Perhaps it also provides a small bit of entertainment!)
End of explanation
from cse6040utils import sparse_matrix
A_1 = sparse_matrix () # Initially all zeros, with no rows or columns
# Insert your code here
Explanation: Exercise. Query the database to extract a report of which URLs point to which URLs. Also include the source vertex out-degree and order the rows in descending order by it.
Part 2: Implement PageRank
The following exercises will walk you through a possible implementation of PageRank for this dataset.
Exercise. Build a sparse matrix, A_1, that stores $G^TD^{-1}$, where $G^T$ is the transpose of the connectivity matrix $G$, and $D^{-1}$ is the diagonal matrix of inverse out-degrees.
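For reference, the update that the loop further below is meant to implement is the standard PageRank power iteration. One common form (a sketch; the in-class notation may differ slightly, e.g. in how dangling nodes are handled) is:
$$ x(t+1) = \alpha \, A \, x(t) + \frac{1 - \alpha}{n} \mathbf{1}, \qquad A = G^T D^{-1}, $$
which is exactly the combination of a sparse matrix-vector multiply, a scaling by $\alpha$, and a constant offset that the helper routines below (spmv, scale_vector, offset_vector) provide.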
End of explanation
def dense_vector (n, init_val=0.0):
"""
Returns a dense vector of length `n`, with all entries set to
`init_val`.
"""
return [init_val] * n
# Implement this routine:
def spmv (n, A, x):
"""Returns a dense vector y of length n, where y = A*x."""
pass
Explanation: Exercise. Implement a function to multiply a sparse matrix by a dense vector, assuming a dense vector defined as follows.
End of explanation
def scale_vector (x, alpha):
"""Scales the dense vector x by a constant alpha."""
return [x_i*alpha for x_i in x]
def offset_vector (x, c):
Adds the scalar value c to every element of a dense vector x.
return [x_i+c for x_i in x]
ALPHA = 0.85 # Probability of following some link
MAX_ITERS = 25
# Let X[t] store the dense x(t) vector at time t
X = []
x_0 = dense_vector (n, 1.0/n) # Initial distribution: 1/n at each page
X.append (x_0)
for t in range (1, MAX_ITERS):
# Complete this implementation
X.append (...)
Explanation: Exercise. Complete the PageRank implementation for this dataset.
End of explanation
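# (Sketch, not the official solution.)  The missing update above is the damped
# power iteration x(t) = ALPHA * (G^T D^{-1}) x(t-1) + (1 - ALPHA)/n, which can
# be written with the helper routines defined earlier:
def pagerank_step(x_prev):
    return offset_vector(scale_vector(spmv(n, A_1, x_prev), ALPHA), (1.0 - ALPHA) / n)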
query = '''
CREATE VIEW IF NOT EXISTS Indegrees AS
SELECT Target AS Id, COUNT(*) AS Degree
FROM Edges
GROUP BY Target
'''
c = conn.cursor ()
c.execute (query)
# Complete this query:
query = '''
...
'''
df_ranks = pd.read_sql_query (query, conn)
display (df_ranks)
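# (Sketch for the exercise described below; the PageRank table and the 1-based
# ID alignment are assumptions, not part of the original notebook.)
c = conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS PageRank (Id INTEGER, Rank REAL)')
c.executemany('INSERT INTO PageRank (Id, Rank) VALUES (?, ?)',
              [(i + 1, r) for i, r in enumerate(X[-1])])
conn.commit()
ranked_query_sketch = '''
  SELECT V.Id, V.Url, P.Rank, I.Degree AS InDegree, O.Degree AS OutDegree
    FROM Vertices V
    JOIN PageRank P ON V.Id = P.Id
    LEFT JOIN Indegrees I ON V.Id = I.Id
    LEFT JOIN Outdegrees O ON V.Id = O.Id
    ORDER BY P.Rank DESC
'''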
Explanation: Exercise. Check your result by first inserting the final computed PageRank vector back into the database, and then using a SQL query to see the ranked URLs. In your query output, also include both the in-degrees and out-degrees of each vertex.
End of explanation |
3,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<hr>
Script Development - <code>addPosTags.py</code>
Development notebook for script to add tokens and categories to review data.
<hr>
Setup
Step1: <hr>
Development
Add tri-grams
Step2: Add Pos Tags
row
Step3: data frame
Step4: Tri Gram POS Tags
row
Step5: Function
Step6: <hr>
Filter Pos Tags
We are interested in nouns and adjectives. Nouns identify product features and adjectives express customer opinions of those features.
However, we cannot use consecutive adjective/noun or noun/adjective pairs alone. Consider this phrase
Step7: Test on Row
Step8: <hr>
Save | Python Code:
import pyspark as ps
from sentimentAnalysis import dataProcessing as dp
# create spark session
spark = ps.sql.SparkSession(sc)
# get dataframes
# specify s3 as source with s3a://
#df = spark.read.json("s3a://amazon-review-data/user_dedup.json.gz")
#df_meta = spark.read.json("s3a://amazon-review-data/metadata.json.gz")
# get shard
df_raw_data = spark.read.json("s3a://amazon-review-data/reviews_Musical_Instruments_5.json.gz")
# subset asin, reviewText
df_subset = df_raw_data.select("asin", "reviewText")
df_tokens = dp.add_tokens(df_subset)
Explanation: <hr>
Script Development - <code>addPosTags.py</code>
Development notebook for script to add tokens and categories to review data.
<hr>
Setup
End of explanation
from pyspark.ml.feature import NGram
# instantiate ngram object
ngram = NGram(n=3, inputCol="rawTokens", outputCol="triGrams")
# add ngrams
df_triGrams = ngram.transform(df_tokens)
df_triGrams.show(3)
Explanation: <hr>
Development
Add tri-grams
End of explanation
import nltk
# get test row
test_row = df_triGrams.first()
type(test_row["triGrams"])
# test the POS tagger on one row's tokens
nltk.pos_tag(test_row["tokens"])
Explanation: Add Pos Tags
row
End of explanation
from pyspark.sql.types import ArrayType, StringType
# create udf
pos_udf = ps.sql.functions.udf(lambda x: nltk.pos_tag(x), ArrayType(ArrayType(StringType())))
# apply udf, create new column
df_posTag = df_tokens.withColumn("posTags", pos_udf(df_tokens["tokens"]))
df_posTag.show(3)
df_posTag.select("posTags").first()
Explanation: data frame
End of explanation
test_row["triGrams"][:10]
def tag_triGrams(triGrams):
tagged = []
for triGram in triGrams:
tagged.append(nltk.pos_tag(triGram.split()))
return tagged
test_row["triGrams"][0].split()
tag_triGrams(test_row["triGrams"])[:10]
# create udf
pos_triTag_udf = ps.sql.functions.udf(lambda x: tag_triGrams(x), ArrayType(ArrayType(ArrayType(StringType()))))
# apply udf, create new column
df_triPosTags = df_triGrams.withColumn("triPosTags", pos_triTag_udf(df_triGrams["triGrams"]))
df_triPosTags.show(3)
test_row = df_triPosTags.first()
test_row["triPosTags"]
Explanation: Tri Gram POS Tags
row
End of explanation
# import nltk
# from pyspark.sql.types import ArrayType, StringType
def addPosTags(df_tokens):
# create udf
pos_udf = ps.sql.functions.udf(lambda x: nltk.pos_tag(x), ArrayType(ArrayType(StringType())))
# apply udf, create new column
df_posTag = df_tokens.withColumn("posTags", pos_udf(df_tokens["tokens"]))
df_posTag = df_posTag.withColumn("raw_posTags", pos_udf(df_tokens["rawTokens"]))
return df_posTag
# test
df_posTag = addPosTags(df_tokens)
df_posTag.show(3)
Explanation: Function
End of explanation
tag_seqs_re = [('JJ', '^(NN|NS)', '.*'),
('^(RB|RBR|RBS)', 'JJ', '^(?!(NN|NS)).*'),
('JJ', 'JJ', '^(?!(NN|NS)).*'),
('^(NN|NS)', 'JJ', '^(?!(NN|NS)).*'),
('^(RB|RBR|RBS)', '^(VB|VBN|VBD|VBG)', '.*')
]
Explanation: <hr>
Filter Pos Tags
We are interested in nouns and adjectives. Nouns identify product features and adjectives express customer opinions of those features.
However, we cannot use consecutive adjective/noun or noun/adjective pairs alone. Consider this phrase: The chair was not great. If we only extracted the noun chair and the adjective great, the resulting pair chair great does not accurately reflect the sentiment expressed in the sentence. The adverb not negates the positive connotation of great. This scenario illustrates one of a number of ways in which adjective/noun pair meanings are influenced by neighboring words.
We need a set of POS tag sequences that can help identify the patterns we are interested in. Thankfully, such a set exists (Turney, 2002), and we can use it here:
<br><br>
| Word 1 | Word 2 | Word 3 |
|--------------|-------------------|---------------|
| JJ | NN/NS | anything |
| RB/RBR/RBS | JJ | Not NN or NNS |
| JJ | JJ | Not NN or NNS |
| NN/ NNS | JJ | Not NN or NNS |
| RB/ RBR/ RBS | VB/ VBN/ VBD/ VBG | anything |
<br><br>
Citations
```
Turney, Peter D. 2002. Thumbs Up or Thumbs
Down? Semantic Orientation Applied to
Unsupervised, Classification of Reviews.
Proceedings of the 40th Annual Meeting of
the Association for Computational
Linguistics (ACL'02), Philadelphia,
Pennsylvania, USA, July 8-10, 2002. pp
417-424. NRC 44946
Feature-based Customer Review Mining
Jingye Wang Heng Ren
Department of Computer Science
Stanford University
```
<hr>
Identify Tag Sequences
Sequence Regex Patterns
End of explanation
# get python regex
import re
# get test row
test_row = df_posTag.first()
# check triGram tags- want tagged raw tokens (stopwords not removed)
test_row["triPosTags"][:10]
# function to check if a tagged triGram matches a single sequence
def is_match(triPosTag, seq):
# iterate over tags in triPosTag
for i,el in enumerate(triPosTag):
print(el[1]+" match "+seq[i])
# return False if tag does not match sequence
if not re.match(seq[i], el[1]): # re.match(pattern, string): the regex pattern comes first
return False
# returns true if no mismatches found
return True
def match_pos_seq(taggedTriGram):
for el in taggedTriGram:
pass
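# (Hypothetical completion of the stub above, added for illustration.)  Keep
# only the tagged tri-grams whose tag sequence matches one of the Turney
# patterns in tag_seqs_re:
def match_pos_seq_sketch(taggedTriGrams):
    kept = []
    for triPosTag in taggedTriGrams:
        if any(is_match(triPosTag, seq) for seq in tag_seqs_re):
            kept.append(triPosTag)
    return kept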
# get test tag
test_triPosTag = test_row["triPosTags"][0]
# create test match tag
test_triPosTag_match = [["a", "NN"], ["b", "JJ"], ["c", "RR"]]
# test regex match works
tag_seqs_re[3]
re.match(tag_seqs_re[3][0], "NN")
# test is_match()
is_match(test_triPosTag_match, tag_seqs_re[3])
Explanation: Test on Row
End of explanation
#df_obj_only.write.json("s3a://amazon-review-data/review-data")
Explanation: <hr>
Save
End of explanation |
3,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Guided ES Demo
This is a fully self-contained notebook that reproduces the toy example in Fig.1 of the guided evolutionary strategies paper.
The main code is in the 'Algorithms' section below.
Contact
Step5: Helper functions
Antithetic sampler
Creates custom getters for perturbing variables.
These are used to evaluate f(x + epsilon), where epsilon is some perturbation applied to the parameters, x.
This also stores the sampled noise (epsilon) in a dictionary, since we need to reuse the noise for the negative sample, when we want to compute f(x - epsilon). (note
Step6: Noise distributions
We draw perturbations of parameters from either a diagonal covariance (the standard evolutionary strategies algorithm), or from a diagonal plus low rank covariance (guided ES).
Step7: Algorithms
Gradient descent
As a baseline, we will compare against running gradient descent directly on the biased gradients.
Step9: Evolutionary strategies
To compute descent directions using evolutionary strategies, we will use the antithetic sampler defined above.
This will let us perturb model parameters centered on the current iterate.
Step10: Vanilla ES
Vanilla ES is the standard evolutionary strategies algorithm. It uses a diagonal covariance matrix for perturbing parameters.
Step12: Guided ES
Guided ES is our proposed method. It uses a diagonal plus low-rank covariance matrix for drawing perturbations, where the low-rank subspace is spanned by the available gradient information.
Step13: Tasks
Perturbed quadratic
This is a toy problem where we explicitly add bias and variance to the gradient
Step14: Demo
Vanilla ES
First, we minimize the problem using vanilla evolutionary strategies.
Step15: Gradient descent
Our next baseline is gradient descent, applied directly to the biased gradients.
Step16: Guided ES
Finally, we will run the same problem using the guided evolutionary strategies method.
Step17: Plots
Step18: As we see in the plot below, Guided ES combines the benefits of gradient descent (quick initial descent) and vanilla evolutionary strategies (converges on the true solution). | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp
print(f'tensorflow version: {tf.__version__}')
print(f'tensorflow_probability version: {tfp.__version__}')
Explanation: Guided ES Demo
This is a fully self-contained notebook that reproduces the toy example in Fig.1 of the guided evolutionary strategies paper.
The main code is in the 'Algorithms' section below.
Contact: [email protected]
Date: 6/22/18
End of explanation
class AntitheticSampler(object):
def __init__(self, distributions):
Antithetic perturbations.
Generates samples eta, and two custom getters that return
(x + eta) and (x - eta)
for a variable x.
This is used to evaluate a loss at perturbed parameter values, e.g.:
[f(x+eta), f(x-eta)]
# stores the sampled noise
self.perturbations = {}
# store the distributions
self.distributions = distributions
def pos_getter(self, getter, name, *args, **kwargs):
Custom getter for positive perturbation
# get the variable
variable = getter(name, *args, **kwargs)
# check if we have pulled this variable before
if name not in self.perturbations:
# generate a noise sample and store it
self.perturbations[name] = self.distributions[name].sample()
# return the perturbed variable
return variable + tf.reshape(self.perturbations[name], variable.shape)
def neg_getter(self, getter, name, *args, **kwargs):
Custom getter for negative perturbation
# get the variable
variable = getter(name, *args, **kwargs)
# check if we have pulled this variable before
if name not in self.perturbations:
# generate a noise sample and store it
self.perturbations[name] = self.distributions[name].sample() # mirror pos_getter; tfp's sample() takes no shape keyword
# return the perturbed variable
return variable - tf.reshape(self.perturbations[name], variable.shape)
Explanation: Helper functions
Antithetic sampler
Creates custom getters for perturbing variables.
These are used to evaluate f(x + epsilon), where epsilon is some perturbation applied to the parameters, x.
This also stores the sampled noise (epsilon) in a dictionary, since we need to reuse the noise for the negative sample, when we want to compute f(x - epsilon). (note: this is where the name antithetic comes from)
End of explanation
mvn_diag = tfp.distributions.MultivariateNormalDiag
mvn_lowrank = tfp.distributions.MultivariateNormalDiagPlusLowRank
Explanation: Noise distributions
We draw perturbations of parameters from either a diagonal covariance (the standard evolutionary strategies algorithm), or from a diagonal plus low rank covariance (guided ES).
End of explanation
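# A note on the perturbation covariance (inferred from the scale parameters in
# guided_es() further down, not stated verbatim in the text): with U an
# orthonormal basis of the k-dimensional gradient subspace, guided ES draws
# eps ~ N(0, Sigma) where
#     Sigma = sigma^2 * ((alpha / n) * I_n + ((1 - alpha) / k) * U @ U.T),
# which collapses to the isotropic vanilla-ES Gaussian when alpha = 1.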
def gradient_descent(loss_fn, grads_and_vars):
return grads_and_vars
Explanation: Algorithms
Gradient descent
As a baseline, we will compare against running gradient descent directly on the biased gradients.
End of explanation
def evostrat_update(loss_fn, dists, grads_and_vars, beta, sigma):
Function to compute the evolutionary strategies.
See the guided ES paper for details on the method.
Args:
loss_fn: function that builds the graph that computes the loss. loss_fn,
when called, returns a scalar loss tensor.
dists: dict mapping from variable names to distributions for perturbing
those variables.
grads_and_vars: list of (gradient, variable) tuples. The gradient and
variable are tensors of the same shape. The gradient may be biased (it
is not necessarily the gradient of the loss_fn).
beta: float, scale hyperparameter of the guided ES algorithm.
sigma: float, controls the overall std. dev. of the perturbation
distribution.
Returns:
updates_and_vars: a list of (update, variable) tuples containing the
estimated descent direction (update) and variable for each variable to
optimize. (This list will be passed to a tf.train.Optimizer instance).
# build the antithetic sampler
anti = AntitheticSampler(dists)
# evaluate the loss at different parameters
with tf.variable_scope('', custom_getter=anti.pos_getter):
y_pos = loss_fn()
with tf.variable_scope('', custom_getter=anti.neg_getter):
y_neg = loss_fn()
# use these losses to compute the evolutionary strategies update
c = beta / (2 * sigma ** 2)
updates_and_vars = [
(c * tf.reshape(anti.perturbations[v.op.name], v.shape) * (y_pos - y_neg), v)
for _, v in grads_and_vars]
return updates_and_vars
Explanation: Evolutionary strategies
To compute descent directions using evolutionary strategies, we will use the antithetic sampler defined above.
This will let us perturb model parameters centered on the current iterate.
End of explanation
def vanilla_es(loss_fn, grads_and_vars, sigma=0.1, beta=1.0):
def vardist(v):
n = v.shape[0]
scale_diag = (sigma / tf.sqrt(tf.cast(n, tf.float32))) * tf.ones(n)
return mvn_diag(scale_diag=scale_diag)
# build distributions
dists = {v.op.name: vardist(v) for _, v in grads_and_vars}
updates_and_vars = evostrat_update(loss_fn, dists, grads_and_vars, beta, sigma)
return updates_and_vars
Explanation: Vanilla ES
Vanilla ES is the standard evolutionary strategies algorithm. It uses a diagonal covariance matrix for perturbing parameters.
End of explanation
def guided_es(loss_fn, grads_and_vars, sigma=0.1, alpha=0.5, beta=1.0):
def vardist(grad, variable):
Builds the sampling distribution for the given variable.
n = tf.cast(variable.shape[0], tf.float32)
k = 1
a = sigma * tf.sqrt(alpha / n)
c = sigma * tf.sqrt((1-alpha) / k)
b = tf.sqrt(a ** 2 + c ** 2) - a
scale_diag = a * tf.ones(tf.cast(n, tf.int32))
perturb_diag = b * tf.ones(1,)
perturb_factor, _ = tf.qr(grad)
return mvn_lowrank(scale_diag=scale_diag,
scale_perturb_factor=perturb_factor,
scale_perturb_diag=perturb_diag)
dists = {v.op.name: vardist(g, v) for g, v in grads_and_vars}
# antithetic getter
updates_and_vars = evostrat_update(loss_fn, dists, grads_and_vars, beta, sigma)
return updates_and_vars
Explanation: Guided ES
Guided ES is our proposed method. It uses a diagonal plus low-rank covariance matrix for drawing perturbations, where the low-rank subspace is spanned by the available gradient information.
End of explanation
def generate_problem(n, m, seed=None):
rs = np.random.RandomState(seed=seed)
# sample a random problem
A = rs.randn(m, n)
b = rs.randn(m, 1)
grad_bias = rs.randn(n, 1)
return A, b, grad_bias
def perturbed_quadratic(n, m, problem_seed):
tf.reset_default_graph()
# generate problem
A_np, b_np, bias_np = generate_problem(n, m, seed=problem_seed)
A = tf.convert_to_tensor(A_np, dtype=tf.float32)
b = tf.convert_to_tensor(b_np, dtype=tf.float32)
# sample gradient bias and noise
grad_bias = 1.0 * tf.nn.l2_normalize(tf.convert_to_tensor(bias_np, dtype=tf.float32))
grad_noise = 1.5 * tf.nn.l2_normalize(tf.random_normal(shape=(n, 1)))
# compute loss
def loss_fn():
with tf.variable_scope('perturbed_quadratic', reuse=tf.AUTO_REUSE):
x = tf.get_variable('x', shape=(n, 1), initializer=tf.zeros_initializer)
resid = tf.matmul(A, x) - b
return 0.5*tf.norm(resid)**2 / float(m)
# compute perturbed gradient
with tf.variable_scope('perturbed_quadratic', reuse=tf.AUTO_REUSE):
x = tf.get_variable('x', shape=(n, 1), initializer=tf.zeros_initializer)
err = tf.matmul(tf.transpose(A), tf.matmul(A, x) - b) / float(m)
grad = err + (grad_bias + grad_noise) * tf.norm(err)
grads_and_vars = [(grad, x)]
return loss_fn, grads_and_vars
Explanation: Tasks
Perturbed quadratic
This is a toy problem where we explicitly add bias and variance to the gradient
End of explanation
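# For reference, the toy problem assembled by perturbed_quadratic() above is
#     f(x) = ||A x - b||^2 / (2 m),
# and the surrogate "gradient" handed to every method is the true gradient plus
# a fixed bias direction and fresh noise, both scaled by the gradient norm:
#     g(x) = grad f(x) + (bias + noise) * ||grad f(x)||.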
tf.reset_default_graph()
loss_fn, gav = perturbed_quadratic(1000, 2000, 2)
updates = vanilla_es(loss_fn, gav, sigma=0.1, beta=1.0)
opt = tf.train.GradientDescentOptimizer(0.2)
train_op = opt.apply_gradients(updates)
loss = loss_fn()
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# train
fobj = []
for k in range(10000):
f, _ = sess.run([loss, train_op])
fobj.append(f)
# store results for plotting
ves = np.array(fobj).copy()
sess.close()
Explanation: Demo
Vanilla ES
First, we minimize the problem using vanilla evolutionary strategies.
End of explanation
tf.reset_default_graph()
loss_fn, gav = perturbed_quadratic(1000, 2000, 2)
updates = gradient_descent(loss_fn, gav)
opt = tf.train.GradientDescentOptimizer(5e-3)
train_op = opt.apply_gradients(updates)
loss = loss_fn()
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# train
fobj = []
for k in range(10000):
f, _ = sess.run([loss, train_op])
fobj.append(f)
# store results for plotting
gd = np.array(fobj).copy()
sess.close()
Explanation: Gradient descent
Our next baseline is gradient descent, applied directly to the biased gradients.
End of explanation
tf.reset_default_graph()
loss_fn, gav = perturbed_quadratic(1000, 2000, 2)
updates = guided_es(loss_fn, gav, sigma=0.1, alpha=0.5, beta=2.0)
opt = tf.train.GradientDescentOptimizer(0.2)
train_op = opt.apply_gradients(updates)
loss = loss_fn()
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# train
fobj = []
for k in range(10000):
f, _ = sess.run([loss, train_op])
fobj.append(f)
# store results for plotting
ges = np.array(fobj).copy()
sess.close()
Explanation: Guided ES
Finally, we will run the same problem using the guided evolutionary strategies method.
End of explanation
A, b, _ = generate_problem(1000, 2000, seed=2)
xstar = np.linalg.lstsq(A, b, rcond=None)[0]
f_opt = (0.5/2000) * np.linalg.norm(np.dot(A, xstar) - b) ** 2
Explanation: Plots
End of explanation
COLORS = {'ges': '#7570b3', 'ves': '#1b9e77', 'sgdm': '#d95f02'}
plt.figure(figsize=(8, 6))
plt.plot(ves - f_opt, color=COLORS['ves'], label='Vanilla ES')
plt.plot(gd - f_opt, color=COLORS['sgdm'], label='Grad. Descent')
plt.plot(ges - f_opt, color=COLORS['ges'], label='Guided ES')
plt.legend(fontsize=16, loc=0)
plt.xlabel('Iteration', fontsize=16)
plt.ylabel('Loss', fontsize=16)
plt.title('Demo of Guided Evolutionary Strategies', fontsize=16);
Explanation: As we see in the plot below, Guided ES combines the benefits of gradient descent (quick initial descent) and vanilla evolutionary strategies (converges on the true solution).
End of explanation |
3,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook-11
Step1: Layout of a Function
As we briefly mentioned in another notebook, any 'word' followed by a set of parentheses is a function. The 'word' is the function's name, and anything that you write within the parentheses are the function's inputs (also known as parameters). Like so
Step2: Notice that the sequence of function definiton (def) and then function call (function_name()) is important! Think about it
Step3: Reading (out loud!) the error message hopefully makes the error obvious... Quite explicit, isn't it?
Step4: A challenge for you!
Define a new function called "sunnyDay" that prints the string "What a lovely day!"
Step5: Now define a function named "gloomyDay" that prints "I hate rainy days!"
Step6: Finally, call the two functions you have defined so that "I hate rainy days!" is printed before "What a lovely day!"
Step7: Arguments
Those are pretty basic functions, and as you might have noticed they all kind of do the same thing but are no shorter than the thing they replaced (a single print command). You will definitely need them though whenever you are using a function to process some input and return some output. In that case the parameters are inputs that you are passing to the function.
```python
def myFunction( input_parameter )
Step8: A challenge for you!
We've already defined printMyName, so you don't need to do that again. Just ask the function to print your name!
Step9: A little more useful, right? If we had to print out name badges for a big conference, rather than typing "Hi! My name is ..." hundreds of times, if we had a list of people's names, we could just use a for loop to print out each one in turn using this function. The function adds the part that is the same for every name badge and all we need to do is pass it the input parameters. In fact, why don't we try that now?
Step10: In the function printMyName we used just one parameter as an input, but we are not constrained to just one. We can input many parameters separated by commas; let's redefine the printMyName function
Step11: And now we can pass input parameters to a function dynamically from a data structure within a loop
Step12: Neat right? We've simplified things so that we can focus only on what's important
Step13: There's actually another way to do this that is quite helpful because it's easier to read
Step14: Scoping
Now I'd like you to focus on a particuarly important concept
Step15: Notice how the ErrorMessage is the same as before when we tried to print a variable that wasn't defined yet? It's the same concept
Step16: Default Parameters
Let's say that your name badge printing function is a worldwide hit, and while most conferences take place in English, in some cases they might need to say 'Hello' in different languages. In this case, we might like to have a parameter with a default value ("Hi") but allow the programmer to override that with a different value (e.g. "Bonjour").
Here's how that works
Step17: So we only have to provide a value for a parameter with a default setting if we want to change it for some reason.
Return statement
Up to here we've only had a function that printed out whatever we told it to. Of course, that's pretty limited and there are a lot of cases where we would want the function to do something and then come back to us with an answer! And remember that the problem of variable scoping means that variables declared inside a function aren't visible to the rest of the program.
So if you want to access a value calculated inside a function then you have to explicitly return it using the reserved keyword return
Step18: Assigning to a Variable
The return keyword, somewhat obviously, returns whatever you tell it to so that that 'thing' become accessible outside of the function's scope. You can do whatever you want with the returned value, like assign it to a new variable
Step19: One important thing to remember is that return always marks the end of the list of instructions in a function. So whatever code is written below return and yet still indented in the function scope won't be executed
Step21: 5 is the last value printed because a return statement ends the execution of the function, regardless of whether a result (i.e. a value following the return keyword on the same line) is returned to the caller.
Now that you have seen a bit more what is happening in a function, we can combine some concepts that we have seen in previous notebooks to produce interesting bits of code. Take a look at how I've combined the range function, and the for in loop to print only the odd numbers for a given range.
Step22: Let's take a closer look at what's happening above...
python
def oddNumbers(inputRange)
Step24: A Challenge for you!
Now modify the oddNumbers function so that it also prints "Yuck, an even number!" for every even number...
Step25: Functions as Parameters of Other Functions
This leads us to another intersting idea
Step26: Code (Applied Geo-example)
For the last Geo-Example, let's revisit a couple of old exercises, combining them and making them a bit more sophisticated with the help of our newly acquired concept of functions.
First, let's define some variables to contain data that we will then use with the functions.
Step27: Now, fix the code in the next cell to use the variables defined in the last cell. The calcProportion function should return the proportion of the population that the boro borough composes of London. The getLocation function should return the coordinates of the boro borough.
Step28: Write some code to print the longitude of Lambeth. This could be done in a single line but don't stress if you need to use more lines...
Step29: Write some code to print the proportion of the London population that lives in the City of London. Using the function defined above, this should take only one line of code.
Step30: Write code to loop over the london_boroughs dictionary, use the calcProportion and getLocation functions to then print proportions and locations of all the boroughs. | Python Code:
myList = [1,"two", False, 9.99]
len(myList) # A function
print(myList) # A different function!
Explanation: Notebook-11: Introduction to Functions
Lesson Content
Function Anatomy 101
Function definiton & call
Arguments
Return statement
Function calling!
Assign a function to a variable
Function as a parameter to another function
In this lesson we'll cover functions in Python, a concept that you've already encountered but to which you've not yet been formally introduced. Now we're going to dig into this a little bit more because writing functions is where lazy programmers become good programmers.
In other words, as we saw with the concept of iteration, programmers are lazy and they tend want to avoid doing boring tasks over and over again. The idea is to avoid "wasting time re-inventing the wheel" and programmers have abbreviated this idea to the acronym D.R.Y. (Do not Repeat Yourself): if you are doing something more than once or twice, ask yourself if there's a way to encapsulate what you are doing in a function: you write the function once, and then call it whenever you need to complete that task.
Naturally, D.R.Y. has its opposite: W.E.T. (We Enjoy Typing or Write Everything Twice). Dry is nearly always better than wet.
Encapsulating regularly-used bits of code in functions has several advantages:
* Your code is more readable: because you only have to write a function once and can then re-use it as many times as you like, your files are shorter.
* Your code is easier to maintain: because you only have to write a function once, if you find a mistake in your code, you also only have to fix it in one place.
* You can code more quickly: things that you do a lot can even be stuck in a separate file that you import into your code so that your most-used functions are immediately available.
Basically, a function is a way to do something to something in a portable, easy-to-use little bundle of code.
Functions 101
We've already met and used some functions, especially when we dealt with lists and dictionaries:
End of explanation
# the function definition
def myFirstFunc():
print("Nice to meet you!")
# the function call
myFirstFunc()
Explanation: Layout of a Function
As we briefly mentioned in another notebook, any 'word' followed by a set of parentheses is a function. The 'word' is the function's name, and anything that you write within the parentheses are the function's inputs (also known as parameters). Like so:
```python
function_name(optional_parameter_1, optional_parameter_2, ...)
```
So how do we create (instantiate in programming terms) a new function? Like everything else in Python, functions have specific rules that you have to follow for the computer to understand what you want it to do. In this case there are two separate steps: the function definition and the function call.
Function Definition
This is a function definition:
python
def myFirstFunc():
print("Nice to meet you!")
Let's see what happened there:
- We indicated that we wanted to define (lazy version: def) a new function.
- Right after def we gave the function a name: myFirstFunc.
- After the new function's name there's the set of parenthesis and a colon.
- The line(s) of the function are indented (just like a loop).
The reason for the indenting is the same as for a while loop or an if condition! It indicates to the Python interpreter that whatever is indented belongs to the function. Is like saying: "Look man, I'm going to define this myFirstFunc function, and whatever is indented afterwards is part of the function". That is what we call the function's body, and it's the full package of instructions that we want the computer to run every time we call the function.
Function Call
Cool, now that we have defined a function how do we use it?
The same that we do with 'built-in' functions like print and len; we call it by just typing:
python
myFirstFunc()
Try it yourself in the code cell below!
End of explanation
print(myVariable)
myVariable = "Hallo Hallo!"
Explanation: Notice that the sequence of function definition (def) and then function call (function_name()) is important! Think about it: how would Python know what we are referring to (i.e. what is the myFirstFunc it has to call?), if we haven't yet specified it?
It's the same as with variables: try to print one before you've defined it and Python will complain!
End of explanation
myVariable = "Hallo Hallo!"
print(myVariable)
Explanation: Reading (out loud!) the error message hopefully makes the error obvious... Quite explicit, isn't it? :)
End of explanation
#your code here
def sunnyDay():
print("What a lovely day!")
Explanation: A challenge for you!
Define a new function called "sunnyDay" that prints the string "What a lovely day!"
End of explanation
#your code here
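# One possible completion of this challenge (the calls two cells below expect
# gloomyDay to exist):
def gloomyDay():
    print("I hate rainy days!")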
Explanation: Now define a function named "gloomyDay" that prints "I hate rainy days!"
End of explanation
#your code here
gloomyDay()
sunnyDay()
Explanation: Finally, call the two functions you have defined so that "I hate rainy days!" is printed before "What a lovely day!"
End of explanation
def printMyName( name ):
print("Hi! My name is: " + name)
printMyName("Gerardus")
Explanation: Arguments
Those are pretty basic functions, and as you might have noticed they all kind of do the same thing but are no shorter than the thing they replaced (a single print command). You will definitely need them though whenever you are using a function to process some input and return some output. In that case the parameters are inputs that you are passing to the function.
```python
def myFunction( input_parameter ):
do something to the input
return input_parameter
```
End of explanation
#your code here
printMyName("James")
Explanation: A challenge for you!
We've already defined printMyName, so you don't need to do that again. Just ask the function to print your name!
End of explanation
for name in ["Jon Reades", "James Millington", "Chen Zhong", "Naru Shiode"]:
printMyName(name)
Explanation: A little more useful, right? If we had to print out name badges for a big conference, rather than typing "Hi! My name is ..." hundreds of times, if we had a list of people's names, we could just use a for loop to print out each one in turn using this function. The function adds the part that is the same for every name badge and all we need to do is pass it the input parameters. In fact, why don't we try that now?
End of explanation
def printMyName(name, surname):
print("Hi! My name is "+ name + " " + surname)
printMyName("Gerardus", "Merkatoor")
Explanation: In the function printMyName we used just one parameter as an input, but we are not constrained to just one. We can input many parameters separated by commas; let's redefine the printMyName function:
End of explanation
britishProgrammers = [
["Babbage", "Charles"],
["Lovelace", "Ada"],
["Turing", "Alan"],
]
for p in britishProgrammers:
printMyName(p[1], p[0])
Explanation: And now we can pass input parameters to a function dynamically from a data structure within a loop:
End of explanation
#your code here
def printMyAge(name, age):
print(name + " is " + str(age) + " years old.")
printMyAge('Jon',25)
Explanation: Neat right? We've simplified things so that we can focus only on what's important: we have our 'data structure' (the list-of-lists) and we have our printing function (printMyName). And now we just use a for loop to do the hard work. If we had 1,000 British programmers to print out it would be the same level of effort.
See what we mean about it being like Lego? We've combined a new concept with a concept covered in the last notebook to simplify the process of printing out nametags.
A challenge for you!
Define and use a function that takes as input parameters a <name> (String) and <age> (Integer) and then prints out the phrase: <name> + "is" + <age> +" years old"
End of explanation
def printMyAge(name, age):
print(f"{name} is {age} years old.") # This is called a 'f-string' and we use {...} to add variables
printMyAge('Jon',25)
Explanation: There's actually another way to do this that is quite helpful because it's easier to read:
End of explanation
def whoAmI(myname, mysurname):
if not myname:
myname = 'Charles'
if not mysurname:
mysurname = 'Babbage'
print("Hi! My name is "+ myname + " " + mysurname + "!")
print(myname) # myname _only_ exists 'inside' the function definition
Explanation: Scoping
Now I'd like you to focus on a particularly important concept: something called 'scoping'. Notice that the names we are using for the parameters are de facto creating new variables that we then use in the function body (the indented block of code). In the example below, 'name' and 'surname' are scoped to the body of the function. Outside of that block (outside of that scope) they don't exist!
Here's the proof:
End of explanation
whoAmI('Ada','Lovelace')
Explanation: Notice how the ErrorMessage is the same as before when we tried to print a variable that wasn't defined yet? It's the same concept: the variables defined as parameters exist only in the indented code block of the function (the function scope ).
But notice too that if you replace print name with whoAmI("Ada", "Lovelace") then the error disappears and you will see the output: "Hi! My name is Ada Lovelace." So to reiterate: parameters to a function exist as variables only within the function scope.
End of explanation
def printInternational(name, surname, greeting="Hi"):
print(greeting + "! My name is "+ name + " " + surname)
printInternational("Ada", "Lovelace")
printInternational("Charles", "Babbage")
printInternational("Laurent", "Ribardière", "Bonjour")
printInternational("François", "Lionet", "Bonjour")
printInternational("Alan", "Turing")
printInternational("Harsha","Suryanarayana", "Namaste")
Explanation: Default Parameters
Let's say that your name badge printing function is a worldwide hit, and while most conferences take place in English, in some cases they might need to say 'Hello' in different languages. In this case, we might like to have a parameter with a default value ("Hi") but allow the programmer to override that with a different value (e.g. "Bonjour").
Here's how that works:
End of explanation
def sumOf(firstQuantity, secondQuantity):
return firstQuantity + secondQuantity
print(sumOf(1,2))
print(sumOf(109845309234.30945098345,223098450985698054902309342.43598723900923489))
Explanation: So we only have to provide a value for a parameter with a default setting if we want to change it for some reason.
Return statement
Up to here we've only had a function that printed out whatever we told it to. Of course, that's pretty limited and there are a lot of cases where we would want the function to do something and then come back to us with an answer! And remember that the problem of variable scoping means that variables declared inside a function aren't visible to the rest of the program.
So if you want to access a value calculated inside a function then you have to explicitly return it using the reserved keyword return:
End of explanation
returnedValue = sumOf(4, 3)
# Notice that the f-string converts the int to a string for us!
print(f"This is the returned value: {returnedValue}")
Explanation: Assigning to a Variable
The return keyword, somewhat obviously, returns whatever you tell it to so that that 'thing' become accessible outside of the function's scope. You can do whatever you want with the returned value, like assign it to a new variable:
End of explanation
def printNumbers():
print(2)
print(5)
return
print(9999)
print(800000)
printNumbers()
Explanation: One important thing to remember is that return always marks the end of the list of instructions in a function. So whatever code is written below return and yet still indented in the function scope won't be executed:
python
def genericFunc(parameter):
# do something to parameter
# ...
# do something else..
# ...
return
print("this line won't be ever executed! how sad!")
print("nope. this won't either, sorry.")
A challenge for you!
Guess which will be the highest number to be printed from this function (think about your guess before you execute the code):
End of explanation
def oddNumbers(inputRange):
A function that prints only the odd numbers for a given range from 0 to inputRange.
inputRange - an integer representing the maximum of the range
for i in range(inputRange):
if i%2 != 0:
print(i)
oddNumbers(10)
print("And...")
oddNumbers(15)
help(oddNumbers)
Explanation: 5 is the last value printed because a return statement ends the execution of the function, regardless of whether a result (i.e. a value following the return keyword on the same line) is returned to the caller.
Now that you have seen a bit more what is happening in a function, we can combine some concepts that we have seen in previous notebooks to produce interesting bits of code. Take a look at how I've combined the range function, and the for in loop to print only the odd numbers for a given range.
End of explanation
help(len)
myList = [1,2,3]
help(myList.append)
Explanation: Let's take a closer look at what's happening above...
python
def oddNumbers(inputRange):
A function that prints only the odd numbers for a given range from 0 to inputRange.
inputRange - an integer representing the maximum of the range
for i in range(inputRange):
if i%2 != 0:
print(i)
This defines a new function called oddNumbers which takes one parameter – it's not immediately clear what type of variable inputRange is, but we can guess it pretty quickly from what happens next.
You'll notice that there are some lines immediately after the function definition (between the triple-quotes) that aren't printed or obviously used, but that look like documentation of some sort. We'll come back to that in a minute.
The next line is a simple for loop: for i in range(inputRange). The range function generates a list of numbers from 0 to the input parameter passed to it. So we are going to be running a loop from 0 to n (where n=inputRange) and assigning the result of that to i.
The next line is nested inside the for loop: so we take each i in turn and perform the modulo calculation on it: if i%2 is 0 then i is divisible by 2. It's even. If it's not equal to 0 then it's not an even number, and in that case we'll print it out.
Which is exactly what happens with:
python
oddNumbers(10)
oddNumbers(15)
The last line is something new:
python
help(oddNumbers)
If you look at the output of this, you'll see that it prints out the content we wrote into the triple-quotes in the function definition. So if you want to give your function some documentation that others can access, this is how you do it. In fact, this is how every function in Python should be documented.
Try these (and others) in the empty code block below:
python
help(len)
help(str)
myList = [1,2,3]
help(myList.append)
End of explanation
#your code here
def oddNumbers(inputRange):
for i in range(inputRange):
if i%2 != 0:
print(i)
else:
print("Yuck, an even number!")
oddNumbers(8)
Explanation: A Challenge for you!
Now modify the oddNumbers function so that it also prints "Yuck, an even number!" for every even number...
End of explanation
def addTwo(param1):
return param1 + 2
def multiplyByThree(param1): # Note: this is a *separate* variable from the param1 in addTwo() because of scoping!
return param1 * 3
# you can use multiplyByThree
# with a regular argument as input
print(multiplyByThree(2))
# but also with a function as input
print(multiplyByThree(addTwo(2)))
# And then
print(addTwo(multiplyByThree(2)))
Explanation: Functions as Parameters of Other Functions
This leads us to another interesting idea: since moving around functions is so easy, what happens when we use them as inputs to other functions?
End of explanation
# London's total population
london_pop = 7375000
# list with some of London's borough. Feel free to add more!
london_boroughs = {
"City of London": {
"population": 8072,
"coordinates" : [-0.0933, 51.5151]
},
"Camden": {
"population": 220338,
"coordinates" : [-0.2252,1.5424]
},
"Hackney": {
"population": 220338,
"coordinates" : [-0.0709, 51.5432]
},
"Lambeth": {
"population": 303086,
"coordinates" : [-0.1172,51.5013]
}
}
Explanation: Code (Applied Geo-example)
For the last Geo-Example, let's revisit a couple of old exercises, combining them and making them a bit more sophisticated with the help of our newly acquired concept of functions.
First, let's define some variables to contain data that we will then use with the functions.
End of explanation
def calcProportion(boro,city_pop=???):
return ???['population']/???
def getLocation(???):
return boro[???]
#in this function definition we provide a default value for city_pop
#this makes sense here because we are only dealing with london
def calcProportion(boro,city_pop=7375000):
return boro['population']/city_pop
def getLocation(boro):
return boro['coordinates'] #returns the value for the `coordinates` key from the value for the `Lambeth` key
Explanation: Now, fix the code in the next cell to use the variables defined in the last cell. The calcProportion function should return the proportion of the population that the boro borough composes of London. The getLocation function should return the coordinates of the boro borough.
End of explanation
#one-liner (see if you can understand how it works)
print(getLocation(london_boroughs['Lambeth'])[0])
# A longer but possibly more user-friendly way:
coord = getLocation(london_boroughs['Lambeth'])
long = coord[0]
print(long)
Explanation: Write some code to print the longitude of Lambeth. This could be done in a single line but don't stress if you need to use more lines...
End of explanation
print(calcProportion(london_boroughs['City of London']))
Explanation: Write some code to print the proportion of the London population that lives in the City of London. Using the function defined above, this should take only one line of code.
End of explanation
for boro, data in london_boroughs.items():
prop = calcProportion(data)
location = getLocation(data)
print(prop)
print(location)
print("")
#to print more nicely you could use string formatting:
#print("Proportion is {0:3.3f}%".format(prop*100))
#print("Location of " + boro + " is " + str(location))
Explanation: Write code to loop over the london_boroughs dictionary, use the calcProportion and getLocation functions to then print proportions and locations of all the boroughs.
End of explanation |
3,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This Notebook will help you to identify anomalies in your historical timeseries data (IoT data) in simple steps. It also derives the threshold value for your historical data. This threshold value can be used to set rules in Watson IoT Platform, such that you get an alert when your IoT device reports an abnormal reading in the future.
Accepted file format
Note that, this Notebook accepts the CSV file in one of the following file formats
Step1: Enter the following command in the next cell to look at the first record and click Run
Step2: Enter the following command in the next cell to get the number of rows in the CSV file (DataFrame) and click Run,
Step3: Create Pandas DataFrame
Enter the following commands in the next cell to create a Pandas DataFrame from the Spark SQL DataFrame and click Run. This line prints the schema of the newly created Pandas DataFrame which will be same as the Spark SQL DataFrame.
The Python Data Analysis Library (a.k.a. pandas) provides high-performance, easy-to-use data structures and data analysis tools that are designed to make working with “relational” or “labeled” data both easy and intuitive. Also, plotting is very easy with Pandas DataFrame.
Step4: Enter the following commands in the next cell to set timestamp as the index if its present and click Run,
Step5: Calculate z-score
We detect the anomaly events using z-score, aka, a standard score indicating how many standard deviations an element is from the mean.
Enter the following commands to calculate z-score for each of the values and add it as a new column in the same DataFrame,
Step6: Plot Anomalies
When we work in notebooks, we can decide how to present our analysis results and derived information. So far, we have used normal print functions, which are informative. However, we can also show the results in a visual way by using the popular matplotlib package to create plots.
Enter the following snippet of the code in the next cell to view the anomaly events in your data and click Run. Observe that the values for which the z-score is above 3 or below -3 are marked as abnormal events in the graph shown below.
Step7: As shown, the red marks are the unexpected spikes and dips whose z-score value is greater than 3 or less than -3. In case you want to detect the lower spikes, modify the value to 2 or even lower and run. Similarly, if you want to detect only the higher spikes, try increasing the z-score value from 3 to 4 and beyond.
Derive thresholds
Enter the following command into the next cell to derive the Spike threshold value corresponding to z-score value 3 and click Run.
Step8: Similarly, Enter the following command into the next cell to derive the dip threshold value corresponding to z-score value -3. | Python Code:
from pyspark.sql import SQLContext
# adding the PySpark module to SparkContext
sc.addPyFile("https://raw.githubusercontent.com/seahboonsiew/pyspark-csv/master/pyspark_csv.py")
import pyspark_csv as pycsv
# you may need to modify this line if the filename or path is different.
sqlContext = SQLContext(sc)
data = sc.textFile("/resources/sample-data.csv")
def skip_header(idx, iterator):
if (idx == 0):
next(iterator)
return iterator
body = data.mapPartitionsWithIndex(skip_header)
header = data.first()
header_list = header.split(",")
# create Spark DataFrame using pyspark-csv
data_df = pycsv.csvToDataFrame(sqlContext, body, sep=",", columns=header_list)
data_df.cache()
data_df.printSchema()
Explanation: Introduction
This Notebook will help you to identify anomalies in your historical timeseries data (IoT data) in simple steps. It also derives the threshold value for your historical data. This threshold value can be used to set rules in Watson IoT Platform, such that you get an alert when your IoT device reports an abnormal reading in the future.
Accepted file format
Note that, this Notebook accepts the CSV file in one of the following file formats:
2 column format: <Date and time in DD/MM/YYYY or MM/DD/YYYY format, Numeric value>
1 column format: <Numeric value>
Sample data
In case if you don’t have any file, try downloading the sample file from this link. The sample file contains a temperature data updated for ever 15 minutes. Also, the sample data contains spikes to demonstrate the danger situation.
Load data
Drag and drop your CSV file into this Notebook. Once the file is uploaded successfully, you can see the file in the Recent Data section. Also, expand the file name and click on Insert Path link to get the location of the file. It must be like, /resources/file-name.
The next step is to create the SQL DataFrame from the CSV file. Instead of specifying the schema for a Spark DataFrame programmatically, you can use the pyspark-csv module. It is an external PySpark module and works like the pandas read_csv function.
Enter the following lines of code into your Notebook to create Spark SQL DataFrame from the given CSV file. Modify the path of the file if its different and click Run. And observe that it prints the schema.
End of explanation
# retrieve the first row
data_df.take(1)
Explanation: Enter the following command in the next cell to look at the first record and click Run
End of explanation
# retrieve the number of rows
data_df.count()
Explanation: Enter the following command in the next cell to get the number of rows in the CSV file (DataFrame) and click Run,
End of explanation
# create a pandas dataframe from the SQL dataframe
import pprint
import pandas as pd
pandaDF = data_df.toPandas()
#Fill NA/NaN values to 0
pandaDF.fillna(0, inplace=True)
pandaDF.columns
Explanation: Create Pandas DataFrame
Enter the following commands in the next cell to create a Pandas DataFrame from the Spark SQL DataFrame and click Run. This line prints the schema of the newly created Pandas DataFrame which will be same as the Spark SQL DataFrame.
The Python Data Analysis Library (a.k.a. pandas) provides high-performance, easy-to-use data structures and data analysis tools that are designed to make working with “relational” or “labeled” data both easy and intuitive. Also, plotting is very easy with Pandas DataFrame.
End of explanation
# change index to time if its present
valueHeaderName = 'value'
timeHeaderName = 'null'
if (len(header_list) == 2):
timeHeaderName = header_list[0]
valueHeaderName = header_list[1]
else:
valueHeaderName = header_list[0]
# Drop the timestamp column as the index is replaced with timestamp now
if (len(header_list) == 2):
pandaDF.index = pandaDF[timeHeaderName]
pandaDF = pandaDF.drop([timeHeaderName], axis=1)
# Also, sort the index with the timestamp
pandaDF.sort_index(inplace=True)
pandaDF.head(n=5)
Explanation: Enter the following commands in the next cell to set timestamp as the index if its present and click Run,
End of explanation
# calculate z-score and populate a new column
pandaDF['zscore'] = (pandaDF[valueHeaderName] - pandaDF[valueHeaderName].mean())/pandaDF[valueHeaderName].std(ddof=0)
pandaDF.head(n=5)
Explanation: Calculate z-score
We detect the anomaly events using z-score, aka, a standard score indicating how many standard deviations an element is from the mean.
Enter the following commands to calculate z-score for each of the values and add it as a new column in the same DataFrame,
End of explanation
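# Quick recap of the arithmetic used below (restating the code, nothing new):
#     z_i = (x_i - mean(x)) / std(x)
# A reading is flagged when |z_i| >= 3, and inverting the formula gives the
# raw-value thresholds derived at the end: mean(x) +/- 3 * std(x).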
# ignore warnings if any
import warnings
warnings.filterwarnings('ignore')
# render the results as inline charts:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
'''
This function detects the spike and dip by returning a non-zero value
when the z-score is above 3 (spike) or below -3 (dip). In case you
want to capture the smaller spikes and dips, lower the zscore value from
3 to 2 in this function.
'''
def spike(row):
if(row['zscore'] >=3 or row['zscore'] <=-3):
return row[valueHeaderName]
else:
return 0
pandaDF['spike'] = pandaDF.apply(spike, axis=1)
# select rows that are required for plotting
plotDF = pandaDF[[valueHeaderName,'spike']]
#calculate the y minimum value
y_min = (pandaDF[valueHeaderName].max() - pandaDF[valueHeaderName].min()) / 10
fig, ax = plt.subplots(num=None, figsize=(14, 6), dpi=80, facecolor='w', edgecolor='k')
ax.set_ylim(plotDF[valueHeaderName].min() - y_min, plotDF[valueHeaderName].max() + y_min)
x_filt = plotDF.index[plotDF.spike != 0]
plotDF['xyvaluexy'] = plotDF[valueHeaderName]
y_filt = plotDF.xyvaluexy[plotDF.spike != 0]
#Plot the raw data in blue colour
line1 = ax.plot(plotDF.index, plotDF[valueHeaderName], '-', color='blue', animated = True, linewidth=1)
#plot the anomalies in red circle
line2 = ax.plot(x_filt, y_filt, 'ro', color='red', linewidth=2, animated = True)
#Fill the raw area
ax.fill_between(plotDF.index, (pandaDF[valueHeaderName].min() - y_min), plotDF[valueHeaderName], interpolate=True, color='blue',alpha=0.6)
# Label the axis
ax.set_xlabel("Sequence",fontsize=20)
ax.set_ylabel(valueHeaderName,fontsize=20)
plt.tight_layout()
plt.legend()
plt.show()
Explanation: Plot Anomalies
When we work in notebooks, we can decide how to present our analysis results and derived information. So far, we have used normal print functions, which are informative. However, we can also show the results in a visual way by using the popular matplotlib package to create plots.
Enter the following snippet of the code in the next cell to view the anomaly events in your data and click Run. Observe that the values for which the z-score is above 3 or below -3 are marked as abnormal events in the graph shown below.
End of explanation
# calculate the value that is corresponding to z-score 3
(pandaDF[valueHeaderName].std(ddof=0) * 3) + pandaDF[valueHeaderName].mean()
Explanation: As shown, the red marks are the unexpected spikes and dips whose z-score value is greater than 3 or less than -3. In case you want to detect the lower spikes, modify the value to 2 or even lower and run. Similarly, if you want to detect only the higher spikes, try increasing the z-score value from 3 to 4 and beyond.
Derive thresholds
Enter the following command into the next cell to derive the Spike threshold value corresponding to z-score value 3 and click Run.
End of explanation
# calculate the value that is corresponding to z-score -3
(pandaDF[valueHeaderName].std(ddof=0) * -3) + pandaDF[valueHeaderName].mean()
Explanation: Similarly, Enter the following command into the next cell to derive the dip threshold value corresponding to z-score value -3.
End of explanation |
3,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
Step5: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise
Step7: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output | Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
Explanation: Convolutional Neural Networks: Application
Welcome to Course 4's second assignment! In this notebook, you will:
Implement helper functions that you will use when implementing a TensorFlow model
Implement a fully functioning ConvNet using TensorFlow
After this assignment you will be able to:
Build and train a ConvNet in TensorFlow for a classification problem
We assume here that you are already familiar with TensorFlow. If you are not, please refer the TensorFlow Tutorial of the third week of Course 2 ("Improving deep neural networks").
1.0 - TensorFlow model
In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.
As usual, we will start by loading in the packages.
End of explanation
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Run the next cell to load the "SIGNS" dataset you are going to use.
End of explanation
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width:800px;height:300px;">
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of index below and re-run to see different examples.
End of explanation
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
Explanation: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, [None, n_H0, n_W0, n_C0], name = 'X')
Y = tf.placeholder(tf.float32, [None, n_y], name = 'Y')
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
Explanation: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension [None, n_H0, n_W0, n_C0] and Y should be of dimension [None, n_y]. Hint.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1', [4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0))
W2 = tf.get_variable('W2', [2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
Explanation: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>
1.2 - Initialize parameters
You will initialize weights/filters $W1$ and $W2$ using tf.contrib.layers.xavier_initializer(seed = 0). You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.
Exercise: Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
More Info.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides = [1, 1, 1, 1],padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, [1, 8, 8, 1], [1, 8, 8, 1], 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides = [1, 1, 1, 1],padding = 'SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, [1, 4, 4, 1], [1, 4, 4, 1], 'SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without non-linear activation function (do not call softmax here).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn = None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
Explanation: Expected Output:
<table>
<tr>
<td>
W1 =
</td>
<td>
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br>
-0.06847463 0.05245192]
</td>
</tr>
<tr>
<td>
W2 =
</td>
<td>
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br>
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br>
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
</td>
</tr>
</table>
1.2 - Forward propagation
In TensorFlow, there are built-in functions that carry out the convolution steps for you.
tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'): given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation here
tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'): given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation here
tf.nn.relu(Z1): computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation here.
tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1D vector while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation here.
tf.contrib.layers.fully_connected(F, num_outputs): given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation here.
In the last function above (tf.contrib.layers.fully_connected), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
Exercise:
Implement the forward_propagation function below to build the following model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED. You should use the functions above.
In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
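Before coding it up, it can help to keep the shape bookkeeping in mind (a back-of-the-envelope sketch for the 64x64x3 SIGNS images; 'SAME' padding keeps the spatial size under convolution):
python
# input            (m, 64, 64, 3)
# conv W1 + relu   (m, 64, 64, 8)
# maxpool 8x8 / 8  (m,  8,  8, 8)
# conv W2 + relu   (m,  8,  8, 16)
# maxpool 4x4 / 4  (m,  2,  2, 16)
# flatten          (m, 64)  ->  fully_connected  ->  (m, 6)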
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
Explanation: Expected Output:
<table>
<td>
Z3 =
</td>
<td>
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br>
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
</td>
</table>
1.3 - Compute cost
Implement the compute cost function below. You might find these two functions helpful:
tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y): computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation here.
tf.reduce_mean: computes the mean of elements across dimensions of a tensor. Use this to average the losses over all the examples to get the overall cost. You can check the full documentation here.
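For intuition, a tiny standalone usage sketch of these two calls (illustrative values only, not part of the exercise):
python
logits = tf.constant([[2.0, 1.0, 0.1]])   # unscaled scores for one example
labels = tf.constant([[1.0, 0.0, 0.0]])   # one-hot target
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))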
Exercise: Compute the cost below using the function above.
End of explanation
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the optimizer and the cost, the feed_dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost], feed_dict = {X:minibatch_X, Y:minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
Explanation: Expected Output:
<table>
<td>
cost =
</td>
<td>
2.91034
</td>
</table>
1.4 Model
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
You have implemented random_mini_batches() in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches.
Exercise: Complete the function below.
The model below should:
create placeholders
initialize parameters
forward propagate
compute the cost
create an optimizer
Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. Hint for initializing the variables
End of explanation
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
End of explanation
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
Explanation: Expected output: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
End of explanation |
3,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network from Nielsen's Chapter 1
http
Step1: Set up Network
Step2: Train Network
Step3: Exercise | Python Code:
import mnist_loader
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
Explanation: Network from Nielsen's Chapter 1
http://neuralnetworksanddeeplearning.com/chap1.html#implementing_our_network_to_classify_digits
Load MNIST Data
End of explanation
import network
# 784 (28 x 28 pixel images) input neurons; 30 hidden neurons; 10 output neurons
net = network.Network([784, 30, 10])
Explanation: Set up Network
End of explanation
# Use stochastic gradient descent over 30 epochs, with mini-batch size of 10, learning rate of 3.0
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
Explanation: Train Network
End of explanation
two_layer_net = network.Network([784, 10])
two_layer_net.SGD(training_data, 10, 10, 1.0, test_data=test_data)
two_layer_net.SGD(training_data, 10, 10, 2.0, test_data=test_data)
two_layer_net.SGD(training_data, 10, 10, 3.0, test_data=test_data)
two_layer_net.SGD(training_data, 10, 10, 4.0, test_data=test_data)
two_layer_net.SGD(training_data, 20, 10, 3.0, test_data=test_data)
Explanation: Exercise: Create network with just two layers
End of explanation |
3,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka, 2016
https
Step1: Bonus Material - Softmax Regression
Softmax Regression (synonyms
Step2: First, we want to encode the class labels into a format that we can more easily work with; we apply one-hot encoding
Step3: A sample that belongs to class 0 (the first row) has a 1 in the first cell, a sample that belongs to class 2 has a 1 in the second cell of its row, and so forth.
Next, let us define the feature matrix of our 4 training samples. Here, we assume that our dataset consists of 2 features; thus, we create a 4x2 dimensional matrix of our samples and features.
Similarly, we create a 2x3 dimensional weight matrix (one row per feature and one column for each class).
Step4: To compute the net input, we multiply the 4x2 matrix feature matrix X with the 2x3 (n_features x n_classes) weight matrix W, which yields a 4x3 output matrix (n_samples x n_classes) to which we then add the bias unit
Step5: Now, it's time to compute the softmax activation that we discussed earlier
Step6: As we can see, the values for each sample (row) nicely sum up to 1 now. E.g., we can say that the first sample
[ 0.29450637 0.34216758 0.36332605] has a 29.45% probability to belong to class 0.
Now, in order to turn these probabilities back into class labels, we could simply take the argmax-index position of each row
Step7: As we can see, our predictions are terribly wrong, since the correct class labels are [0, 1, 2, 2]. Now, in order to train our logistic model (e.g., via an optimization algorithm such as gradient descent), we need to define a cost function $J(\cdot)$ that we want to minimize
Step12: In order to learn our softmax model -- determining the weight coefficients -- via gradient descent, we then need to compute the derivative
$$\nabla \mathbf{w}_j \, J(\mathbf{W}; \mathbf{b}).$$
I don't want to walk through the tedious details here, but this cost derivative turns out to be simply
Step13: Example 1 - Gradient Descent
Step14: Predicting Class Labels
Step15: Predicting Class Probabilities
Step16: Example 2 - Stochastic Gradient Descent | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p matplotlib,numpy,scipy
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%matplotlib inline
Explanation: Sebastian Raschka, 2016
https://github.com/rasbt/python-machine-learning-book
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
import numpy as np
y = np.array([0, 1, 2, 2])
Explanation: Bonus Material - Softmax Regression
Softmax Regression (synonyms: Multinomial Logistic, Maximum Entropy Classifier, or just Multi-class Logistic Regression) is a generalization of logistic regression that we can use for multi-class classification (under the assumption that the classes are mutually exclusive). In contrast, we use the (standard) Logistic Regression model in binary classification tasks.
Below is a schematic of a Logistic Regression model that we discussed in Chapter 3.
In Softmax Regression (SMR), we replace the sigmoid logistic function by the so-called softmax function $\phi_{softmax}(\cdot)$.
$$P(y=j \mid z^{(i)}) = \phi_{softmax}(z^{(i)}) = \frac{e^{z^{(i)}}}{\sum_{j=0}^{k} e^{z_{k}^{(i)}}},$$
where we define the net input z as
$$z = w_1x_1 + ... + w_mx_m + b= \sum_{l=0}^{m} w_l x_l + b= \mathbf{w}^T\mathbf{x} + b.$$
(w is the weight vector, $\mathbf{x}$ is the feature vector of 1 training sample, and $b$ is the bias unit.)
Now, this softmax function computes the probability that this training sample $\mathbf{x}^{(i)}$ belongs to class $j$ given the weight and net input $z^{(i)}$. So, we compute the probability $p(y = j \mid \mathbf{x^{(i)}; w}_j)$ for each class label $j = 1, \ldots, k$. Note the normalization term in the denominator which causes these class probabilities to sum up to one.
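As a quick numeric sanity check with arbitrary example values (the full walkthrough follows):
python
import numpy as np
z = np.array([2.0, 1.0, 0.1])
np.exp(z) / np.sum(np.exp(z))   # -> array([0.659, 0.242, 0.099]), sums to 1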
To illustrate the concept of softmax, let us walk through a concrete example. Let's assume we have a training set consisting of 4 samples from 3 different classes (0, 1, and 2)
$x_0 \rightarrow \text{class }0$
$x_1 \rightarrow \text{class }1$
$x_2 \rightarrow \text{class }2$
$x_3 \rightarrow \text{class }2$
End of explanation
y_enc = (np.arange(np.max(y) + 1) == y[:, None]).astype(float)
print('one-hot encoding:\n', y_enc)
Explanation: First, we want to encode the class labels into a format that we can more easily work with; we apply one-hot encoding:
End of explanation
X = np.array([[0.1, 0.5],
[1.1, 2.3],
[-1.1, -2.3],
[-1.5, -2.5]])
W = np.array([[0.1, 0.2, 0.3],
[0.1, 0.2, 0.3]])
bias = np.array([0.01, 0.1, 0.1])
print('Inputs X:\n', X)
print('\nWeights W:\n', W)
print('\nbias:\n', bias)
Explanation: A sample that belongs to class 0 (the first row) has a 1 in the first cell, a sample that belongs to class 1 has a 1 in the second cell of its row, and so forth.
Next, let us define the feature matrix of our 4 training samples. Here, we assume that our dataset consists of 2 features; thus, we create a 4x2 dimensional matrix of our samples and features.
Similarly, we create a 2x3 dimensional weight matrix (one row per feature and one column for each class).
End of explanation
X = np.array([[0.1, 0.5],
[1.1, 2.3],
[-1.1, -2.3],
[-1.5, -2.5]])
W = np.array([[0.1, 0.2, 0.3],
[0.1, 0.2, 0.3]])
bias = np.array([0.01, 0.1, 0.1])
print('Inputs X:\n', X)
print('\nWeights W:\n', W)
print('\nbias:\n', bias)
def net_input(X, W, b):
return (X.dot(W) + b)
net_in = net_input(X, W, bias)
print('net input:\n', net_in)
Explanation: To compute the net input, we multiply the 4x2 matrix feature matrix X with the 2x3 (n_features x n_classes) weight matrix W, which yields a 4x3 output matrix (n_samples x n_classes) to which we then add the bias unit:
$$\mathbf{Z} = \mathbf{X}\mathbf{W} + \mathbf{b}.$$
End of explanation
def softmax(z):
return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T
smax = softmax(net_in)
print('softmax:\n', smax)
Explanation: Now, it's time to compute the softmax activation that we discussed earlier:
$$P(y=j \mid z^{(i)}) = \phi_{softmax}(z^{(i)}) = \frac{e^{z^{(i)}}}{\sum_{j=0}^{k} e^{z_{k}^{(i)}}}.$$
End of explanation
def to_classlabel(z):
return z.argmax(axis=1)
print('predicted class labels: ', to_classlabel(smax))
Explanation: As we can see, the values for each sample (row) nicely sum up to 1 now. E.g., we can say that the first sample
[ 0.29450637 0.34216758 0.36332605] has a 29.45% probability to belong to class 0.
Now, in order to turn these probabilities back into class labels, we could simply take the argmax-index position of each row:
[[ 0.29450637 0.34216758 0.36332605] -> 2
[ 0.21290077 0.32728332 0.45981591] -> 2
[ 0.42860913 0.33380113 0.23758974] -> 0
[ 0.44941979 0.32962558 0.22095463]] -> 0
End of explanation
def cross_entropy(output, y_target):
return - np.sum(np.log(output) * (y_target), axis=1)
xent = cross_entropy(smax, y_enc)
print('Cross Entropy:', xent)
def cost(output, y_target):
return np.mean(cross_entropy(output, y_target))
J_cost = cost(smax, y_enc)
print('Cost: ', J_cost)
Explanation: As we can see, our predictions are terribly wrong, since the correct class labels are [0, 1, 2, 2]. Now, in order to train our logistic model (e.g., via an optimization algorithm such as gradient descent), we need to define a cost function $J(\cdot)$ that we want to minimize:
$$J(\mathbf{W}; \mathbf{b}) = \frac{1}{n} \sum_{i=1}^{n} H(T_i, O_i),$$
which is the average of all cross-entropies over our $n$ training samples. The cross-entropy function is defined as
$$H(T_i, O_i) = -\sum_m T_i \cdot \log(O_i).$$
Here the $T$ stands for "target" (i.e., the true class labels) and the $O$ stands for output -- the computed probability via softmax; not the predicted class label.
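For example, for the first training sample (true class 0) the one-hot target zeroes out all but the first predicted probability from the softmax output above, so its cross-entropy reduces to a single term:
python
import numpy as np
-np.log(0.29450637)   # ~ 1.22, the cross-entropy of the first sample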
End of explanation
# Sebastian Raschka 2016
# Implementation of the mulitnomial logistic regression algorithm for
# classification.
# Author: Sebastian Raschka <sebastianraschka.com>
#
# License: BSD 3 clause
import numpy as np
class SoftmaxRegression(object):
Softmax regression classifier.
Parameters
------------
eta : float (default: 0.01)
Learning rate (between 0.0 and 1.0)
epochs : int (default: 50)
Passes over the training dataset.
l2_lambda : float
Regularization parameter for L2 regularization.
No regularization if l2_lambda=0.0.
minibatches : int (default: 1)
Divide the training data into *k* minibatches
for accelerated stochastic gradient descent learning.
Gradient Descent Learning if `minibatches` = 1
Stochastic Gradient Descent learning if `minibatches` = len(y)
Minibatch learning if `minibatches` > 1
random_seed : int (default: None)
Set random state for shuffling and initializing the weights.
zero_init_weight : bool (default: False)
If True, weights are initialized to zero instead of small random
numbers following a standard normal distribution with mean=0 and
stddev=1.
Attributes
-----------
w_ : 2d-array, shape=[n_features, n_classes]
Weights after fitting.
cost_ : list
List of floats, the average cross_entropy for each epoch.
def __init__(self, eta=0.01, epochs=50,
l2_lambda=0.0, minibatches=1,
random_seed=None,
zero_init_weight=False,
print_progress=0):
self.random_seed = random_seed
self.eta = eta
self.epochs = epochs
self.l2_lambda = l2_lambda
self.minibatches = minibatches
self.zero_init_weight = zero_init_weight
def _one_hot(self, y, n_labels):
mat = np.zeros((len(y), n_labels))
for i, val in enumerate(y):
mat[i, val] = 1
return mat.astype(float)
def _net_input(self, X, W, b):
return (X.dot(W) + b)
def _softmax(self, z):
return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T
def _cross_entropy(self, output, y_target):
return - np.sum(np.log(output) * (y_target), axis=1)
def _cost(self, cross_entropy):
return np.mean(cross_entropy)
def _to_classlabels(self, z):
return z.argmax(axis=1)
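    # NOTE: fit() below calls self._init_weights, which this standalone listing never
    # defines (mlxtend provides it via a base class). A minimal stand-in, assuming the
    # behaviour described in the class docstring (zeros, or draws from a standard
    # normal distribution with mean=0 and stddev=1):
    def _init_weights(self, shape, zero_init_weight=False, seed=None):
        if zero_init_weight:
            return np.zeros(shape)
        if seed is not None:
            np.random.seed(seed)
        return np.random.normal(loc=0.0, scale=1.0, size=shape)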
def fit(self, X, y, init_weights=True, n_classes=None):
Learn weight coefficients from training data.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
init_weights : bool (default: True)
(Re)initializes weights to small random floats if True.
n_classes : int (default: None)
A positive integer to declare the number of class labels
if not all class labels are present in a partial training set.
Gets the number of class labels automatically if None.
Ignored if init_weights=False.
Returns
-------
self : object
if init_weights:
if n_classes:
self._n_classes = n_classes
else:
self._n_classes = np.max(y) + 1
self._n_features = X.shape[1]
self.w_ = self._init_weights(
shape=(self._n_features, self._n_classes),
zero_init_weight=self.zero_init_weight,
seed=self.random_seed)
self.b_ = self._init_weights(
shape=self._n_classes,
zero_init_weight=self.zero_init_weight,
seed=self.random_seed)
self.cost_ = []
n_idx = list(range(y.shape[0]))
y_enc = self._one_hot(y, self._n_classes)
# random seed for shuffling
if self.random_seed:
np.random.seed(self.random_seed)
for i in range(self.epochs):
if self.minibatches > 1:
n_idx = np.random.permutation(n_idx)
minis = np.array_split(n_idx, self.minibatches)
for idx in minis:
# givens:
# w_ -> n_feat x n_classes
# b_ -> n_classes
# net_input, softmax and diff -> n_samples x n_classes:
net = self._net_input(X[idx], self.w_, self.b_)
softm = self._softmax(net)
diff = softm - y_enc[idx]
# gradient -> n_features x n_classes
grad = np.dot(X[idx].T, diff)
# update in opp. direction of the cost gradient
self.w_ -= (self.eta * grad +
self.eta * self.l2_lambda * self.w_)
self.b_ -= np.mean(diff, axis=0)
# compute cost of the whole epoch
net = self._net_input(X, self.w_, self.b_)
softm = self._softmax(net)
cross_ent = self._cross_entropy(output=softm, y_target=y_enc)
cost = self._cost(cross_ent)
self.cost_.append(cost)
return self
def predict_proba(self, X):
Predict class probabilities of X from the net input.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
Class probabilties : array-like, shape= [n_samples, n_classes]
net = self._net_input(X, self.w_, self.b_)
softm = self._softmax(net)
return softm
def predict(self, X):
Predict class labels of X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
class_labels : array-like, shape = [n_samples]
Predicted class labels.
probas = self.predict_proba(X)
return self._to_classlabels(probas)
Explanation: In order to learn our softmax model -- determining the weight coefficients -- via gradient descent, we then need to compute the derivative
$$\nabla \mathbf{w}_j \, J(\mathbf{W}; \mathbf{b}).$$
I don't want to walk through the tedious details here, but this cost derivative turns out to be simply:
$$\nabla \mathbf{w}_j \, J(\mathbf{W}; \mathbf{b}) = \frac{1}{n} \sum^{n}_{i=0} \big[\mathbf{x}^{(i)}\, \big(O_i - T_i \big) \big]$$
We can then use the cost derivative to update the weights in opposite direction of the cost gradient with learning rate $\eta$:
$$\mathbf{w}_j := \mathbf{w}_j - \eta \nabla \mathbf{w}_j \, J(\mathbf{W}; \mathbf{b})$$
for each class $$j \in {0, 1, ..., k}$$
(note that $\mathbf{w}_j$ is the weight vector for the class $y=j$), and we update the bias units
$$\mathbf{b}_j := \mathbf{b}_j - \eta \bigg[ \frac{1}{n} \sum^{n}_{i=0} \big(O_i - T_i \big) \bigg].$$
As a penalty against complexity, an approach to reduce the variance of our model and decrease the degree of overfitting by adding additional bias, we can further add a regularization term such as the L2 term with the regularization parameter $\lambda$:
L2: $\frac{\lambda}{2} ||\mathbf{w}||_{2}^{2}$,
where
$$||\mathbf{w}||_{2}^{2} = \sum^{m}_{l=0} \sum^{k}_{j=0} w_{l, j}^2$$
so that our cost function becomes
$$J(\mathbf{W}; \mathbf{b}) = \frac{1}{n} \sum_{i=1}^{n} H(T_i, O_i) + \frac{\lambda}{2} ||\mathbf{w}||_{2}^{2}$$
and we define the "regularized" weight update as
$$\mathbf{w}_j := \mathbf{w}_j - \eta \big[\nabla \mathbf{w}_j \, J(\mathbf{W}) + \lambda \mathbf{w}_j \big].$$
(Please note that we don't regularize the bias term.)
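Tying the update rule back to the toy example, a single unregularized gradient-descent step could be sketched as follows (reusing X, y_enc, W, bias, net_input and softmax from above; the learning rate is an arbitrary choice):
python
eta = 0.1
output = softmax(net_input(X, W, bias))   # n_samples x n_classes
diff = output - y_enc
grad = X.T.dot(diff) / X.shape[0]         # n_features x n_classes
W = W - eta * grad
bias = bias - eta * diff.mean(axis=0)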
SoftmaxRegression Code
Bringing the concepts together, we could come up with an implementation as follows:
End of explanation
from mlxtend.data import iris_data
from mlxtend.evaluate import plot_decision_regions
from mlxtend.classifier import SoftmaxRegression
import matplotlib.pyplot as plt
# Loading Data
X, y = iris_data()
X = X[:, [0, 3]] # sepal length and petal width
# standardize
X[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
lr = SoftmaxRegression(eta=0.005, epochs=200, minibatches=1, random_seed=1)
lr.fit(X, y)
plot_decision_regions(X, y, clf=lr)
plt.title('Softmax Regression - Gradient Descent')
plt.show()
plt.plot(range(len(lr.cost_)), lr.cost_)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.show()
Explanation: Example 1 - Gradient Descent
End of explanation
y_pred = lr.predict(X)
print('Last 3 Class Labels: %s' % y_pred[-3:])
Explanation: Predicting Class Labels
End of explanation
y_pred = lr.predict_proba(X)
print('Last 3 Class Labels:\n %s' % y_pred[-3:])
Explanation: Predicting Class Probabilities
End of explanation
from mlxtend.data import iris_data
from mlxtend.evaluate import plot_decision_regions
from mlxtend.classifier import SoftmaxRegression
import matplotlib.pyplot as plt
# Loading Data
X, y = iris_data()
X = X[:, [0, 3]] # sepal length and petal width
# standardize
X[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
lr = SoftmaxRegression(eta=0.005, epochs=200, minibatches=len(y), random_seed=1)
lr.fit(X, y)
plot_decision_regions(X, y, clf=lr)
plt.title('Softmax Regression - Stochastic Gradient Descent')
plt.show()
plt.plot(range(len(lr.cost_)), lr.cost_)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.show()
Explanation: Example 2 - Stochastic Gradient Descent
End of explanation |
3,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Saving and Loading Trained Models
Refer back to this notebook as a refresher on saving and loading models.
Saving a trained model
Save a trained model to a file in case you want to come back later and feed new data through it.
To save a trained model called "model" to a file called "MyModel.pt"
Step1: To ensure the model has been trained before saving (assumes the variables "losses" and "epochs" have been defined)
Step2: Loading a saved model (starting from scratch)
We can load the trained weights and biases from a saved model. If we've just opened the notebook, we'll have to run standard imports and function definitions.
1. Perform standard imports
These will depend on the scope of the model, chosen displays, metrics, etc.
Step3: 2. Run the model definition
We'll introduce the model shown below in the next section.
Step4: 3. Instantiate the model, load parameters
First we instantiate the model, then we load the pre-trained weights & biases, and finally we set the model to "eval" mode to prevent any further backprops. | Python Code:
torch.save(model.state_dict(), 'MyModel.pt')
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Saving and Loading Trained Models
Refer back to this notebook as a refresher on saving and loading models.
Saving a trained model
Save a trained model to a file in case you want to come back later and feed new data through it.
To save a trained model called "model" to a file called "MyModel.pt":
End of explanation
if len(losses) == epochs:
torch.save(model.state_dict(), 'MyModel.pt')
else:
print('Model has not been trained. Consider loading a trained model instead.')
Explanation: To ensure the model has been trained before saving (assumes the variables "losses" and "epochs" have been defined):
End of explanation
# Perform standard imports
import torch
import torch.nn as nn
import torch.nn.functional as F    # needed by the model's forward() below (F.relu, F.log_softmax)
import numpy as np
import pandas as pd
Explanation: Loading a saved model (starting from scratch)
We can load the trained weights and biases from a saved model. If we've just opened the notebook, we'll have to run standard imports and function definitions.
1. Perform standard imports
These will depend on the scope of the model, chosen displays, metrics, etc.
End of explanation
class MultilayerPerceptron(nn.Module):
def __init__(self, in_sz=784, out_sz=10, layers=[120,84]):
super().__init__()
self.fc1 = nn.Linear(in_sz,layers[0])
self.fc2 = nn.Linear(layers[0],layers[1])
self.fc3 = nn.Linear(layers[1],out_sz)
def forward(self,X):
X = F.relu(self.fc1(X))
X = F.relu(self.fc2(X))
X = self.fc3(X)
return F.log_softmax(X, dim=1)
Explanation: 2. Run the model definition
We'll introduce the model shown below in the next section.
End of explanation
model2 = MultilayerPerceptron()
model2.load_state_dict(torch.load('MyModel.pt'));
model2.eval() # be sure to run this step!
Explanation: 3. Instantiate the model, load parameters
First we instantiate the model, then we load the pre-trained weights & biases, and finally we set the model to "eval" mode to prevent any further backprops.
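As a quick illustration of feeding new data through the restored model (hypothetical input; this MLP expects flattened 784-feature vectors such as 28x28 MNIST images):
python
with torch.no_grad():
    x_new = torch.rand(1, 784)                # stand-in for one flattened image
    prediction = model2(x_new).argmax(dim=1)  # index of the highest log-probability
print(prediction)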
End of explanation |
3,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ATTO550, ATTO647N specs
ATTO550
Step1: To obtain the absorption cross-section we scale the normalized absorption spectra by the extinction coefficient
Step2: Absorption cross-section @ 532 nm | Python Code:
atto550_ext_coeff = 1.2*1e5 # 1 / ( mol cm )
atto647N_ext_coeff = 1.5*1e5 # 1 / ( mol cm )
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
a550 = pd.read_excel('ATTO550.xlsx', 'Tabelle1', index_col=None, na_values=['NA'])
a647N = pd.read_excel('ATTO647N.xlsx', 'Tabelle1', index_col=None, na_values=['NA'])
a550[:4]
a647N[:4]
a550.columns = ['wl_absorption', 'absorption', 'none', 'wl_emission', 'emission']
ab_550 = a550[['wl_absorption', 'absorption']][2:].set_index('wl_absorption').dropna()
em_550 = a550[['wl_emission', 'emission']][2:].set_index('wl_emission').dropna()
atto550 = pd.concat([ab_550, em_550], axis=1)
a647N.columns = ['wl_absorption', 'absorption', 'none', 'wl_emission', 'emission']
ab_647N = a647N[['wl_absorption', 'absorption']][2:].set_index('wl_absorption').dropna()
em_647N = a647N[['wl_emission', 'emission']][2:].set_index('wl_emission').dropna()
atto647N = pd.concat([ab_647N, em_647N], axis=1)
atto550.absorption.plot(label='ATTO550 abs')
atto550.emission.dropna().plot(style='--', ax=plt.gca(), label='ATTO550 em')
atto647N.absorption.plot(ax=plt.gca(), label='ATTO647N abs')
atto647N.emission.dropna().plot(style='--', ax=plt.gca(), label='ATTO647N em')
plt.legend()
plt.title('Absorption and emission spectra (normalized)');
Explanation: ATTO550, ATTO647N specs
ATTO550: home page, spectra in xls
ATTO647N: home page, spectra in xls
End of explanation
atto550.absorption *= atto550_ext_coeff
atto647N.absorption *= atto647N_ext_coeff
atto550.absorption.plot(label='ATTO550 abs', style='b')
atto647N.absorption.plot(ax=plt.gca(), label='ATTO647N abs', style='r')
plt.legend(loc='best')
plt.title('Absorption cross-section');
Explanation: To obtain the absorption cross-section we scale the normalized absorption spectra by the extinction coefficient:
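As an aside (not part of the original analysis), the molar extinction coefficient can also be converted into an absolute absorption cross-section per molecule via sigma = ln(10) * 1000 * epsilon / N_A:
python
import numpy as np
N_A = 6.022e23   # Avogadro's number, 1/mol
sigma_atto550 = np.log(10) * 1e3 * atto550_ext_coeff / N_A   # ~4.6e-16 cm**2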
End of explanation
atto550_abs532 = atto550.absorption.loc[532]
atto647N_abs532 = atto647N.absorption.loc[532]
atto550_abs532, atto647N_abs532
ratio = atto647N_abs532 / atto550_abs532
ratio
with open('../results/Dyes - ATT0647N-ATTO550 abs X-section ratio at 532nm.csv', 'w') as f:
f.write(str(ratio))
Explanation: Absorption cross-section @ 532 nm
End of explanation |
3,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD
Step1: Load sample subject data
Step2: Plot the raw data and CSD-transformed raw data
Step3: Also look at the power spectral densities
Step4: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
Step5: First let's look at how CSD affects scalp topography
Step6: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution | Python Code:
# Authors: Alex Rockhill <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD (Perrin et al. 1987; Perrin et al. 1989; Cohen 2014; Kayser & Tenke 2015).
CSD takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Computing these
spatial derivatives reduces point spread. CSD transformed data have a sharper
or more distinct topography, reducing the negative impact of volume conduction.
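The transformation itself is a single call, applied to real data in the cells that follow:
python
raw_csd = mne.preprocessing.compute_current_source_density(raw)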
End of explanation
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True,
exclude=raw.info['bads']).load_data()
events = mne.find_events(raw)
raw.set_eeg_reference(projection=True).apply_proj()
Explanation: Load sample subject data
End of explanation
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
Explanation: Plot the raw data and CSD-transformed raw data:
End of explanation
raw.plot_psd()
raw_csd.plot_psd()
Explanation: Also look at the power spectral densities:
End of explanation
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evoked = epochs['auditory'].average()
Explanation: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
End of explanation
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
Explanation: First let's look at how CSD affects scalp topography:
End of explanation
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
for j, m in enumerate([5, 4, 3, 2]):
this_evoked_csd = mne.preprocessing.compute_current_source_density(
evoked, stiffness=m, lambda2=lambda2)
this_evoked_csd.plot_topomap(
0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('stiffness=%i\nλ²=%s' % (m, lambda2))
Explanation: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution:
End of explanation |
3,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CAMS
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields AOD plus CCN
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
3,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vectors
A vector is just a point in some finite-dimensional space.
In Python, we can represent a vector as a list of numbers
Step5: Now we will build functions to perform vector arithmetic. Vectors add componentwise meaning they must line up side-by-side and elements are added to adjacent elements.
Step8: Dot Product
Step10: We can now use the above to compute the distance between two vectors
Step13: Matrices
A matrix is a two-dimensional list of numbers. | Python Code:
height_weight_age = [70, # inches
170, # pounds
40] # years
grades = [95, # exam1
80, # exam2
75, # exam3
62] # exam4
Explanation: Vectors
A vector is just a point in some finite-dimensional space.
In Python, we can represent a vector as a list of numbers:
End of explanation
def vector_add(v, w):
    """adds two vectors together: v + w"""
return [v_i + w_i for v_i, w_i in zip(v, w)]
vector_add(grades, grades)
def vector_subtract(v, w):
    """subtracts one vector from another: v - w"""
return [v_i - w_i for v_i, w_i in zip(v, w)]
vector_subtract(grades, grades)
def vector_sum(vectors):
    """sums a list of vectors"""
vectors = list(vectors)
result = vectors[0]
for vector in vectors[1:]:
result = vector_add(result, vector)
return result
vector_sum([grades, grades, grades])
def scalar_multiply(c, v):
    """multiply elements in vector v by scalar c"""
return [c * v_i for v_i in v]
scalar_multiply(1.5, grades)
Explanation: Now we will build functions to perform vector arithmetic. Vectors add componentwise meaning they must line up side-by-side and elements are added to adjacent elements.
End of explanation
def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
return sum(v_i * w_i for v_i, w_i in zip(v, w))
dot(grades, grades)
def sum_of_squares(v):
    """v_1 * v_1 + ... + v_n * v_n"""
return dot(v, v)
sum_of_squares(grades)
import math
def magnitude(v):
return math.sqrt(sum_of_squares(v))
magnitude(grades)
Explanation: Dot Product: the sum of two vectors' componentwise products; measures how far vector v extends in the w direction.
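In symbols, the dot function above computes
$$\mathbf{v} \cdot \mathbf{w} = \sum_{i=1}^{n} v_i w_i,$$
so sum_of_squares(v) is just the dot product of a vector with itself, and magnitude(v) is its square root.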
End of explanation
def squared_distance(v, w):
    """(v_1 - w_1) ** 2 + ... + (v_n - w_n) ** 2"""
return sum_of_squares(vector_subtract(v, w))
def distance(v, w):
return math.sqrt(squared_distance(v, w))
distance(grades, [90, 75, 70, 60])
Explanation: We can now use the above to compute the distance between two vectors:
$$\sqrt{(v_1 - w_1)^2 + ... + (v_n - w_n)^2}$$
End of explanation
# 2 x 3 matrix
A = [[1, 2, 3],
[4, 5, 6]]
# 3 x 2 matrix
B = [[1, 2],
[3, 4],
[5, 6]]
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
shape(A), shape(B)
def get_row(A, i):
return A[i]
def get_column(A, j):
return [A_i[j] for A_i in A]
get_row(A, 1), get_column(A, 1)
def make_matrix(num_rows, num_cols, entry_fn):
    """returns a num_rows x num_cols matrix whose (i, j)-th entry is entry_fn(i, j)"""
return [[entry_fn(i, j) for j in range(num_cols)] for i in range(num_rows)]
make_matrix(2, 2, lambda i, j: i + j)
def diagonal(i, j):
    """given row i and col j, returns 1 for the diagonal, 0 otherwise"""
return 1 if i == j else 0
make_matrix(5, 5, diagonal)
Explanation: Matrices
A matrix is a two-dimensional list of numbers.
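As a quick illustration of how these helpers compose (this function is not part of the original code, just a sketch built from shape, get_row and dot):
def matrix_vector_product(A, v):
    """multiply matrix A by vector v, returning the list of row-by-vector dot products"""
    num_rows, _ = shape(A)
    return [dot(get_row(A, i), v) for i in range(num_rows)]
matrix_vector_product(A, [1, 1, 1])   # with the 2 x 3 matrix A above this returns [6, 15]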
End of explanation |
3,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
Magma is a hardware construction language written in Python 3. The central abstraction in Magma is a Circuit, which is analogous to a verilog module. A circuit is a set of functional units that are wired together.
Magma is designed to work with Mantle, a library of hardware building blocks including logic and arithmetic units, registers, memories, etc.
The Loam system builds upon the Magma Circuit abstraction to represent parts and boards. A board consists of a set of parts that are wired together. Loam makes it easy to set up a board such as the Lattice IceStick.
Lattice IceStick
In this tutorial, we will be using the Lattice IceStick.
This breakout board contains an ICE40HX FPGA with 1K 4-input LUTs.
The board has several useful peripherals including an FTDI USB interface
with an integrated JTAG interface which is used to program the FPGA
and a USART which is used to communicate with the host.
The board also contains 5 LEDs,
a PMOD interface,
and 2 10-pin headers (J1 and J3).
The 10-pin headers bring out 8 GPIO pins,
as well as power and ground.
This board is inexpensive ($25), can be plugged into the USB port on your laptop,
and, best of all, can be
programmed using an open source software toolchain.
Additional information about the IceStick Board can be found in the
IceStick Programmers Guide
Blink
As a first example,
let's write a Magma program that blinks an LED on the Icestick Board.
First, we import Magma as the module m.
Next, we import Counter from Mantle.
Before doing the import we configure mantle to use the ICE40 as the target device.
Step1: The next step is to setup the IceStick board. We import the class IceStick from Loam.
We then create an instance of an IceStick.
This board instance has member variables
that store the configuration of all the parts on the board.
The blink program will use the Clock and the LED D5.
Turning on the Clock and the LED D5 sets up the build environment
to use the associated ICE40 GPIO pins.
Step2: Now that the IceStick setup is done,
we create a main program that runs on the Lattice ICE40 FPGA.
This main program becomes the top level module.
We create a simple circuit inside main.
The circuit has a 22-bit counter wired to D5.
The crystal connected to the ICE40 has a frequency of 12 MHz,
so the counter will increment at that rate.
Wiring the most-significant bit of the counter to D5
will cause the LED to blink roughly 3 times per second.
D5 is accessible via main.
In a similar way, the output of the counter is accessible via counter.O,
and since this is an array of bits we can access the MSB using Python's standard list indexing syntax.
Step3: We then compile the program to verilog. This step also creates a PCF (physical constraints file).
Step4: Now we run the open source tools for the Lattice ICE40.
yosys synthesizes the input verilog file (blink.v)
to produce an output netlist (blink.blif).
arachne-pnr runs the place and router and generates the bitstream as a text file.
icepack creates a binary bitstream file that can be downloaded to the FPGA. iceprog uploads the bitstream to the device. Once the device has been programmed, you should see the center, green LED blinking.
Step5: You can view the verilog file generated by Magma.
Step6: Notice that the top-level module contains two arguments (ports),
D5 and CLKIN.
D5 has been configured as an output,
and CLKIN as an input.
The mapping from these named arguments to pins is contained in the
PCF (physical constraint file). | Python Code:
import magma as m
m.set_mantle_target("ice40")
Explanation: Getting Started
Magma is a hardware construction language written in Python 3. The central abstraction in Magma is a Circuit, which is analogous to a verilog module. A circuit is a set of functional units that are wired together.
Magma is designed to work with Mantle, a library of hardware building blocks including logic and arithmetic units, registers, memories, etc.
The Loam system builds upon the Magma Circuit abstraction to represent parts and boards. A board consists of a set of parts that are wired together. Loam makes it easy to set up a board such as the Lattice IceStick.
Lattice IceStick
In this tutorial, we will be using the Lattice IceStick.
This breakout board contains an ICE40HX FPGA with 1K 4-input LUTs.
The board has several useful peripherals including an FTDI USB interface
with an integrated JTAG interface which is used to program the FPGA
and a USART which is used to communicate with the host.
The board also contains 5 LEDs,
a PMOD interface,
and 2 10-pin headers (J1 and J3).
The 10-pin headers bring out 8 GPIO pins,
as well as power and ground.
This board is inexpensive ($25), can be plugged into the USB port on your laptop,
and, best of all, can be
programmed using an open source software toolchain.
Additional information about the IceStick Board can be found in the
IceStick Programmers Guide
Blink
As a first example,
let's write a Magma program that blinks an LED on the Icestick Board.
First, we import Magma as the module m.
Next, we import Counter from Mantle.
Before doing the import we configure mantle to use the ICE40 as the target device.
End of explanation
from loam.boards.icestick import IceStick
# Create an instance of an IceStick board
icestick = IceStick()
# Turn on the Clock
# The clock must be turned on because we are using a synchronous counter
icestick.Clock.on()
# Turn on the LED D5
icestick.D5.on();
Explanation: The next step is to setup the IceStick board. We import the class IceStick from Loam.
We then create an instance of an IceStick.
This board instance has member variables
that store the configuration of all the parts on the board.
The blink program will use the Clock and the LED D5.
Turning on the Clock and the LED D5 sets up the build environment
to use the associated ICE40 GPIO pins.
End of explanation
from mantle import Counter
N = 22
# Define the main Magma Circuit on the FPGA on the IceStick
main = icestick.DefineMain()
# Instance a 22-bit counter
counter = Counter(N)
# Wire bit 21 of the counter's output to D5.
main.D5 <= counter.O[N-1]
# End main
m.EndDefine()
Explanation: Now that the IceStick setup is done,
we create a main program that runs on the Lattice ICE40 FPGA.
This main program becomes the top level module.
We create a simple circuit inside main.
The circuit has a 22-bit counter wired to D5.
The crystal connected to the ICE40 has a frequency of 12 MHz,
so the counter will increment at that rate.
Wiring the most-significant bit of the counter to D5
will cause the LED to blink roughly 3 times per second.
D5 is accessible via main.
In a similar way, the output of the counter is accessible via counter.O,
and since this is an array of bits we can access the MSB using Python's standard list indexing syntax.
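As a quick sanity check on the blink rate: the MSB of an N-bit counter completes one on/off cycle every 2^N input clock ticks, so the LED toggles at 12 MHz / 2^22, which is roughly 2.9 Hz, i.e. about 3 blinks per second as claimed above.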
End of explanation
m.compile('build/blink', main)
Explanation: We then compile the program to verilog. This step also creates a PCF (physical constraints file).
End of explanation
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif blink.blif' blink.v
arachne-pnr -q -d 1k -o blink.txt -p blink.pcf blink.blif
icepack blink.txt blink.bin
#iceprog blink.bin
Explanation: Now we run the open source tools for the Lattice ICE40.
yosys synthesizes the input verilog file (blink.v)
to produce an output netlist (blink.blif).
arachne-pnr runs the place and router and generates the bitstream as a text file.
icepack creates a binary bitstream file that can be downloaded to the FPGA. iceprog uploads the bitstream to the device. Once the device has been programmed, you should see the center, green LED blinking.
End of explanation
%cat build/blink.v
Explanation: You can view the verilog file generated by Magma.
End of explanation
%cat build/blink.pcf
Explanation: Notice that the top-level module contains two arguments (ports),
D5 and CLKIN.
D5 has been configured as an output,
and CLKIN as an input.
The mapping from these named arguments to pins is contained in the
PCF (physical constraint file).
End of explanation |
3,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
APS 5 - Questions with the help of Pandas
Name
Step1: List the first rows of the DataFrame
Step2: Q1 - Manipulating the DataFrame
Create a column called Hemisfério based on the Latitude
The formation rule is the following
Step3: Q2 - Fit and Histogram
Plot the histogram of the Magnitude. Interpret it.
Step4: Fit an exponential distribution to the Magnitude data, finding the values of loc and scale. Interpret loc and scale in the exponential case.
Documentation
Step5: Q3 - Cross tabulation
Build a cross tabulation of the variables Hemisfério and Type
Your table must be <font color=red> normalized</font>
Step6: Q3.1 - What is the probability of an earthquake occurring in the northern hemisphere?
Add the calculation in the cell below
Step7: Explain your reasoning
The probability calculation in this case is based on comparing the cases that occur in the Norte hemisphere with the total number of earthquake cases. Therefore, to find the probability of an earthquake occurring in the northern hemisphere, it is enough to divide that value, shown in the crosstab, by the total probability.
Q3.2 - Given that it happened in the North, what is the probability that it was a Nuclear Explosion?
Compute the answer below, or explain how you found it
If it is a calculation, fill in the following cell
Step8: If you can obtain the answer without calculating, enter the answer below
Step9: Compute the correlation between the variables Magnitude Error and Depth
Step10: Explain what the correlation value computed above means
The correlation shown above indicates a kind of dependence between the two variables, in this case Magnitude Error and Depth. Looking at the plot above, the values are quite scattered, but it is precisely this, together with the correlation value shown, which is low, that shows a high dependence between the two variables; there is no large discrepancy between the values. The fact that it is negative would justify a decreasing line.
Q5 - Describe and boxplot
Run describe and draw the boxplot of Latitude and Longitude. Explain the values
Step11: Q6 - Drawing conclusions from the data
In a certain place, tremors with Magnitude Type MB and Type Nuclear Explosion have already occurred.
Answer | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import expon
from numpy import arange
import scipy.stats as stats
# Open the file
df = pd.read_csv('earthquake.csv')
# List the columns
print(list(df))
Explanation: APS 5 - Questions with the help of Pandas
Name: <font color=blue> Gabriel Heusi Pereira Bueno de Camargo </font>
INDIVIDUAL APS
Due date: 26/Sep, by 23:59, via GitHub.
We will work with data from the USGS (United States Geological Survey) to try to determine whether the tremors detected in the northern hemisphere have a high probability of being nuclear tests.
End of explanation
df.head()
Explanation: List the first rows of the DataFrame
End of explanation
df.loc[(df.Latitude >=0), "Hemisfério"] = "Norte"
df.loc[(df.Latitude <0), "Hemisfério"] = "Sul"
df.head()
df.Magnitude.describe()
Explanation: Q1 - Manipulating the DataFrame
Create a column called Hemisfério based on the Latitude
The formation rule is the following:
Value | Criterion
---|---
Norte | Positive latitude
Sul | Negative latitude
End of explanation
f = plt.figure(figsize=(11,5))
faixas = arange(5,9,0.65)
plot = df.Magnitude.plot.hist(bins=faixas , title="Histograma de Magnitude",normed=1,alpha = 0.9,color="g")
plt.xlabel("Magnitude")
plt.ylabel("Densidade")
plt.show()
Explanation: Q2 - Fit and Histogram
Plot the histogram of the Magnitude. Interpret it.
End of explanation
mu = df.Magnitude.mean()
dp = df.Magnitude.std()
fig = plt.figure(figsize=(11, 5))
plot= df.Magnitude.plot.hist(bins = faixas, title='HISTOGRAMA Magnitude ', normed=1, alpha=0.9,color = 'r')
a = sorted(df.Magnitude)
plt.plot(a, stats.norm.pdf(a, loc = mu, scale = dp))
plt.title('Histograma X Pdf')
Explanation: Fit an exponential distribution to the Magnitude data, finding the values of loc and scale. Interpret loc and scale in the exponential case.
Documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html
Redo the histogram, plotting the pdf (probability density function) of the exponential with the parameters found in the fit on top of it. Be careful with the domain used. Interpret the result.
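For reference, a minimal sketch of the exponential fit the statement asks for (note that the cell above actually overlays a normal pdf instead). scipy.stats.expon.fit returns the pair (loc, scale), where loc shifts the origin of the distribution and scale (= 1/lambda) is its mean above loc; the variable names below are ours:
loc_exp, scale_exp = expon.fit(df.Magnitude)   # expon was imported above
print(loc_exp, scale_exp)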
End of explanation
ct = pd.crosstab(df.Hemisfério,df.Type,margins=True,normalize = True)
ct
Explanation: Q3 - Cross tabulation
Build a cross tabulation of the variables Hemisfério and Type
Your table must be <font color=red> normalized</font>
End of explanation
probNorte = ct.Earthquake.Norte/ct.Earthquake.All
print(probNorte)
Explanation: Q3.1 - What is the probability of an earthquake occurring in the northern hemisphere?
Add the calculation in the cell below:
End of explanation
probNuclear = ct["Nuclear Explosion"]["Norte"]/ct.All.Norte
print(probNuclear)
Explanation: Explain your reasoning
The probability calculation in this case is based on comparing the cases that occur in the Norte hemisphere with the total number of earthquake cases. Therefore, to find the probability of an earthquake occurring in the northern hemisphere, it is enough to divide that value, shown in the crosstab, by the total probability.
Q3.2 - Given that it happened in the North, what is the probability that it was a Nuclear Explosion?
Compute the answer below, or explain how you found it
If it is a calculation, fill in the following cell:
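In symbols, the cell above computes the conditional probability
$$P(\text{Nuclear Explosion} \mid \text{Norte}) = \frac{P(\text{Nuclear Explosion} \cap \text{Norte})}{P(\text{Norte})},$$
which is why the normalized crosstab entry for (Norte, Nuclear Explosion) is divided by the Norte row total.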
End of explanation
plt.scatter(x = df['Magnitude Error'],
y = df['Depth'])
plt.show()
Explanation: If you can obtain the answer without calculating, enter the answer below:
The probability of it having been a Nuclear Explosion is ...
Q4 - Bivariate analysis
Draw the scatter plot between the variables Magnitude Error and Depth
End of explanation
df["Depth"].corr(df["Magnitude Error"])
Explanation: Compute the correlation between the variables Magnitude Error and Depth
End of explanation
Lat = df["Latitude"].describe()
Long = df["Longitude"].describe()
print(Lat,Long)
df.boxplot(column = ["Latitude","Longitude"])
plt.show()
Explanation: Explain what the correlation value computed above means
The correlation shown above indicates a kind of dependence between the two variables, in this case Magnitude Error and Depth. Looking at the plot above, the values are quite scattered, but it is precisely this, together with the correlation value shown, which is low, that shows a high dependence between the two variables; there is no large discrepancy between the values. The fact that it is negative would justify a decreasing line.
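For reference, DataFrame.corr uses the Pearson coefficient by default,
$$r = \frac{\mathrm{cov}(X, Y)}{\sigma_X \, \sigma_Y} \in [-1, 1],$$
so a value close to 0 means a weak linear relationship and a negative sign means the fitted line would slope downwards.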
Q5 - Describe and boxplot
Run describe and draw the boxplot of Latitude and Longitude. Explain the values
End of explanation
df.loc[(df.Type=="Nuclear Explosion")&(df["Magnitude Type"]=="MB")&(df["Hemisfério"]=="Sul"),"Hemis"]="Sul"
df.loc[(df.Type=="Nuclear Explosion")&(df["Magnitude Type"]=="MB")&(df["Hemisfério"]=="Norte"),"Hemis"]="Norte"
sul=df["Hemis"].value_counts("Sul")
sul
Explanation: Q6 - Drawing conclusions from the data
In a certain place, tremors with Magnitude Type MB and Type Nuclear Explosion have already occurred.
Answer:
* Is it more likely that it was in the north or in the south?
Assume that Magnitude Type and Type are independent
End of explanation |
3,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vanilla RNN on the 8*8 MNIST dataset to predict ten classes
Step7: Vanilla RNN class and functions
Step8: Placeholder and initializers
Step9: Models
Step10: Dataset Preparation | Python Code:
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split  # note: in newer scikit-learn this lives in sklearn.model_selection
import pylab as pl
from IPython import display
import sys
%matplotlib inline
Explanation: <span style="color:green"> VANILLA RNN ON 8*8 MNIST DATASET TO PREDICT TEN CLASSES
<span style="color:blue">It's a dynamic-sequence, dynamic-batch vanilla RNN, built with TensorFlow's scan and map higher-order ops.
<span style="color:blue">This is a base RNN which can be used to create a GRU, LSTM, Neural Stack Machine, Neural Turing Machine, RNN-EM and so on.
Importing Libraries
End of explanation
class RNN_cell(object):
    """
    RNN cell object which takes 3 arguments for initialization.
    input_size = Input Vector size
    hidden_layer_size = Hidden layer size
    target_size = Output vector size
    """
def __init__(self, input_size, hidden_layer_size, target_size):
# Initialization of given values
self.input_size = input_size
self.hidden_layer_size = hidden_layer_size
self.target_size = target_size
# Weights and Bias for input and hidden tensor
self.Wx = tf.Variable(tf.zeros(
[self.input_size, self.hidden_layer_size]))
self.Wh = tf.Variable(tf.zeros(
[self.hidden_layer_size, self.hidden_layer_size]))
self.bi = tf.Variable(tf.zeros([self.hidden_layer_size]))
# Weights for output layers
self.Wo = tf.Variable(tf.truncated_normal(
[self.hidden_layer_size, self.target_size],mean=0,stddev=.01))
self.bo = tf.Variable(tf.truncated_normal([self.target_size],mean=0,stddev=.01))
# Placeholder for input vector with shape[batch, seq, embeddings]
self._inputs = tf.placeholder(tf.float32,
shape=[None, None, self.input_size],
name='inputs')
# Processing inputs to work with scan function
self.processed_input = process_batch_input_for_RNN(self._inputs)
'''
Initial hidden state's shape is [1,self.hidden_layer_size]
In First time stamp, we are doing dot product with weights to
get the shape of [batch_size, self.hidden_layer_size].
For this dot product tensorflow use broadcasting. But during
Back propagation a low level error occurs.
So to solve the problem it was needed to initialize initial
hiddden state of size [batch_size, self.hidden_layer_size].
So here is a little hack !!!! Getting the same shaped
initial hidden state of zeros.
'''
self.initial_hidden = self._inputs[:, 0, :]
self.initial_hidden = tf.matmul(
self.initial_hidden, tf.zeros([input_size, hidden_layer_size]))
# Function for vhanilla RNN.
def vanilla_rnn(self, previous_hidden_state, x):
This function takes previous hidden state and input and
outputs current hidden state.
current_hidden_state = tf.tanh(
tf.matmul(previous_hidden_state, self.Wh) +
tf.matmul(x, self.Wx) + self.bi)
return current_hidden_state
# Function for getting all hidden state.
def get_states(self):
        """Iterates through time / sequence to get all hidden states"""
        # Getting all hidden states through time
all_hidden_states = tf.scan(self.vanilla_rnn,
self.processed_input,
initializer=self.initial_hidden,
name='states')
return all_hidden_states
# Function to get output from a hidden layer
def get_output(self, hidden_state):
        """This function takes a hidden state and returns the output"""
output = tf.nn.relu(tf.matmul(hidden_state, self.Wo) + self.bo)
return output
# Function for getting all output layers
def get_outputs(self):
        """Iterates through hidden states to get outputs for all timestamps"""
all_hidden_states = self.get_states()
all_outputs = tf.map_fn(self.get_output, all_hidden_states)
return all_outputs
# Function to convert batch input data to use scan ops of tensorflow.
def process_batch_input_for_RNN(batch_input):
    """Process tensor of size [5,3,2] to [3,5,2]"""
batch_input_ = tf.transpose(batch_input, perm=[2, 0, 1])
X = tf.transpose(batch_input_)
return X
Explanation: Vanilla RNN class and functions
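For reference, the recurrence implemented by vanilla_rnn above is
$$h_t = \tanh(h_{t-1} W_h + x_t W_x + b_i),$$
applied over the time axis with tf.scan; get_output then maps each hidden state through the (Wo, bo) layer with a ReLU.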
End of explanation
hidden_layer_size = 110
input_size = 8
target_size = 10
y = tf.placeholder(tf.float32, shape=[None, target_size],name='inputs')
Explanation: Placeholder and initializers
End of explanation
#Initializing rnn object
rnn=RNN_cell( input_size, hidden_layer_size, target_size)
#Getting all outputs from rnn
outputs = rnn.get_outputs()
#Getting the final output by indexing the last time step
last_output = outputs[-1]
#As the RNN model outputs the final layer through a ReLU activation, softmax is used for the final output.
output=tf.nn.softmax(last_output)
#Computing the Cross Entropy loss
cross_entropy = -tf.reduce_sum(y * tf.log(output))
# Training with the Adam optimizer
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)
#Calculation of correct prediction and accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(output,1))
accuracy = (tf.reduce_mean(tf.cast(correct_prediction, tf.float32)))*100
Explanation: Models
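For reference, the loss defined above is the cross entropy
$$L = -\sum_{i} y_i \log(\hat{y}_i),$$
computed here by applying softmax explicitly and then taking the log. A numerically safer alternative (a design note, not what the cell above does) would be tf.nn.softmax_cross_entropy_with_logits applied to last_output directly.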
End of explanation
sess=tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
#Using Sklearn MNIST dataset.
digits = load_digits()
X=digits.images
Y_=digits.target
# One hot encoding
Y = sess.run(tf.one_hot(indices=Y_, depth=target_size))
#Getting Train and test Dataset
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.22, random_state=42)
#Cutting the training set for simple iteration
X_train=X_train[:1400]
y_train=y_train[:1400]
#Iterations to do the training
for epoch in range(120):
start=0
end=100
for i in range(14):
X=X_train[start:end]
Y=y_train[start:end]
start=end
end=start+100
sess.run(train_step,feed_dict={rnn._inputs:X, y:Y})
Loss=str(sess.run(cross_entropy,feed_dict={rnn._inputs:X, y:Y}))
Train_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_train, y:y_train}))
Test_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_test, y:y_test}))
pl.plot([epoch],Loss,'b.',)
pl.plot([epoch],Train_accuracy,'r*',)
pl.plot([epoch],Test_accuracy,'g+')
display.clear_output(wait=True)
display.display(pl.gcf())
sys.stdout.flush()
print("\rIteration: %s Loss: %s Train Accuracy: %s Test Accuracy: %s"%(epoch,Loss,Train_accuracy,Test_accuracy)),
sys.stdout.flush()
Explanation: Dataset Preparation
End of explanation |
3,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='beginning'></a> <!--\label{beginning}-->
* Outline
* Glossary
* 4. The Visibility Space
* Previous
Step1: Import section specific modules
Step2: 4.5.1 UV coverage
Step3: Let's express the corresponding physical baseline in ENU coordinates.
Step4: Let's place the interferometer at a latitude $L_a=+45^\circ00'00''$.
Step5: Figure 4.5.1
Step6: 4.5.1.1.3 Computing of the projected baselines in ($u$,$v$,$w$) coordinates as a function of time
As seen previously, we convert the baseline coordinates using the previous matrix transformation.
Step7: As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\S$ 4.2.2 ➞
Step8: We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. Let's plot it in $uvw$ space and its projection in $uv$ space.
Step9: Figure 4.5.2
Step10: Figure 4.5.3
Step11: Let's compute the $uv$ tracks of an observation of the NCP ($\delta=90^\circ$)
Step12: Let's compute the uv tracks when observing a source at $\delta=30^\circ$
Step13: Figure 4.5.4
Step14: Figure 4.5.5
Step15: <span style="background-color
Step16: We then convert the ($\alpha$,$\delta$) to $l,m$
Step17: The source and phase centre coordinates are now given in degrees.
Step18: Figure 4.5.6
Step19: We create the dimensions of our visibility plane.
Step20: We create our fully-filled visibility plane. With a "perfect" interferometer, we could sample the entire $uv$-plane. Since we only have a finite number of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilities $V(u,v)$ via the Fourier transform. For a bunch of point sources we can therefore write
Step21: Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.
Step22: Figure 4.5.7
Step23: Figure 4.5.8
Step24: Figure 4.5.9 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: <a id='beginning'></a> <!--\label{beginning}-->
* Outline
* Glossary
* 4. The Visibility Space
* Previous: 4.4 The Visibility Function
* Next: 4.5.2 UV Coverage: Improving Your Coverage
Import standard modules:
End of explanation
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
ant1 = np.array([-500e3,500e3,0]) # in m
ant2 = np.array([500e3,-500e3,+10]) # in m
Explanation: 4.5.1 UV coverage : UV tracks
The objective of $\S$ 4.5.1 ⤵ and $\S$ 4.5.2 ➞ is to give you a glimpse into the process of aperture synthesis. An interferometer measures components of the Fourier Transform of the sky by sampling the visibility function, $\mathcal{V}$. This collection of samples lives in ($u$, $v$, $w$) space, and is often projected onto the so-called $uv$-plane.
In $\S$ 4.5.1 ⤵, we will focus on the way the visibility function is sampled. This sampling is a function of the interferometer's configuration, the direction of the source and the observation time.
In $\S$ 4.5.2 ➞, we will see how this sampling can be improved by using certain observing techniques.
4.5.1.1 The projected baseline with time: the $uv$ track
A projected baseline depends on a baseline's coordinates, and the direction being observed in the sky. It corresponds to the baseline as seen from the source. The projected baseline is what determines the spatial frequency of the sky that the baseline will measure. As the Earth rotates, the projected baseline and its corresponding spatial frequency (defined by the baseline's ($u$, $v$)-coordinates) vary slowly in time, generating a path in the $uv$-plane.
We will now generate test cases to see what locus the path takes, and how it can be predicted depending on the baseline's geometry.
4.5.1.1.1 Baseline projection as seen from the source
Let's generate one baseline from two antennas Ant$_1$ and Ant$_2$.
End of explanation
b_ENU = ant2-ant1 # baseline
D = np.sqrt(np.sum((b_ENU)**2)) # |b|
print(str(D/1000)+" km")
Explanation: Let's express the corresponding physical baseline in ENU coordinates.
End of explanation
L = (np.pi/180)*(45+0./60+0./3600) # Latitude in radians
A = np.arctan2(b_ENU[0],b_ENU[1])
print("Baseline Azimuth="+str(np.degrees(A))+"°")
E = np.arcsin(b_ENU[2]/D)
print("Baseline Elevation="+str(np.degrees(E))+"°")
%matplotlib nbagg
plotBL.sphere(ant1,ant2,A,E,D,L)
Explanation: Let's place the interferometer at a latitude $L_a=+45^\circ00'00''$.
End of explanation
# Observation parameters
c = 3e8 # Speed of light
f = 1420e6 # Frequency (1420 MHz)
lam = c/f # Wavelength
dec = (np.pi/180)*(-74-39.0/60-37.481/3600) # Declination
time_steps = 600 # Time Steps
h = np.linspace(-4,4,num=time_steps)*np.pi/12 # Hour angle window
Explanation: Figure 4.5.1: A baseline located at +45$^\circ$ as seen from the sky. This plot is interactive and can be rotated in 3D to see different baseline projections, depending on the position of the source w.r.t. the physical baseline.
On the interactive plot above, we represent a baseline located at +45$^\circ$. It is aligned with the local south-west/north-east axis, as seen from the sky frame of reference. By rotating the sphere westward, you can simulate the variation of the projected baseline as seen from a source in apparent motion on the celestial sphere.
4.5.1.1.2 Coordinates of the baseline in the ($u$,$v$,$w$) plane
We will now simulate an observation to study how a projected baseline will change with time. We will position this baseline at a South African latitude. We first need the expression of the physical baseline in a convenient reference frame, attached to the source in the sky.
In $\S$ 4.2 ➞, we linked the equatorial coordinates of the baseline to the ($u$,$v$,$w$) coordinates through the transformation matrix:
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\frac{1}{\lambda}
\begin{pmatrix}
\sin H_0 & \cos H_0 & 0\\
-\sin \delta_0 \cos H_0 & \sin\delta_0\sin H_0 & \cos\delta_0\\
\cos \delta_0 \cos H_0 & -\cos\delta_0\sin H_0 & \sin\delta_0
\end{pmatrix}
\begin{pmatrix}
X\\
Y\\
Z
\end{pmatrix}
\end{equation}
<a id="vis:eq:451"></a> <!---\label{vis:eq:451}--->
\begin{equation}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix}
=|\mathbf{b}|
\begin{bmatrix}
\cos L_a \sin \mathcal{E} - \sin L_a \cos \mathcal{E} \cos \mathcal{A}\\
\cos \mathcal{E} \sin \mathcal{A}\\
\sin L_a \sin \mathcal{E} + \cos L_a \cos \mathcal{E} \cos \mathcal{A}
\end{bmatrix}
\end{equation}
Equation 4.5.1
This expression of $\mathbf{b}$ is a function of ($\mathcal{A}$,$\mathcal{E}$), and therefore of ($X$,$Y$,$Z$) in the equatorial frame of reference.
4.5.1.1.2 Observation parameters
Let's define an arbitrary set of observation parameters to mimic a real observation.
Latitude of the baseline: $L_a=-30^\circ43'17.34''$
Declination of the observation: $\delta=-74^\circ39'37.481''$
Duration of the observation: $\Delta \text{HA}=[-4^\text{h},4^\text{h}]$
Time steps: 600
Frequency: 1420 MHz
End of explanation
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
L = (np.pi/180)*(-30-43.0/60-17.34/3600)
A=np.arctan2(b_ENU[0],b_ENU[1])
print("Azimuth=",A*(180/np.pi))
E=np.arcsin(b_ENU[2]/D)
print("Elevation=",E*(180/np.pi))
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
Explanation: 4.5.1.1.3 Computing the projected baselines in ($u$,$v$,$w$) coordinates as a function of time
As seen previously, we convert the baseline coordinates using the previous matrix transformation.
End of explanation
u = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
Explanation: As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\S$ 4.2.2 ➞:
$\lambda u = X \sin H + Y \cos H$
$\lambda v= -X \sin \delta \cos H + Y \sin\delta\sin H + Z \cos\delta$
$\lambda w= X \cos \delta \cos H -Y \cos\delta\sin H + Z \sin\delta$
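Since these three expressions are reused several times below with different declinations, a small helper (not part of the original notebook, just a possible refactor) makes the repetition explicit:
def uv_track(X, Y, Z, h, dec, lam):
    """Return (u, v, w) in kilo-wavelengths for hour-angle array h and declination dec."""
    u = (np.sin(h)*X + np.cos(h)*Y)/lam/1e3
    v = (-np.sin(dec)*np.cos(h)*X + np.sin(dec)*np.sin(h)*Y + np.cos(dec)*Z)/lam/1e3
    w = (np.cos(dec)*np.cos(h)*X - np.cos(dec)*np.sin(h)*Y + np.sin(dec)*Z)/lam/1e3
    return u, v, w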
End of explanation
%matplotlib nbagg
plotBL.UV(u,v,w)
Explanation: We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. Let's plot it in $uvw$ space and its projection in $uv$ space.
End of explanation
%matplotlib inline
from matplotlib.patches import Ellipse
# parameters of the UVtrack as an ellipse
a=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b=a*np.sin(dec) # minor axis
v0=Z/lam*np.cos(dec)/1e3 # center of ellipse
plotBL.UVellipse(u,v,w,a,b,v0)
Explanation: Figure 4.5.2: $uvw$ track derived from the simulation and projection in the $uv$-plane.
The tracks in $uvw$ space are curves and their projections onto the $uv$ plane are arcs. Let us focus on the track's projection in this plane. To get observation-independent knowledge of the track, we can try to combine the three equations of $u$, $v$ and $w$, the aim being to eliminate $H$. We end up with an equation linking $u$, $v$, $X$ and $Y$ (the full derivation can be found in $\S$ A.3 ➞):
$$\boxed{u^2 + \left[ \frac{v -\frac{Z}{\lambda} \cos \delta}{\sin \delta} \right]^2 = \left[ \frac{X}{\lambda} \right]^2 + \left[ \frac{Y}{\lambda} \right]^2}$$
One can note that in this particular case, the $uv$ track takes on the form of an ellipse.
<span style="background-color:cyan">TLG:GM: Check if the italic words are in the glossary. </span>
This ellipse is centered at $(0,\frac{Z}{\lambda} \cos \delta)$ in the ($u$,$v$) plane.
The major axis is $a=\frac{\sqrt{X^2 + Y^2}}{\lambda}$.
The minor axis (along the axis $v$) will be a function of $Z$, $\delta$ and $a$.
We can check this by plotting the theoretical ellipse over the observed portion of the track. (You can fall back to the duration of the observation to see that the track is mapping this ellipse exactly).
End of explanation
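A small optional check (not part of the original notebook): assuming u, v, a, v0, dec and numpy-as-np are still defined from the cells above, every sampled ($u$,$v$) point should satisfy the ellipse equation just derived.
# Optional sanity check: u^2 + ((v - v0)/sin(dec))^2 should equal a^2 at every time step
# (all quantities in kilo-lambda, as computed above).
print(np.allclose(u**2 + ((v - v0)/np.sin(dec))**2, a**2))  # expected: True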
L=np.radians(90.)
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
A=np.arctan2(b_ENU[0],b_ENU[1])
print("Azimuth=",A*(180/np.pi))
E=np.arcsin(b_ENU[2]/D)
print("Elevation=",E*(180/np.pi))
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
Explanation: Figure 4.5.3: The blue (resp. red) curve is the $uv$ track of baseline $\mathbf{b}_{12}$ (resp. $\mathbf{b}_{21}$). As $I_\nu$ is real, the real part of the visibility $\mathcal{V}$ is even and the imaginary part is odd, making $\mathcal{V}(-u,-v)=\mathcal{V}^*$. This implies that one baseline automatically provides a measurement of the visibility and its complex conjugate at ($-u$,$-v$).
4.5.1.2 Special cases
4.5.1.2.1 The Polar interferometer
Let's place one baseline at the North Pole. The local zenith then corresponds to the North Celestial Pole (NCP) at $\delta=90^\circ$. As seen from the NCP, the baseline rotates and the projected baseline corresponds to the physical baseline; this is the only configuration where this happens.
If $\mathbf{b}$ rotates, we can guess that the $uv$ tracks will be perfect circles. Let's check:
End of explanation
dec=np.radians(90.)
uNCP = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vNCP = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wNCP = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aNCP=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bNCP=aNCP*np.sin(dec) # minor axis
v0NCP=Z/lam*np.cos(dec)/1e3 # center of ellipse
Explanation: Let's compute the $uv$ tracks of an observation of the NCP ($\delta=90^\circ$):
End of explanation
dec=np.radians(30.)
u30 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v30 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w30 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a30=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b30=a30*np.sin(dec) # minor axis
v030=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u30,v30,w30,a30,b30,v030)
plotBL.UVellipse(uNCP,vNCP,wNCP,aNCP,bNCP,v0NCP)
Explanation: Let's compute the uv tracks when observing a source at $\delta=30^\circ$:
End of explanation
L=np.radians(0.)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
# At local zenith == Celestial Equator
dec=np.radians(0.)
uEQ = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vEQ = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wEQ = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aEQ=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bEQ=aEQ*np.sin(dec) # minor axis
v0EQ=Z/lam*np.cos(dec)/1e3 # center of ellipse
# Close to Zenith
dec=np.radians(10.)
u10 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v10 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w10 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a10=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b10=a10*np.sin(dec) # minor axis
v010=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u10,v10,w10,a10,b10,v010)
plotBL.UVellipse(uEQ,vEQ,wEQ,aEQ,bEQ,v0EQ)
Explanation: Figure 4.5.4: $uv$ track for a baseline at the pole observing at $\delta=90^\circ$ (NCP) and at $\delta=30^\circ$ with the same color conventions as the previous figure.
When observing a source at declination $\delta$, we still obtain an elliptical shape, but centered at (0,0). In the case of a polar interferometer, the full $uv$ track can be covered in only 12 hours, due to the symmetry of the baseline.
4.5.1.2.2 The Equatorial interferometer
Let's consider the other extreme scenario: this time, we position the interferometer at the equator. The local zenith is crossed by the Celestial Equator at $\delta=0^\circ$. As seen from the celestial equator, the baseline will not rotate and the projected baseline will no longer correspond to the physical baseline. This configuration is the only case where this happens.
If $\mathbf{b}$ is not rotating, we can intuitively guess that the $uv$ tracks will be straight lines.
End of explanation
H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians
d = 100 #We assume that we have already divided by wavelength
delta = 60*(np.pi/180) #Declination of 60 degrees, converted to radians
u_60 = d*np.cos(H)
v_60 = d*np.sin(H)*np.sin(delta)
Explanation: Figure 4.5.5: $uv$ track for a baseline at the equator observing at $\delta=0^\circ$ and at $\delta=10^\circ$, with the same color conventions as the previous figure.
An equatorial interferometer observing its zenith will see radio sources crossing the sky on straight, linear paths. Therefore, they will trace straight lines in the $uv$ plane.
4.5.1.2.3 The East-West array <a id='vis:sec:ew'></a> <!--\label{vis:sec:ew}-->
The East-West array is the special case of an interferometer with physical baselines aligned with the East-West direction in the ground-based frame of reference. They have the convenient property of giving a $uv$ coverage which lies entirely on a plane.
If the baseline is aligned with the East-West direction, then the Elevation $\mathcal{E}$ of the baseline is zero and the Azimuth $\mathcal{A}$ is $\frac{\pi}{2}$. Eq. 4.5.1 ⤵ then simplifies considerably:
The only non-zero component of the baseline will be its $Y$-component.
\begin{equation}
\frac{1}{\lambda}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix}
=
|\mathbf{b_\lambda}|
\begin{bmatrix}
\cos L_a \sin 0 - \sin L_a \cos 0 \cos \frac{\pi}{2}\\
\cos 0 \sin \frac{\pi}{2} \\
\sin L_a \sin 0 + \cos L_a \cos 0 \cos \frac{\pi}{2}
\end{bmatrix}
=
\begin{bmatrix}
0\\
|\mathbf{b_\lambda}|\\
0
\end{bmatrix}
\end{equation}
If we observe a source at declination $\delta_0$ with varying Hour Angle, $H$, we obtain:
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
\sin H & \cos H & 0\\
-\sin \delta_0 \cos H & \sin\delta_0\sin H & \cos\delta_0\\
\cos \delta_0 \cos H & -\cos\delta_0\sin H & \sin\delta_0
\end{pmatrix}
\begin{pmatrix}
0\\
|\mathbf{b_\lambda}| \\
0
\end{pmatrix}
\end{equation}
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
|\mathbf{b_\lambda}| \cos H \\
|\mathbf{b_\lambda}| \sin\delta_0 \sin H\\
-|\mathbf{b_\lambda}|\cos\delta_0\sin H
\end{pmatrix}
\end{equation}
when $H = 6^\text{h}$ (West)
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
0 \\
|\mathbf{b_\lambda}|\sin\delta_0\\
|\mathbf{b_\lambda}|\cos\delta_0
\end{pmatrix}
\end{equation}
when $H = 0^\text{h}$ (South)
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
|\mathbf{b_\lambda}| \\
0\\
0
\end{pmatrix}
\end{equation}
when $H = -6^\text{h}$ (East)
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
0 \\
-|\mathbf{b_\lambda}|\sin\delta_0\\
-|\mathbf{b_\lambda}|\cos\delta_0
\end{pmatrix}
\end{equation}
In this case, one can notice that we always have a relationship between $u$, $v$ and $|\mathbf{b_\lambda}|$:
$$ u^2+\left( \frac{v}{\sin\delta_0}\right) ^2=|\mathbf{b_\lambda}|^2$$
<div class=warn>
<b>Warning:</b> The $\sin\delta_0$ factor, appearing in the previous equation, can be interpreted as a compression factor.
</div>
4.5.1.3 Sampling the visibility plane with $uv$-tracks
4.5.1.3.1 Simulating a baseline
When we have an EW baseline, some equations simplify.
Firstly, $XYZ = [0~d~0]^T$, where $d$ is the baseline length measured in wavelengths.
Secondly, we have the following relationships: $u = d\cos(H)$, $v = d\sin(H)\sin(\delta)$,
where $H$ is the hour angle of the field center and $\delta$ its declination.
In this section, we will plot the $uv$-coverage of an EW-baseline whose field center is at two different declinations.
End of explanation
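As a quick sanity check of the compression relation derived above (a sketch, assuming u_60, v_60, d, delta and H from the cell above), the sampled track should satisfy $u^2+(v/\sin\delta_0)^2=d^2$; a second, arbitrarily chosen declination is also computed for the comparison mentioned above.
# Verify u^2 + (v/sin(delta))^2 = d^2 for the delta=60 degree track.
print(np.allclose(u_60**2 + (v_60/np.sin(delta))**2, d**2))  # expected: True
# A second field-centre declination (hypothetical choice: 30 degrees) for comparison.
delta_30 = 30*(np.pi/180)
u_30 = d*np.cos(H)
v_30 = d*np.sin(H)*np.sin(delta_30)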
RA_sources = np.array([5+30.0/60,5+32.0/60+0.4/3600,5+36.0/60+12.8/3600,5+40.0/60+45.5/3600])
DEC_sources = np.array([60,60+17.0/60+57.0/3600,61+12.0/60+6.9/3600,61+56.0/60+34.0/3600])
Flux_sources_labels = np.array(["","1 Jy","0.5 Jy","0.2 Jy"])
Flux_sources = np.array([1,0.5,0.2]) #in Jy
step_size = 200
print("Phase center Source 1 Source 2 Source 3")
print(repr("RA="+str(RA_sources)).ljust(2))
print("DEC="+str(DEC_sources))
Explanation: <span style="background-color:red">TLG:AC: Add the following figures. This is specifically for an EW array. They will add some more insight. </span>
<img src='figures/EW_1_d.svg' width=40%>
<img src='figures/EW_2_d.svg' width=40%>
<img src='figures/EW_3_d.svg' width=40%>
4.5.1.3.2 Simulating the sky
Let us populate our sky with three sources, with positions given in RA ($\alpha$) and DEC ($\delta$):
* Source 1: (5h 32m 0.4s,60$^{\circ}$-17' 57'') - 1 Jy
* Source 2: (5h 36m 12.8s,-61$^{\circ}$ 12' 6.9'') - 0.5 Jy
* Source 3: (5h 40m 45.5s,-61$^{\circ}$ 56' 34'') - 0.2 Jy
We place the field center at $(\alpha_0,\delta_0) = $ (5h 30m,60$^{\circ}$).
End of explanation
RA_rad = np.array(RA_sources)*(np.pi/12)
DEC_rad = np.array(DEC_sources)*(np.pi/180)
RA_delta_rad = RA_rad-RA_rad[0]
l = np.cos(DEC_rad)*np.sin(RA_delta_rad)
m = (np.sin(DEC_rad)*np.cos(DEC_rad[0])-np.cos(DEC_rad)*np.sin(DEC_rad[0])*np.cos(RA_delta_rad))
print("l=",l*(180/np.pi))
print("m=",m*(180/np.pi))
point_sources = np.zeros((len(RA_sources)-1,3))
point_sources[:,0] = Flux_sources
point_sources[:,1] = l[1:]
point_sources[:,2] = m[1:]
Explanation: We then convert the ($\alpha$,$\delta$) to $l,m$: <span style="background-color:red">TLG:AC:Point to Chapter 3.</span>
* $l = \cos \delta \sin \Delta \alpha$
* $m = \sin \delta\cos\delta_0 -\cos \delta\sin\delta_0\cos\Delta \alpha$
* $\Delta \alpha = \alpha - \alpha_0$
End of explanation
%matplotlib inline
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.xlabel("$l$ [degrees]")
plt.ylabel("$m$ [degrees]")
plt.plot(l[0],m[0],"bx")
plt.plot(l[1:]*(180/np.pi),m[1:]*(180/np.pi),"ro")
counter = 1
for xy in zip(l[1:]*(180/np.pi)+0.25, m[1:]*(180/np.pi)+0.25):
ax.annotate(Flux_sources_labels[counter], xy=xy, textcoords='offset points',horizontalalignment='right',
verticalalignment='bottom')
counter = counter + 1
plt.grid()
Explanation: The source and phase centre coordinates are now given in degrees.
End of explanation
u = np.linspace(-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10, num=step_size, endpoint=True)
v = np.linspace(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10, num=step_size, endpoint=True)
uu, vv = np.meshgrid(u, v)
zz = np.zeros(uu.shape).astype(complex)
Explanation: Figure 4.5.6: Distribution of the simulated sky in the $l$,$m$ plane.
4.5.1.3.3 Simulating an observation
We will now create a fully-filled $uv$-plane, and sample it using the EW-baseline track we created in the first section. We will be ignoring the $w$-term for the sake of simplicity.
End of explanation
s = point_sources.shape
for counter in range(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
zz += A_i*np.exp(-2*np.pi*1j*(uu*l_i+vv*m_i))
zz = zz[:,::-1]
Explanation: We create the dimensions of our visibility plane.
End of explanation
u_track = u_60
v_track = v_60
z = np.zeros(u_track.shape).astype(complex)
s = point_sources.shape
for counter in range(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
z += A_i*np.exp(-1*2*np.pi*1j*(u_track*l_i+v_track*m_i))
Explanation: We create our fully-filled visibility plane. With a "perfect" interferometer, we could sample the entire $uv$-plane. Since we only have a finite number of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilities $V(u,v)$ via the Fourier transform. For a collection of point sources we can therefore write:
$$V(u,v)=\mathcal{F}\{I(l,m)\} = \mathcal{F}\left\{\sum_k A_k \delta(l-l_k,m-m_k)\right\} = \sum_k A_k e^{-2\pi i (ul_k+vm_k)}$$
Let's compute the total visibilities for our simulated sky.
End of explanation
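As an aside (a sketch, not part of the original notebook), the per-source loop above can be replaced by a single broadcasted computation; assuming point_sources, u_track, v_track and z are as defined above, the result should match z exactly.
# Vectorized direct Fourier transform over all sources at once (equivalent to the loop above).
A_all = point_sources[:, 0][:, np.newaxis]                        # fluxes, shape (n_sources, 1)
phase = u_track[np.newaxis, :]*point_sources[:, 1][:, np.newaxis] \
      + v_track[np.newaxis, :]*point_sources[:, 2][:, np.newaxis]
z_vec = np.sum(A_all*np.exp(-2j*np.pi*phase), axis=0)
print(np.allclose(z_vec, z))                                      # expected: True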
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(zz.real,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Real part of visibilities")
plt.subplot(122)
plt.imshow(zz.imag,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Imaginary part of visibilities")
Explanation: Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(z.real)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Real: sampled visibilities")
plt.subplot(122)
plt.plot(z.imag)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Imag: sampled visibilities")
Explanation: Figure 4.5.7: Real and imaginary parts of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.
We now plot the sampled visibilities as a function of time-slots, i.e. $V(u_t(t_s),v_t(t_s))$.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(abs(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Amplitude of visibilities")
plt.subplot(122)
plt.imshow(np.angle(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Phase of visibilities")
Explanation: Figure 4.5.8: Real and imaginary parts of the visibility sampled by the black curve in Fig. 4.5.7, plotted as a function of time.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(abs(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Abs: sampled visibilities")
plt.subplot(122)
plt.plot(np.angle(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Phase: sampled visibilities")
Explanation: Figure 4.5.9: Amplitude and Phase of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.
End of explanation |
3,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K Nearest Neighbors Classifiers
So far we've covered learning via probability (naive Bayes) and learning via errors (regression). Here we'll cover learning via similarity. This means we look for the datapoints that are most similar to the observation we are trying to predict.
Let's start with the simplest example
Step1: The simplest form of a similarity model is the Nearest Neighbor model. This works quite simply
Step2: It's as simple as that. Looks like our model is predicting that a 24-loudness, 190-second-long song is not jazz. All it takes to train the model is a dataframe of independent variables and a dataframe of dependent outcomes.
You'll note that for this example, we used the KNeighborsClassifier method from SKLearn. This is because Nearest Neighbor is a simplification of K-Nearest Neighbors. The jump, however, isn't that far.
K-Nearest Neighbors
K-Nearest Neighbors (or "KNN") is the logical extension of Nearest Neighbor. Instead of looking at just the single nearest datapoint to predict an outcome, we look at several of the nearest neighbors, with $k$ representing the number of neighbors we choose to look at. Each of the $k$ neighbors gets to vote on what the predicted outcome should be.
This does a couple of valuable things. Firstly, it smooths out the predictions. If only one neighbor gets to influence the outcome, the model explicitly overfits to the training data. Any single outlier can create pockets of one category prediction surrounded by a sea of the other category.
This also means instead of just predicting classes, we get implicit probabilities. If each of the $k$ neighbors gets a vote on the outcome, then the probability of the test example being from any given class $i$ is
Step3: Now our test prediction has changed. In using the five nearest neighbors it appears that there were two votes for rock and three for jazz, so it was classified as a jazz song. This is different than our simpler Nearest Neighbors model. While the closest observation was in fact rock, there are more jazz songs in the nearest $k$ neighbors than rock.
We can visualize our decision bounds with something called a mesh. This allows us to generate a prediction over the whole space. Read the code below and make sure you can pull out what the individual lines do, consulting the documentation for unfamiliar methods if necessary.
Step4: Looking at the visualization above, any new point that fell within a blue area would be predicted to be jazz, and any point that fell within a brown area would be predicted to be rock.
The boundaries above are strangely jagged here, and we'll get into that in more detail in the next lesson.
Also note that the visualization isn't completely continuous. There are an infinite number of points in this space, and we can't calculate the value for each one. That's where the mesh comes in. We set our mesh size to 4.0 above (h = 4.0), which means we calculate the value for each point in a grid where the points are spaced 4.0 away from each other.
You can make the mesh size smaller to get a more continuous visualization, but at the cost of a more computationally demanding calculation. In the cell below, recreate the plot above with a mesh size of 10.0. Then reduce the mesh size until you get a plot that looks good but still renders in a reasonable amount of time. When do you get a visualization that looks acceptably continuous? When do you start to get a noticeable delay?
Step5: Now you've built a KNN model!
Challenge | Python Code:
music = pd.DataFrame()
# Some data to play with.
music['duration'] = [184, 134, 243, 186, 122, 197, 294, 382, 102, 264,
205, 110, 307, 110, 397, 153, 190, 192, 210, 403,
164, 198, 204, 253, 234, 190, 182, 401, 376, 102]
music['loudness'] = [18, 34, 43, 36, 22, 9, 29, 22, 10, 24,
20, 10, 17, 51, 7, 13, 19, 12, 21, 22,
16, 18, 4, 23, 34, 19, 14, 11, 37, 42]
# We know whether the songs in our training data are jazz or not.
music['jazz'] = [ 1, 0, 0, 0, 1, 1, 0, 1, 1, 0,
0, 1, 1, 0, 1, 1, 0, 1, 1, 1,
1, 1, 1, 1, 0, 0, 1, 1, 0, 0]
# Look at our data.
plt.scatter(
music[music['jazz'] == 1].duration,
music[music['jazz'] == 1].loudness,
color='red'
)
plt.scatter(
music[music['jazz'] == 0].duration,
music[music['jazz'] == 0].loudness,
color='blue'
)
plt.legend(['Jazz', 'Rock'])
plt.title('Jazz and Rock Characteristics')
plt.xlabel('Duration')
plt.ylabel('Loudness')
plt.show()
Explanation: K Nearest Neighbors Classifiers
So far we've covered learning via probability (naive Bayes) and learning via errors (regression). Here we'll cover learning via similarity. This means we look for the datapoints that are most similar to the observation we are trying to predict.
Let's start with the simplest example: Nearest Neighbor.
Nearest Neighbor
Let's use this example: classifying a song as either "rock" or "jazz". For this data we have measures of duration in seconds and loudness in loudness units (we're not going to be using decibels since that isn't a linear measure, which would create some problems we'll get into later).
End of explanation
from sklearn.neighbors import KNeighborsClassifier
neighbors = KNeighborsClassifier(n_neighbors=1)
X = music[['loudness', 'duration']]
Y = music.jazz
neighbors.fit(X,Y)
## Predict for a song with 24 loudness that's 190 seconds long.
neighbors.predict([[24, 190]])
Explanation: The simplest form of a similarity model is the Nearest Neighbor model. This works quite simply: when trying to predict an observation, we find the closest (or nearest) known observation in our training data and use that value to make our prediction. Here we'll use the model as a classifier, the outcome of interest will be a category.
To find which observation is "nearest" we need some kind of way to measure distance. Typically we use Euclidean distance, the standard distance measure that you're familiar with from geometry. With one observation in n-dimensions $(x_1, x_2, ...,x_n)$ and the other $(w_1, w_2,...,w_n)$:
$$ \sqrt{(x_1-w_1)^2 + (x_2-w_2)^2+...+(x_n-w_n)^2} $$
You might recognize this formula, (taking distances, squaring them, adding the squares together, and taking the root) as a generalization of the Pythagorean theorem into n-dimensions. You can technically define any distance measure you want, and there are times where this customization may be valuable. As a general standard, however, we'll use Euclidean distance.
Now that we have a distance measure from each point in our training data to the point we're trying to predict the model can find the datapoint with the smallest distance and then apply that category to our prediction.
Let's try running this model, using the SKLearn package.
End of explanation
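To make the distance measure concrete, here is a minimal sketch (assuming the music dataframe and numpy-as-np from the cells above) computing the Euclidean distance between the first training song and the query point used in this lesson:
# Euclidean distance between the first training song and the (loudness=24, duration=190) query.
p = np.array([music['loudness'][0], music['duration'][0]])
q = np.array([24, 190])
print(np.sqrt(np.sum((p - q)**2)))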
neighbors = KNeighborsClassifier(n_neighbors=5)
X = music[['loudness', 'duration']]
Y = music.jazz
neighbors.fit(X,Y)
## Predict for a 24 loudness, 190 seconds long song.
print(neighbors.predict([[24, 190]]))
print(neighbors.predict_proba([[24, 190]]))
Explanation: It's as simple as that. Looks like our model is predicting that a 24-loudness, 190-second-long song is not jazz. All it takes to train the model is a dataframe of independent variables and a dataframe of dependent outcomes.
You'll note that for this example, we used the KNeighborsClassifier method from SKLearn. This is because Nearest Neighbor is a simplification of K-Nearest Neighbors. The jump, however, isn't that far.
K-Nearest Neighbors
K-Nearest Neighbors (or "KNN") is the logical extension of Nearest Neighbor. Instead of looking at just the single nearest datapoint to predict an outcome, we look at several of the nearest neighbors, with $k$ representing the number of neighbors we choose to look at. Each of the $k$ neighbors gets to vote on what the predicted outcome should be.
This does a couple of valuable things. Firstly, it smooths out the predictions. If only one neighbor gets to influence the outcome, the model explicitly overfits to the training data. Any single outlier can create pockets of one category prediction surrounded by a sea of the other category.
This also means instead of just predicting classes, we get implicit probabilities. If each of the $k$ neighbors gets a vote on the outcome, then the probability of the test example being from any given class $i$ is:
$$ \frac{votes_i}{k} $$
And this applies for all classes present in the training set. Our example only has two classes, but this model can accommodate as many classes as the data set necessitates. To come up with a classifier prediction it simply takes the class for which that fraction is maximized.
Let's expand our initial nearest neighbors model from above to a KNN with a $k$ of 5.
End of explanation
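To connect the $votes_i/k$ formula above to the model output, a short sketch (assuming the fitted neighbors model and Y from the previous cell) lists the classes of the five nearest training points and the resulting vote fraction:
# Indices of the 5 nearest training songs to the query point, and their class votes.
idx = neighbors.kneighbors([[24, 190]], return_distance=False)[0]
votes = list(Y.iloc[idx])
print(votes, "-> P(jazz) =", sum(votes)/len(votes))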
# Our data. Converting from data frames to arrays for the mesh.
X = np.array(X)
Y = np.array(Y)
# Mesh size.
h = 4.0
# Plot the decision boundary. We assign a color to each point in the mesh.
x_min = X[:, 0].min() - .5
x_max = X[:, 0].max() + .5
y_min = X[:, 1].min() - .5
y_max = X[:, 1].max() + .5
xx, yy = np.meshgrid(
np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h)
)
Z = neighbors.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot.
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(6, 4))
plt.set_cmap(plt.cm.Paired)
plt.pcolormesh(xx, yy, Z)
# Add the training points to the plot.
plt.scatter(X[:, 0], X[:, 1], c=Y)
plt.xlabel('Loudness')
plt.ylabel('Duration')
plt.title('Mesh visualization')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
Explanation: Now our test prediction has changed. In using the five nearest neighbors it appears that there were two votes for rock and three for jazz, so it was classified as a jazz song. This is different than our simpler Nearest Neighbors model. While the closest observation was in fact rock, there are more jazz songs in the nearest $k$ neighbors than rock.
We can visualize our decision bounds with something called a mesh. This allows us to generate a prediction over the whole space. Read the code below and make sure you can pull out what the individual lines do, consulting the documentation for unfamiliar methods if necessary.
End of explanation
# Play with different mesh sizes here.
# Our data. Converting from data frames to arrays for the mesh.
X = np.array(X)
Y = np.array(Y)
# Mesh size.
h = 0.5
# Plot the decision boundary. We assign a color to each point in the mesh.
x_min = X[:, 0].min() - .5
x_max = X[:, 0].max() + .5
y_min = X[:, 1].min() - .5
y_max = X[:, 1].max() + .5
xx, yy = np.meshgrid(
np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h)
)
Z = neighbors.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot.
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(6, 4))
plt.set_cmap(plt.cm.Paired)
plt.pcolormesh(xx, yy, Z)
# Add the training points to the plot.
plt.scatter(X[:, 0], X[:, 1], c=Y)
plt.xlabel('Loudness')
plt.ylabel('Duration')
plt.title('Mesh visualization')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
Explanation: Looking at the visualization above, any new point that fell within a blue area would be predicted to be jazz, and any point that fell within a brown area would be predicted to be rock.
The boundaries above are strangely jagged here, and we'll get into that in more detail in the next lesson.
Also note that the visualization isn't completely continuous. There are an infinite number of points in this space, and we can't calculate the value for each one. That's where the mesh comes in. We set our mesh size to 4.0 above (h = 4.0), which means we calculate the value for each point in a grid where the points are spaced 4.0 away from each other.
You can make the mesh size smaller to get a more continuous visualization, but at the cost of a more computationally demanding calculation. In the cell below, recreate the plot above with a mesh size of 10.0. Then reduce the mesh size until you get a plot that looks good but still renders in a reasonable amount of time. When do you get a visualization that looks acceptably continuous? When do you start to get a noticeable delay?
End of explanation
from heapq import nsmallest
#first, find the nearest neighbors
def nearest_neighbors (k, currentPoint):
predictionSet = list()
# identify the k nearest neighbors
distances = list()
for x in X:
distance = np.sqrt((x[0]-currentPoint[0])**2 + (x[1]-currentPoint[1])**2)
distances.append(distance)
# Choose the k smallest distances
kneighbor_distances = nsmallest(k, distances)
for i in range(k):
this_neighbor = distances.index(kneighbor_distances[i])
predictionSet.append(Y[this_neighbor])
# identify the ratio of the target class within the k neighbors
predictionProb = sum(predictionSet)/len(predictionSet)
# identify the highest-probability prediction
if predictionProb >= 0.50:
return 1
elif predictionProb < 0.50:
return 0
#Run the Code! Try your own parameters
nearest_neighbors(7, [30,90])
Explanation: Now you've built a KNN model!
Challenge: Implement the Nearest Neighbor algorithm
The Nearest Neighbor algorithm is extremely simple. So simple, in fact, that you should be able to build it yourself from scratch using the Python you already know. Code a Nearest Neighbors algorithm that works for two dimensional data. You can use either arrays or dataframes to do this. Test it against the SKLearn package on the music dataset from above to ensure that it's correct. The goal here is to confirm your understanding of the model and continue to practice your Python skills. We're just expecting a brute force method here. After doing this, look up "ball tree" methods to see a more performant algorithm design.
End of explanation |
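As a pointer for the "ball tree" follow-up mentioned above, here is a brief sketch (assuming X and Y are the numpy arrays built earlier) of scikit-learn's BallTree performing the same neighbour lookup more efficiently:
from sklearn.neighbors import BallTree
tree = BallTree(X)                        # X holds the (loudness, duration) rows
dist, ind = tree.query([[24, 190]], k=5)  # distances and indices of the 5 nearest songs
print(Y[ind[0]])                          # their classes; a majority vote gives the prediction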
3,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: Problem set #3
Step11: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step12: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step13: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step14: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step15: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step16: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
numbers = [int(i) for i in numbers_str.split(',')] # replace 'None' with an expression, as described above
max(numbers)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
sorted(numbers)[-10:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
sorted([i for i in numbers if i % 3 == 0])
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
from math import sqrt
[sqrt(i) for i in numbers if i < 100]
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
[i['name'] for i in planets if i['diameter'] > 4]
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
sum([i['mass'] for i in planets])
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
[i['name'] for i in planets if 'giant' in i['type']]
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
[i['name'] for i in sorted(planets, key=lambda planet: planet['moons'])]
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
[line for line in poem_lines if re.search(r'\b\w{4}\s\w{4}\b', line)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
[line for line in poem_lines if re.search(r'\b\w{5}\W*$', line)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
re.findall('I\s(.*?)\s', all_lines)
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
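An optional, slightly more defensive variant (not required by the exercise) anchors I as a whole word and captures only word characters; it returns the same list for this poem.
re.findall(r'\bI\b (\w+)', all_lines)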
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
menu = []
for item in entrees:
dictitem = {}
dictitem['name'] = re.search('(.*)\s\$', item).group(1) # group(1) is the first captured group; group(0) would be the whole match
dictitem['price'] = float(re.search('\d{1,2}\.\d{2}', item).group())
dictitem['vegetarian'] = bool(re.match('.*v$', item))
menu.append(dictitem)
menu
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation |
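For comparison, an alternative sketch (not required by the assignment) extracts all three fields with a single pattern and named groups; note that, unlike the expected output above, it strips the trailing space from the dish name.
pattern = re.compile(r'^(?P<name>.*?)\s*\$(?P<price>\d+\.\d{2})(?:\s*-\s*(?P<veg>v))?$')
menu_alt = []
for item in entrees:
    m = pattern.search(item)
    menu_alt.append({'name': m.group('name'),
                     'price': float(m.group('price')),
                     'vegetarian': m.group('veg') is not None})
menu_alt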
3,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
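For list-valued properties (Cardinality 1.N) such as 8.3 Nutrients Present, the "# PROPERTY VALUE(S)" comments point to the same DOC.set_value helper; a hedged sketch with hypothetical choices follows (the exact convention for supplying several values should be checked against the pyesdoc documentation):
# Hypothetical example only — one plausible pattern is one call per selected choice
# DOC.set_value("Nitrogen (N)")
# DOC.set_value("Iron (Fe)")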
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
3,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python Jupyter
Welcome to Jupyter, through this interface I will be showing you the following
Step1: Cleaning Up The Full Text
In order for better results from our analysis later we need to clean up the full text.
This could be a project within itself and will differ item to item so for the intial run I have just set the full text to get lowered so 'Canada' and 'canada' aren't considered two different words.
I've also provided a basic regex that you can uncomment to strip everything other than words from the full text
Step2: Basic Analysis
Now that we have the item's full text, we are going to use the Natural Language Toolkit (NLTK) to perform some analysis on it.
NLTK is a Python Library for working with written language data. It is free and very well documented. Many areas we'll be covering are treated in more detail in the NLTK Book, available for free online from here.
Note
Step3: Exploring Vocabulary
NLTK makes it really easy to get basic information about the size of a text and the complexity of its vocabulary.
len gives the number of symbols or 'tokens' in your text. This is the total number of words and items of punctuation.
set gives you a list of all the tokens in the text, without the duplicates.
Hence, len(set(fullText)) will give you the total number unique tokens. Remember this still includes punctuation.
sorted() places items in the list into alphabetical order, with punctuation symbols and capitalised words first.
Number of characters
Step4: Number of unique characters
Step5: List of unique characters
Step6: Get token count (words + symbols)
For our analysis, we want to break up the full text into words and punctuation, this step is called tokenization
Step7: Unique Token Count
Step8: Average number of times a word is used
We can investigate the lexical richness of a text. For example, by dividing the total number of words by the number of unique words, we can see the average number of times each word is used.
Step9: Number of times a specific word is used
Step10: Percentage of text that is a specific word
Step11: Exploring Text
Concordance
Step12: Words used in similar contexts
Step13: Common contexts
Common contexts allow us to examine just the contexts that are shared by two or more words, such as valley and river.
Step14: Longest words in the text
It is possible to select the longest words in a text, which may tell you something about its vocabulary and style
Step15: Collocations
We can also find words that typically occur together, which tend to be very specific to a text or genre of texts.
Step16: Graphing Data
Single Dispersion Plot
Step17: Multiple Dispersion Plot
Step18: Frequency distributions | Python Code:
import json
import requests
apiResponse = requests.get('https://oc-index.library.ubc.ca/collections/bcbooks/items/1.0059569').json()
item = apiResponse['data']
fullText = item['FullText'][0]['value']
print(fullText)
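# Optional sketch: list the metadata fields the API returned for this item before
# working with the full text; `item` is just the parsed JSON dictionary.
print(sorted(item.keys()))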
Explanation: Introduction to Python Jupyter
Welcome to Jupyter, through this interface I will be showing you the following:
Python - A programming language that lets you work quickly. - Documentation
NLTK - Natural Language Toolkit - a Python Library for working with written language data. - Documentation
Open Collections API - Our "Application Programming Interface" which will allow you to import full text. - Documentation
Python is a great language for data analysis, more experienced programmers might want to use R, but Python is a nice entry point for everyone.
If you don't know Python, or any programming for that matter, please remain calm you won't need to do any programming throughout this talk, however if you do know Python you can feel free to edit any of the code and have your notebook update accordingly.
Getting the Full Text
To begin with we are just going to get one item from the Open Collections API and perform some analysis on that. Later on we will look at getting entire collections, and performing searches via the API.
For our first item I have chosen:
https://open.library.ubc.ca/collections/bcbooks/items/1.0059569
The Open Collections API URL is:
https://oc-index.library.ubc.ca
So to access the item I have chosen via the API we would need to GET the data from:
https://oc-index.library.ubc.ca/collections/bcbooks/items/1.0059569
End of explanation
import re
fullTextLower = fullText.lower()
cleanFullText = fullTextLower
### To strip everything other than words uncomment below ###
pattern = re.compile('[\W_]+')
cleanFullText = pattern.sub(' ', cleanFullText)
print(cleanFullText)
Explanation: Cleaning Up The Full Text
In order to get better results from our analysis later, we need to clean up the full text.
This could be a project in itself and will differ from item to item, so for the initial run I have just lower-cased the full text so that 'Canada' and 'canada' aren't treated as two different words.
I've also provided a basic regex that you can uncomment to strip everything other than words from the full text
End of explanation
import nltk # imports all the nltk basics
nltk.download("punkt") # Word tokenizer
nltk.download("stopwords") # Stop words
from nltk import word_tokenize
Explanation: Basic Analysis
Now that we have the item's full text, we are going to use the Natural Language Toolkit (NLTK) to perform some analysis on it.
NLTK is a Python Library for working with written language data. It is free and very well documented. Many areas we'll be covering are treated in more detail in the NLTK Book, available for free online from here.
Note: NLTK provides tools for tasks ranging from very simple (counting words in a text) to very complex (writing and training parsers, etc.). Many advanced tasks are beyond the scope of this talk, but by the time we're done, you should understand Python and NLTK well enough to perform these tasks on your own!
Firstly, we will need to import NLTK.
End of explanation
len(fullText)
Explanation: Exploring Vocabulary
NLTK makes it really easy to get basic information about the size of a text and the complexity of its vocabulary.
len gives the number of symbols or 'tokens' in your text. This is the total number of words and items of punctuation.
set gives you a list of all the tokens in the text, without the duplicates.
Hence, len(set(fullText)) will give you the total number unique tokens. Remember this still includes punctuation.
sorted() places items in the list into alphabetical order, with punctuation symbols and capitalised words first.
Number of characters
End of explanation
len(set(fullText))
Explanation: Number of unique characters
End of explanation
sorted(set(fullText))[:50] # Limited to 50
Explanation: List of unique characters
End of explanation
tokens = word_tokenize(cleanFullText)
len(tokens)
Explanation: Get token count (words + symbols)
For our analysis, we want to break up the full text into words and punctuation; this step is called tokenization.
End of explanation
len(set(tokens))
Explanation: Unique Token Count
End of explanation
len(tokens)/len(set(tokens))
Explanation: Average number of times a word is used
We can investigate the lexical richness of a text. For example, by dividing the total number of words by the number of unique words, we can see the average number of times each word is used.
End of explanation
cleanFullText.count("vancouver")
Explanation: Number of times a specific word is used
End of explanation
100.0*fullText.count("and")/len(fullText)
Explanation: Percentage of text that is a specific word
End of explanation
text = nltk.Text(tokens)
text.concordance("vancouver")
Explanation: Exploring Text
Concordance
End of explanation
text.similar("miles")
Explanation: Words used in similar contexts — text.similar() lists words that appear in the same contexts as the given word.
End of explanation
text.common_contexts(["valley", "river"])
Explanation: Common contexts
Common contexts allow us to examine just the contexts that are shared by two or more words, such as valley and river.
End of explanation
v = set(text)
long_words = [word for word in v if len(word) > 15]
sorted(long_words)
Explanation: Longest words in the text
It is possible to select the longest words in a text, which may tell you something about its vocabulary and style
End of explanation
text.collocations()
Explanation: Collocations
We can also find words that typically occur together, which tend to be very specific to a text or genre of texts.
End of explanation
import numpy
# allow visuals to show up in this interface
%matplotlib inline
text.dispersion_plot(["river"])
Explanation: Graphing Data
Single Dispersion Plot
End of explanation
text.dispersion_plot(["miles", "sea", "lake", "land", "rest"])
Explanation: Multiple Dispersion Plot
End of explanation
from nltk import FreqDist
fdist = FreqDist(text)
fdist.most_common(50)
fdist.plot(25)
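# The stopwords corpus downloaded earlier is never used above. As a hedged extension,
# filtering stop words out before building the distribution usually surfaces more
# interesting vocabulary (this assumes the English stop word list suits this text).
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
content_tokens = [word for word in tokens if word not in stop_words]
fdist_content = FreqDist(content_tokens)
fdist_content.most_common(25)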
Explanation: Frequency distributions
End of explanation |
3,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
view_sentence_range[1]
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
#print(text)
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 0)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab, 0)}
print('int_to_vocab size:', len(int_to_vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
punctuation_to_token = {}
punctuation_to_token['.'] = '||period||'
punctuation_to_token[','] = '||comma||'
punctuation_to_token['"'] = '||quotation||'
punctuation_to_token[';'] = '||semicolon||'
punctuation_to_token['!'] = '||exclamation||'
punctuation_to_token['?'] = '||question||'
punctuation_to_token['('] = '||l-parentheses||'
punctuation_to_token[')'] = '||r-parentheses||'
punctuation_to_token['--'] = '||dash||'
punctuation_to_token['\n'] = '||return||'
return punctuation_to_token
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
print(len(int_to_vocab))
print(int_to_vocab[6778])
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * 2)
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)
#lstm_layers = 1
#cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.int32)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
#embedding = tf.Variable(tf.random_uniform((vocab_size+1, embed_dim), -1, 1))
embedding = tf.Variable(tf.truncated_normal((vocab_size+1, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
print("vocab_size:", vocab_size)
print("embed.shape:", embed.shape)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
print("inputs.shape:", inputs.shape)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) #need to specify dtype instead of initial_state
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
#embed_dim = 300
#embed = get_embed(input_data, vocab_size, embed_dim)
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
#logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=tf.nn.relu)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
tmp = []
tmp = [[data[0:2]], data[2:4]]
print(tmp)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
#print(int_text)
#print(batch_size, seq_length)
batches = []
num_of_batches = len(int_text) // (batch_size*seq_length)
print("num_of_batches:", num_of_batches)
for i in range(0, num_of_batches):
batch_of_input = []
batch_of_output = []
for j in range(0, batch_size):
top = i*seq_length + j*seq_length*num_of_batches
batch_of_input.append(int_text[top : top+seq_length])
batch_of_output.append(int_text[top+1 :top+1+seq_length])
batch = [batch_of_input, batch_of_output]
#print('batch', i, 'input:')
#print(batch_of_input)
#print('batch', i, 'output:')
#print(batch_of_output)
batches.append(batch)
return np.array(batches)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
#get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)
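# Hedged sanity check mirroring the worked example in the write-up below: for
# get_batches(list(range(1, 21)), 3, 2) the result should have shape
# (number of batches, 2, batch size, sequence length) == (3, 2, 3, 2).
sample_batches = get_batches(list(range(1, 21)), 3, 2)
print(sample_batches.shape)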
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.002
# Show stats for every n number of batches
show_every_n_batches = 53
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name('input:0')
Initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return input_tensor, Initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
#print(probabilities)
#print(int_to_vocab)
index = np.argmax(probabilities)
word = int_to_vocab[index]
#word = int_to_vocab.get(probabilities.argmax(axis=0))
return word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
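Picking the argmax every time tends to make the generated script loop on the most likely words. A common variation is to sample from the predicted distribution instead; a minimal sketch (assuming probabilities is a 1-D array over the vocabulary that sums to one):
def pick_word_sampled(probabilities, int_to_vocab):
    # Sample an index according to the predicted distribution rather than taking the argmax
    index = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[index]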
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
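# Optional sketch: persist the generated script to disk (the filename is hypothetical).
with open('generated_script.txt', 'w') as f:
    f.write(tv_script)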
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
3,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
This is an Notebook containing the examples from the Getting Started section in the documentation. Refer to the documentation for very verbose description of this code.
Optimizing a Policy
Step1: Lets take a look at what happened during the run. For this we can access the monitor and generate some plots.
Step2: Configuration
Step3: After changing these values, please run the cell which invokes optimizer.optimize again to see what happens.
Benchmark | Python Code:
# import the classes we need
from SafeRLBench.envs import LinearCar
from SafeRLBench.policy import LinearPolicy
from SafeRLBench.algo import PolicyGradient
# get an instance of `LinearCar` with the default arguments.
linear_car = LinearCar()
# we need a policy which maps R^2 to R
policy = LinearPolicy(2, 1)
# setup parameters
policy.parameters = [-1, -1, 1]
# plug the environment and policy into the algorithm
optimizer = PolicyGradient(linear_car, policy, estimator='central_fd')
# run optimization
optimizer.optimize()
Explanation: Getting Started
This is a notebook containing the examples from the Getting Started section in the documentation. Refer to the documentation for a more verbose description of this code.
Optimizing a Policy
End of explanation
import matplotlib.pyplot as plt
y = optimizer.monitor.rewards
plt.plot(range(len(y)), y)
plt.show()
Explanation: Let's take a look at what happened during the run. For this we can access the monitor and generate some plots.
End of explanation
# import the configuration object
from SafeRLBench import config
# setup stream handler
config.logger_add_stream_handler()
# setup logger level
config.logger_set_level(config.DEBUG)
# raise monitor verbosity
config.monitor_set_verbosity(2)
Explanation: Configuration
End of explanation
# import the best performance measure
from SafeRLBench.measure import BestPerformance
# import the Bench and BenchConfig
from SafeRLBench import Bench, BenchConfig
# define environment configuration.
envs = [[(LinearCar, {'horizon': 100})]]
# define algorithms configuration.
algs = [[
(PolicyGradient, [{
'policy': LinearPolicy(2, 1, par=[-1, -1, 1]),
'estimator': 'central_fd',
'var': var
} for var in [1, 1.5, 2, 2.5]])
]]
# instantiate BenchConfig
config = BenchConfig(algs, envs)
# instantiate the bench
bench = Bench(config, BestPerformance())
# configure to run in parallel
config.jobs_set(4)
bench()
bench.measures[0]
best_run = bench.measures[0].result[0][0]
monitor = best_run.get_alg_monitor()
best_trace = monitor.traces[monitor.rewards.index(max(monitor.rewards))]
y = [t[1][0] for t in best_trace]
x = range(len(y))
import matplotlib.pyplot as plt
plt.plot(x, y)
plt.show()
Explanation: After changing these values, please run the cell which invokes optimizer.optimize again to see what happens.
Benchmark
End of explanation |
3,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basics
Thunder provides data structures, read/write patterns, and simple processing of spatial and temporal data. All operations in Thunder are designed to scale to very large data sets through the distributed comptuing engine Spark, but also run on local data backed by numpy with an identical API.
We'll walk through a very simple example here as an introduction. You don't need Spark to run this example. First, we'll loading some toy example time series data (this requires an internet connection).
Step1: data is a Series object, which is a generic collection of one-dimensional array data sharing a common index. We can inspect it to see its shape, dtype, and the fact that it's currently in local mode.
Step2: If we had instead loaded using data = td.series.fromexample('fish', engine=sc) where sc is a SparkContext, it would be loaded in distributed 'spark' mode and all operations would be parallelized.
A Series object is just a wrapper for an n-dimensional array, where the final axis is an indexed one-dimensional array (typically a time series). First, we'll extract a random subset of records, after first filtering for standard deviation, and normalizing by a baseline, and then convert to a local numpy array and plot. Here and elsewhere, we'll use seaborn for styling figures, but this is entirely optional.
Step3: We can compute statistics on series data; here we compute the fourier amplitude and phase and plot a phase histogram.
Step4: For this Series, since the initial dimensions correspond to spatial coordinates, we can compute a statistic on each series, convert to a local array, and look at it as an image. Here, we compute the mean of each series.
Step5: To look at this array as an image, we'll use a helper function from the showit package.
Step6: The other primary data type in Thunder is images. Here we'll load an example of these data.
Step7: An Images object is also a wrapper for an n-dimensional array, where the first dimension indexes the images, the remaining dimensions are the images (if 2d) or volumes (if 3d).
Although images is not an array, we can index into it as though it was one. We can also pass it to functions that expect arrays, like plotting functions, and it'll automatically be converted to one; above we explicitly converted to an array, but here we'll skip that. Let's look at the first image.
Step8: We can apply image filtering operations to image image. If data are distributed, this is a great way to apply filtering in parallel over a large data set. | Python Code:
import thunder as td
series = td.series.fromexample('fish')
Explanation: Basics
Thunder provides data structures, read/write patterns, and simple processing of spatial and temporal data. All operations in Thunder are designed to scale to very large data sets through the distributed computing engine Spark, but also run on local data backed by numpy with an identical API.
We'll walk through a very simple example here as an introduction. You don't need Spark to run this example. First, we'll load some toy example time series data (this requires an internet connection).
End of explanation
series
Explanation: data is a Series object, which is a generic collection of one-dimensional array data sharing a common index. We can inspect it to see its shape, dtype, and the fact that it's currently in local mode.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
sns.set_context('notebook')
examples = series.filter(lambda x: x.std() > 6).normalize().sample(100).toarray()
plt.plot(series.index, examples.T);
Explanation: If we had instead loaded using data = td.series.fromexample('fish', engine=sc) where sc is a SparkContext, it would be loaded in distributed 'spark' mode and all operations would be parallelized.
A Series object is just a wrapper for an n-dimensional array, where the final axis is an indexed one-dimensional array (typically a time series). First, we'll extract a random subset of records, after first filtering for standard deviation, and normalizing by a baseline, and then convert to a local numpy array and plot. Here and elsewhere, we'll use seaborn for styling figures, but this is entirely optional.
End of explanation
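For reference, the distributed load mentioned above differs only in the engine argument; a minimal sketch (it assumes a live SparkContext named sc, so it is left commented out):
# Sketch: the same data loaded in distributed 'spark' mode (assumes a SparkContext sc)
# series_dist = td.series.fromexample('fish', engine=sc)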
phases = series.filter(lambda x: x.std() > 6).flatten().fourier(freq=1)[:,1].toarray()
plt.hist(phases);
Explanation: We can compute statistics on series data; here we compute the fourier amplitude and phase and plot a phase histogram.
End of explanation
statistic = series.map(lambda x: x.mean()).toarray()
statistic.shape
Explanation: For this Series, since the initial dimensions correspond to spatial coordinates, we can compute a statistic on each series, convert to a local array, and look at it as an image. Here, we compute the mean of each series.
End of explanation
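The same map/toarray pattern generalizes to any per-series summary; as a small sketch using only calls already shown here, a standard-deviation map would be:
# Sketch: per-series standard deviation, same pattern as the mean above
std_map = series.map(lambda x: x.std()).toarray()
std_map.shape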
from showit import tile
tile(statistic, axis=2);
Explanation: To look at this array as an image, we'll use a helper function from the showit package.
End of explanation
images = td.images.fromexample('mouse')
images
Explanation: The other primary data type in Thunder is images. Here we'll load an example of these data.
End of explanation
from showit import image
single = images[0, :, :]
image(single);
Explanation: An Images object is also a wrapper for an n-dimensional array, where the first dimension indexes the images, the remaining dimensions are the images (if 2d) or volumes (if 3d).
Although images is not an array, we can index into it as though it was one. We can also pass it to functions that expect arrays, like plotting functions, and it'll automatically be converted to one; above we explicitly converted to an array, but here we'll skip that. Let's look at the first image.
End of explanation
filtered = images.gaussian_filter(3).subsample(3)[0, :, :]
image(filtered);
Explanation: We can apply image filtering operations to each image. If data are distributed, this is a great way to apply filtering in parallel over a large data set.
End of explanation |
3,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running pyqz I
A) Installing and importing pyqz
Installing pyqz is best done via pip. You should then be able to import the package and check its version from within any Python shell
Step1: From v0.8.0 onwards, the plotting functions have been placed in a distinct module, which must be imported separately if you wish to exploit them.
Step2: B) Accessing MAPPINGS line ratio diagnostic grids
pyqz gives you easy access to specific MAPPINGS strong nebular line ratio diagnostic diagrams using pyqz.get_grid()
Step3: The main parameters of the MAPPINGS simulations can be specified via the following keywords
Step4: If you want to check how a given line ratio diagnostic diagram looks (and e.g. check whether the MAPPINGS grid is flat, or wrapped) for line ratios of your choice, you can use pyqz_plots.plot_grid()
Step5: You can check which version of MAPPINGS was used to generate the grids currently inside pyqz as follows
Step6: An important feature of pyqz is the auto-detection of wraps in the diagnostic grids, marked with red segments in the diagram, and returned as an array by the function pyqz.check_grid().
The <i>default</i> MAPPINGS grids shipped with pyqz are coarse. For various reasons better explained elsewhere (see the MAPPINGS documentation), only a few abundance values have matching stellar tracks <b>AND</b> stellar atmospheres. Hence, only a few abundance points can be simulated in a consistent fashion.
Rather than 1) interpolating between stellar tracks and stellar atmospheres in the abundance space and 2) running extra MAPPINGS models (which would use inconsistent & interpolated input), pyqz can directly <b>resample</b> each diagnostic grid (using the function pyqzt.refine_MVphotogrid(), see the docs for more info). The resampling is performed in the {LogQ and Tot[O+12] vs line ratio} space for all line ratios returned by MAPPINGS using Akima splines. Resampled grids can be accessed via the sampling keyword. Diagnostic grids resampled 2x2 times are shipped in the default pyqz package and are directly accessible, e.g.
Step7: C) Deriving <code>LogQ</code> and <code>Tot[O+12]</code> for a given set of line ratios
At the core of pyqz lies pyqz.interp_qz(), which is the basic routine used to interpolate a given line ratio diagnostic grid. The function is fed line ratios stored inside numpy arrays, and will only return a value for line ratios landing <b>on valid and un-wrapped</b> regions of the grid
Step8: Of course, one usually wants to compute both LogQ and Tot[O+12] or gas[O+12] for a large set of strong emission line fluxes, combining the estimates from different line ratio diagnostic diagrams. This is exactly what the function pyqz.get_global_qz() allows you to do.
The function is being fed the individual line fluxes and associated errors in the form of numpy arrays and lists. ID tags for each dataset can also be given to the function (these are then used if/when saving the different diagrams to files).
Step9: By default, all line flux errors are assumed to be Gaussian, where the input std value corresponds to 1 standard deviation. Alternatively, line fluxes can be tagged as upper limits by setting their errors to -1.
The outcome of get_global_qz() can be visualized using pyqz_plots.plot_global_qz(), but only if KDE_pickle_loc is set in the first one. This keyword defines the location in which to save a pickle file that contains all the relevant pieces of information associated with a given function call, i.e.
Step10: Users less keen on using Python extensively can alternatively feed their data to pyqz via an <b>appropriately structured</b> .csv file and receive another .csv file in return (as well as a numpy array)
Step11: The first line of the input file must contain the name of each column, following the pyqz convention. The order itself does not matter, e.g. | Python Code:
%matplotlib inline
import pyqz
import numpy as np
Explanation: Running pyqz I
A) Installing and importing pyqz
Installing pyqz is best done via pip. You should then be able to import the package and check its version from within any Python shell:
End of explanation
import pyqz.pyqz_plots as pyqzp
Explanation: From v0.8.0 onwards, the plotting functions have been placed in a distinct module, which must be imported separately if you wish to exploit them.
End of explanation
a_grid = pyqz.get_grid('[NII]/[SII]+;[OIII]/[SII]+', sampling=1)
Explanation: B) Accessing MAPPINGS line ratio diagnostic grids
pyqz gives you easy access to specific MAPPINGS strong nebular line ratio diagnostic diagrams using pyqz.get_grid():
End of explanation
a_grid = pyqz.get_grid('[NII]/[SII]+;[OIII]/[SII]+', struct = 'pp', Pk = 5, kappa = 'inf')
Explanation: The main parameters of the MAPPINGS simulations can be specified via the following keywords:
- Pk lets you define the pressure of the simulated HII regions,
- struct allows you to choose between plane-parallel ('pp') and spherical ('sph') HII regions, and
- kappa lets you define the value of $\kappa$ (from the so-called $\kappa$-distribution).
All these values must match an existing set of MAPPINGS simulations inside the pyqz.pyqzm.pyqz_grid_dir folder, or pyqz will issue an error. In other words, pyqz will never be running new MAPPINGS simulations for you.
So, if one wanted to access the MAPPINGS simulations for plane-parallel HII regions, with Maxwell-Boltzmann electron density distribution, Pk =5.0 (these are the default parameters), one should type:
End of explanation
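For example, the matching spherical-geometry grid would be requested with the same call, changing only struct; a sketch (it assumes the corresponding 'sph' MAPPINGS set is installed, so it is left commented out):
# Sketch: same diagnostic, but for spherical HII regions (requires the matching MAPPINGS set)
# sph_grid = pyqz.get_grid('[NII]/[SII]+;[OIII]/[SII]+', struct='sph', Pk=5, kappa='inf')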
pyqzp.plot_grid('[NII]/[OII]+;[OIII]/[OII]+', struct = 'pp', Pk = 5, kappa = 'inf')
Explanation: If you want to check how a given line ratio diagnostic diagram looks (and e.g. check whether the MAPPINGS grid is flat, or wrapped) for line ratios of your choice, you can use pyqz_plots.plot_grid():
End of explanation
fn = pyqz.pyqz_tools.get_MVphotogrid_fn(Pk = 5.0, calibs = 'GCZO', kappa = np.inf, struct = 'pp', sampling = 1)
info = pyqz.pyqz_tools.get_MVphotogrid_metadata(fn)
print 'MAPPINGS id: %s' % info['MV_id']
print 'Model created: %s' % info['date']
print 'Model parameters: %s' % info['params'].split(': ')[1]
Explanation: You can check which version of MAPPINGS was used to generate the grids currently inside pyqz as follows:
End of explanation
pyqzp.plot_grid('[NII]/[OII]+;[OIII]/[OII]+', struct = 'pp', Pk = 5, kappa = 'inf', sampling=2)
Explanation: An important feature of pyqz is the auto-detection of wraps in the diagnostic grids, marked with red segments in the diagram, and returned as an array by the function pyqz.check_grid().
The <i>default</i> MAPPINGS grids shipped with pyqz are coarse. For various reasons better explained elsewhere (see the MAPPINGS documentation), only a few abundance values have matching stellar tracks <b>AND</b> stellar atmospheres. Hence, only a few abundance points can be simulated in a consistent fashion.
Rather than 1) interpolating between stellar tracks and stellar atmospheres in the abundance space and 2) running extra MAPPINGS models (which would use inconsistent & interpolated input), pyqz can directly <b>resample</b> each diagnostic grid (using the function pyqzt.refine_MVphotogrid(), see the docs for more info). The resampling is performed in the {LogQ and Tot[O+12] vs line ratio} space for all line ratios returned by MAPPINGS using Akima splines. Resampled grids can be accessed via the sampling keyword. Diagnostic grids resampled 2x2 times are shipped in the default pyqz package and are directly accessible, e.g.:
End of explanation
niioii = np.array([-0.65])
oiiisii = np.array([-0.1])
z = pyqz.interp_qz('Tot[O]+12',[niioii, oiiisii],'[NII]/[OII]+;[OIII]/[SII]+',
sampling=1,struct='pp')
print 'Tot[O]+12 = %.2f' % z
# The result can be visualized using pyqz_plots.plot_grid()
pyqzp.plot_grid('[NII]/[OII]+;[OIII]/[SII]+',sampling = 1, struct='pp', data = [niioii,oiiisii], interp_data=z)
Explanation: C) Deriving <code>LogQ</code> and <code>Tot[O+12]</code> for a given set of line ratios
At the core of pyqz lies pyqz.interp_qz(), which is the basic routine used to interpolate a given line ratio diagnostic grid. The function is fed line ratios stored inside numpy arrays, and will only return a value for line ratios landing <b>on valid and un-wrapped</b> regions of the grid:
End of explanation
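Because interp_qz works on numpy arrays, several sets of line ratios can be interpolated in one call; points falling outside the valid, un-wrapped part of the grid simply come back without an estimate (NaN in practice). A sketch with made-up ratio values:
# Sketch: interpolating several (made-up) line-ratio points at once
many_niioii = np.array([-0.65, -0.3, 0.1])
many_oiiisii = np.array([-0.1, 0.2, 0.5])
many_z = pyqz.interp_qz('Tot[O]+12', [many_niioii, many_oiiisii],
                        '[NII]/[OII]+;[OIII]/[SII]+', sampling=1, struct='pp')
print many_z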
pyqz.get_global_qz(np.array([[ 1.00e+00, 5.00e-02, 2.38e+00, 1.19e-01, 5.07e+00, 2.53e-01,
5.67e-01, 2.84e-02, 5.11e-01, 2.55e-02, 2.88e+00, 1.44e-01]]),
['Hb','stdHb','[OIII]','std[OIII]','[OII]+','std[OII]+',
'[NII]','std[NII]','[SII]+','std[SII]+','Ha','stdHa'],
['[NII]/[SII]+;[OIII]/Hb','[NII]/[OII]+;[OIII]/[SII]+'],
ids = ['NGC_1234'],
KDE_method = 'multiv',
KDE_qz_sampling = 201j,
struct = 'pp',
sampling = 1,
verbose = True)
Explanation: Of course, one usually wants to compute both LogQ and Tot[O+12] or gas[O+12] for a large set of strong emission line fluxes, combining the estimates from different line ratio diagnostic diagrams. This is exactly what the function pyqz.get_global_qz() allows you to do.
The function is being fed the individual line fluxes and associated errors in the form of numpy arrays and lists. ID tags for each dataset can also be given to the function (these are then used if/when saving the different diagrams to files).
End of explanation
out = pyqz.get_global_qz(np.array([[ 1.00e+00, 5.00e-02, 2.38e+00, 1.19e-01, 5.07e+00, 2.53e-01,
5.67e-01, 2.84e-02, 5.11e-01, 2.55e-02, 2.88e+00, 1.44e-01]]),
['Hb','stdHb','[OIII]','std[OIII]','[OII]+','std[OII]+',
'[NII]','std[NII]','[SII]+','std[SII]+','Ha','stdHa'],
['[NII]/[SII]+;[OIII]/Hb','[NII]/[OII]+;[OIII]/[SII]+'],
ids = ['NGC_1234'],
KDE_method = 'multiv',
KDE_qz_sampling = 201j,
KDE_pickle_loc = './examples/',
struct = 'pp',
sampling = 1,
verbose = True)
import glob
fn = glob.glob('./examples/*NGC_1234*.pkl')
# pyqz_plots.get_global_qz() takes the pickle filename as argument.
pyqzp.plot_global_qz(fn[0], show_plots=True, save_loc = './examples/', do_all_diags=True)
Explanation: By default, all line flux errors are assumed to be Gaussian, where the input std value corresponds to 1 standard deviation. Alternatively, line fluxes can be tagged as upper limits by setting their errors to -1.
The outcome of get_global_qz() can be visualized using pyqz_plots.plot_global_qz(), but only if KDE_pickle_loc is set in the first one. This keyword defines the location in which to save a pickle file that contains all the relevant pieces of information associated with a given function call, i.e.: the single and global KDE, the srs random realizations of the line fluxes, etc ...
End of explanation
import os
# The example file is shipped with pyqz, and stored here:
example_csv_file = os.path.join(pyqz.pyqzm.pyqz_dir,'tests','test_arena','example_input.csv')
print example_csv_file
# Now, feed it to the code
pyqz.get_global_qz_ff(example_csv_file,
['[NII]/[SII]+;[OIII]/Hb','[NII]/[OII]+;[OIII]/[SII]+'],
struct='pp',
KDE_method='multiv',
KDE_qz_sampling = 201j,
sampling=1)
Explanation: Users less keen on using Python extensively can alternatively feed their data to pyqz via an <b>appropriately structured</b> .csv file and receive another .csv file in return (as well as a numpy array):
End of explanation
pyqz.pyqzm.diagnostics.keys()
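# (sketch) quick check that a diagnostic we plan to use is actually supported:
# '[NII]/[SII]+;[OIII]/Hb' in pyqz.pyqzm.diagnostics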
Explanation: The first line of the input file must contain the name of each column, following the pyqz convention. The order itself does not matter, e.g.:
Id,[OII]+,std[OII]+,Hb,stdHb,[OIII],std[OIII],[OI],std[OI],Ha,stdHa,[NII],std[NII],[SII]+,std[SII]+
The Id (optional) can be used to add a tag (i.e. a string) to each set of line fluxes. This tag will be used in the filenames of the diagrams (if some are saved) and in the output .csv file as well.
Commented lines begin with #, missing values are marked with $$$ (set with the missing_values keyword), and the decimal precision in the output file is set with decimals (default=5).
At this point, it must be stressed that pyqz.get_global_qz() can only exploit a finite set of diagnostic grids, namely:
End of explanation |
3,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a custom passband involves
Step1: If you plan on computing model atmosphere intensities (as opposed to only blackbody intensities), you will need to download atmosphere tables and unpack them into a local directory of your choice. Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once. For the purpose of this document, we will use a local tables/ directory and assume that we are computing intensities for all available model atmospheres
Step2: Getting started
Let us start by importing phoebe, numpy and matplotlib
Step3: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
Step4: Let us plot this mock passband transmission function to see what it looks like
Step5: Let us now save these data in a file that we will use to register a new passband.
Step6: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
Step7: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset
Step8: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue
Step9: Checking the content property again shows that the table has been successfully computed
Step10: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]'
Step11: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson
Step12: This makes perfect sense
Step13: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 tables. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take up to a minute to complete. We can now check the passband's content attribute again
Step14: Let us now use the same low-level function as before to compare normal emergent passband intensity for our custom passband for blackbody and ck2004 model atmospheres. One other complication is that, unlike blackbody model that depends only on the temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundances as well, so we need to pass those arrays.
Step15: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few minutes to complete.
Step16: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables
Step17: This completes the computation of Castelli & Kurucz auxiliary tables.
Computing PHOENIX response
PHOENIX is a 3-D model atmosphere code. Because of that, it is more complex and better behaved for cooler stars (down to ~2300K). The steps to compute PHOENIX intensity tables are analogous to the ones we used for ck2004; so we can do all of them in a single step
Step18: There is one extra step that we need to do for phoenix atmospheres
Step19: Now we can compare all three model atmospheres
Step20: We see that, as temperature increases, model atmosphere intensities can differ quite a bit. That explains why the choice of a model atmosphere is quite important and should be given proper consideration.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available at ftp
Step21: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes
Step22: Still an appreciable difference.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a custom passband involves:
downloading and setting up model atmosphere tables;
providing a passband transmission function;
defining and registering passband parameters;
computing blackbody response for the passband;
[optional] computing Castelli & Kurucz (2004) passband tables;
[optional] computing Husser et al. (2013) PHOENIX passband tables;
[optional] if the passband is one of the passbands included in the Wilson-Devinney code, importing the WD response; and
saving the generated passband file.
<!-- * \[optional\] computing Werner et al. (2012) TMAP passband tables; -->
Let's first make sure we have the correct version of PHOEBE installed. Uncomment the following line if running in an online notebook session such as colab.
End of explanation
import phoebe
from phoebe import u
# Register a passband:
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf',
pbset='Custom',
pbname='mypb',
effwl=330,
wlunits=u.nm,
calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)',
version=1.0,
comments='This is my first custom passband'
)
# Blackbody response:
pb.compute_blackbody_response()
# CK2004 response:
pb.compute_ck2004_response(path='tables/ck2004')
pb.compute_ck2004_intensities(path='tables/ck2004')
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
# PHOENIX response:
pb.compute_phoenix_response(path='tables/phoenix')
pb.compute_phoenix_intensities(path='tables/phoenix')
pb.compute_phoenix_ldcoeffs()
pb.compute_phoenix_ldints()
# Impute missing values from the PHOENIX model atmospheres:
pb.impute_atmosphere_grid(pb._phoenix_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid)
for i in range(len(pb._phoenix_intensity_axes[3])):
pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:])
pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:])
# Wilson-Devinney response:
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
# Save the passband:
pb.save('my_passband.fits')
Explanation: If you plan on computing model atmosphere intensities (as opposed to only blackbody intensities), you will need to download atmosphere tables and unpack them into a local directory of your choice. Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once. For the purpose of this document, we will use a local tables/ directory and assume that we are computing intensities for all available model atmospheres:
mkdir tables
cd tables
wget http://phoebe-project.org/static/atms/ck2004.tgz
wget http://phoebe-project.org/static/atms/phoenix.tgz
<!-- wget http://phoebe-project.org/static/atms/tmap.tgz -->
Once the data are downloaded, unpack the archives:
tar xvzf ck2004.tgz
tar xvzf phoenix.tgz
<!-- tar xvzf tmap.tgz -->
That should leave you with the following directory structure:
tables
|____ck2004
| |____TxxxxxGxxPxx.fits (3800 files)
|____phoenix
| |____ltexxxxx-x.xx-x.x.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits (7260 files)
I don't care about the details, just show/remind me how it's done
Makes sense, and we don't judge: you want to get to science. Provided that you have the passband transmission file available and the atmosphere tables already downloaded, the sequence that will generate/register a new passband is:
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger(clevel='WARNING')
Explanation: Getting started
Let us start by importing phoebe, numpy and matplotlib:
End of explanation
wl = np.linspace(300, 360, 61)
ptf = np.zeros(len(wl))
ptf[(wl>=320) & (wl<=340)] = 1.0
Explanation: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
End of explanation
plt.xlabel('Wavelength [nm]')
plt.ylabel('Passband transmission')
plt.plot(wl, ptf, 'b-')
plt.show()
Explanation: Let us plot this mock passband transmission function to see what it looks like:
End of explanation
np.savetxt('my_passband.ptf', np.vstack((wl, ptf)).T)
Explanation: Let us now save these data in a file that we will use to register a new passband.
End of explanation
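As a quick sanity check (a sketch using plain numpy only), the file we just wrote can be read back and compared with the in-memory arrays:
# Sketch: verify the two-column transmission file round-trips
check = np.loadtxt('my_passband.ptf')
print(np.allclose(check[:, 0], wl), np.allclose(check[:, 1], ptf))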
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf',
pbset='Custom',
pbname='mypb',
effwl=330.,
wlunits=u.nm,
calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)',
version=1.0,
comments='This is my first custom passband')
Explanation: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
End of explanation
pb.content
Explanation: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset:pbname string, for example Johnson:V, Cousins:Rc, etc. Thus, our fake passband will be Custom:mypb.
The following two arguments, effwl and wlunits, also come as a pair. PHOEBE uses effective wavelength to apply zero-level passband corrections when better options (such as model atmospheres) are unavailable. Effective wavelength is a transmission-weighted average wavelength in the units given by wlunits.
The calibrated parameter instructs PHOEBE whether to take the transmission function as calibrated, i.e. the flux through the passband is absolutely calibrated. If set to True, PHOEBE will assume that absolute intensities computed using the passband transmission function do not need further calibration. If False, the intensities are considered as scaled rather than absolute, i.e. correct to a scaling constant. Most modern passbands provided in the recent literature are calibrated.
The reference parameter holds a reference string to the literature from which the transmission function was taken from. It is common that updated transmission functions become available, which is the point of the version parameter. If there are multiple versions of the transmission function, PHOEBE will by default take the largest value, or the value that is explicitly requested in the filter string, i.e. Johnson:V:1.0 or Johnson:V:2.0.
Finally, the comments parameter is a convenience parameter to store any additional pertinent information.
Computing blackbody response
To significantly speed up calculations, passband intensities are stored in lookup tables instead of computing them over and over again on the fly. Computed passband tables are tagged in the content property of the class:
End of explanation
pb.compute_blackbody_response()
Explanation: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue:
End of explanation
pb.content
Explanation: Checking the content property again shows that the table has been successfully computed:
End of explanation
pb.Inorm(Teff=5772, atm='blackbody', ld_func='linear', ld_coeffs=[0.0])
Explanation: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]':
End of explanation
jV = phoebe.get_passband('Johnson:V')
teffs = np.linspace(5000, 8000, 100)
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='mypb')
plt.plot(teffs, jV.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='jV')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson:V passband.
End of explanation
pb.compute_ck2004_response(path='tables/ck2004', verbose=False)
Explanation: This makes perfect sense: Johnson V transmission function is wider than our boxed transmission function, so intensity in the V band is larger the lower temperatures. However, for the hotter temperatures the contribution to the UV flux increases and our box passband with a perfect transmission of 1 takes over.
Computing Castelli & Kurucz (2004) response
For any real science you will want to generate model atmosphere tables. The default choice in PHOEBE are the models computed by Fiorella Castelli and Bob Kurucz (website, paper) that feature new opacity distribution functions. In principle, you can generate PHOEBE-compatible tables for any model atmospheres, but that would require a bit of book-keeping legwork in the PHOEBE backend. Contact us to discuss an extension to other model atmospheres.
To compute Castelli & Kurucz (2004) passband tables, we will use the previously downloaded model atmospheres. We start with the ck2004 normal intensities:
End of explanation
pb.content
Explanation: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 tables. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take up to a minute to complete. We can now check the passband's content attribute again:
End of explanation
loggs = np.ones(len(teffs))*4.43
abuns = np.zeros(len(teffs))
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now use the same low-level function as before to compare normal emergent passband intensity for our custom passband for blackbody and ck2004 model atmospheres. One other complication is that, unlike blackbody model that depends only on the temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundances as well, so we need to pass those arrays.
End of explanation
pb.compute_ck2004_intensities(path='tables/ck2004', verbose=False)
Explanation: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few minutes to complete.
End of explanation
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
Explanation: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables: one for limb darkening coefficients and the other for the integrated limb darkening. That is done by two methods that can take a couple of minutes to complete:
End of explanation
pb.compute_phoenix_response(path='tables/phoenix', verbose=False)
pb.compute_phoenix_intensities(path='tables/phoenix', verbose=False)
pb.compute_phoenix_ldcoeffs()
pb.compute_phoenix_ldints()
print(pb.content)
Explanation: This completes the computation of Castelli & Kurucz auxiliary tables.
Computing PHOENIX response
PHOENIX is a 3-D model atmosphere code. Because of that, it is more complex and better behaved for cooler stars (down to ~2300K). The steps to compute PHOENIX intensity tables are analogous to the ones we used for ck2004; so we can do all of them in a single step:
End of explanation
pb.impute_atmosphere_grid(pb._phoenix_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid)
for i in range(len(pb._phoenix_intensity_axes[3])):
pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:])
pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:])
Explanation: There is one extra step that we need to do for phoenix atmospheres: because there are gaps in the coverage of atmospheric parameters, we need to impute those values in order to allow for seamless interpolation. This is achieved by the call to impute_atmosphere_grid(). It is a computationally intensive step that can take 10+ minutes.
End of explanation
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix')
plt.legend(loc='lower right')
plt.show()
Explanation: Now we can compare all three model atmospheres:
End of explanation
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
Explanation: We see that, as temperature increases, model atmosphere intensities can differ quite a bit. That explains why the choice of a model atmosphere is quite important and should be given proper consideration.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available at ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/ebdoc2003.2feb2004.pdf.gz) and you need to grab the files ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcofplanck.dat.gz and ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcof.dat.gz from Bob Wilson's webpage. For this particular passband the index is 22. To import, issue:
End of explanation
pb.content
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='extern_atmx', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='wd_atmx')
plt.legend(loc='lower right')
plt.show()
Explanation: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes:
End of explanation
pb.save('~/.phoebe/atmospheres/tables/passbands/my_passband.fits')
Explanation: Still an appreciable difference.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom:mypb'.
To make PHOEBE automatically load the passband, it needs to be added to one of the passband directories that PHOEBE recognizes. If there are no proprietary aspects that hinder the dissemination of the tables, please consider contributing them to PHOEBE so that other users can use them.
End of explanation |
3,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Logistic Regression in TensorFlow</h1>
In this notebook, we illustrate the basics of Logistic Regression using TensorFlow, on the <a href="https
Step1: Feature Information
For Each Attribute
Step2: As always, check for class imbalance.
Step3: Next, we set aside 20 positive and 20 negative cases as our test set. We'll use the rest of the data as our training set.
Step5: Next, we scale the training set, so all features have zero mean and unit variance.
Step6: Once the training loop is complete, we plot our loss function as a function of number of training steps.
Step7: Finally, calculate accuracy of the predictions on the test set. | Python Code:
import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data'
pima_df = pd.read_csv(url, header=None)
Explanation: <h1 align="center">Logistic Regression in TensorFlow</h1>
In this notebook, we illustrate the basics of Logistic Regression using TensorFlow, on the <a href="https://archive.ics.uci.edu/ml/datasets/pima+indians+diabetes">Pima Indian Diabetes dataset</a> from UCI Machine Learning Archive.
End of explanation
pima_df.columns = ['n_pregnant', 'glucose_conc', 'bp',
'skin_fold_thickness', 'serum_insulin', 'bmi', 'diabetes_ped_func',
'age', 'has_diabetes']
pima_df.head()
# Map the values in the target column: 1 -> 'yes', 0 -> 'no'
pima_df = pima_df.assign(has_diabetes=pima_df.has_diabetes.map(lambda v: 'yes' if v == 1 else 'no'))
pima_df.head()
Explanation: Feature Information
For Each Attribute: (all numeric-valued)
Number of times pregnant
Plasma glucose concentration a 2 hours in an oral glucose tolerance test
Diastolic blood pressure (mm Hg)
Triceps skin fold thickness (mm)
2-Hour serum insulin (mu U/ml)
Body mass index (weight in kg/(height in m)^2)
Diabetes pedigree function
Age (years)
Class variable (0 or 1)
First, we read from the UCI archive into a DataFrame.
End of explanation
pima_df.has_diabetes.value_counts().plot(kind='bar')
len(pima_df)
Explanation: As always, check for class imbalance.
End of explanation
test_set = pima_df.groupby(pima_df.has_diabetes).apply(lambda g: g.sample(20))
# Groupby creates a multi-index with the label name as the first level
test_set.index = test_set.index.droplevel(0)
train_set = pima_df.loc[pima_df.index.difference(test_set.index)]
print(len(test_set), len(train_set))
Explanation: Next, we set aside 20 positive and 20 negative cases as our test set. We'll use the rest of the data as our training set.
End of explanation
from sklearn.preprocessing import StandardScaler
def build_input(f_key, l_key, df):
    """Return a `feed_dict` suitable for tensorflow consumption."""
features = df.drop('has_diabetes', axis=1).astype(np.float64).values
scaled_features = StandardScaler().fit_transform(features)
labels = df.has_diabetes.map(lambda v: 1 if v == 'yes' else 0).astype(np.int64).values
return {f_key: scaled_features, l_key: labels}
import tensorflow as tf
from IPython.display import display
import ipywidgets
n = 2000
pg = ipywidgets.FloatProgress(min=1, max=n, description='training...')
display(pg)
tf.reset_default_graph()
weights = tf.get_variable(dtype=np.float64,
name='weights',
shape=(8, 1),
initializer=tf.truncated_normal_initializer(mean=0.0, stddev=1.0))
features = tf.placeholder(shape=(None, 8), dtype=np.float64, name='features')
labels = tf.placeholder(shape=None, dtype=np.int64, name='labels')
# dot(w, X)
w_times_f = tf.matmul(features, weights, name='w_dot_x')
# \hat{P(X)} = 1 / 1 + exp(dot(w, X))
probs = tf.squeeze(1.0 / (1.0 + tf.exp(w_times_f)), name='logistic_function')
# 1-\hat{P(X)}
one = tf.constant(1.0, dtype=np.float64)
one_minus_probs = tf.subtract(one, probs, name='one_minus_probs')
one_minus_labels = tf.subtract(one, tf.cast(labels, np.float64), name='one_minus_y')
# Cross-enropy loss = - [ P(X) * ln \hat{P(X)} + (1-P(X)) * ln \hat{1-P(X)} ]
cross_entropy_loss = -tf.add(tf.multiply(tf.cast(labels, np.float64), tf.log(probs)),
tf.multiply(one_minus_labels, tf.log(one_minus_probs)),
name='cross_entropy_loss')
loss_function = tf.reduce_mean(cross_entropy_loss, name='loss_function')
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.02)
train_op = optimizer.minimize(loss_function, name='loss_minimizer')
losses = []
with tf.Session() as s:
s.run(tf.global_variables_initializer())
train_input = build_input(features, labels, train_set)
for i in range(1, n+1):
_, current_loss = s.run([train_op, loss_function], feed_dict=train_input)
pg.value += 1
losses.append(current_loss)
pg.bar_style = 'success'
pg.description = 'done.'
# Evaluate our trained model on the test set
test_input = build_input(features, labels, test_set)
test_probs = probs.eval(session=s, feed_dict=test_input)
Explanation: Next, we scale the training set, so all features have zero mean and unit variance.
End of explanation
loss_vals = pd.Series(losses)
loss_vals.rolling(10).mean().plot()
Explanation: Once the training loop is complete, we plot our loss function as a function of number of training steps.
End of explanation
from sklearn.metrics import accuracy_score
test_preds = np.where(test_probs > 0.5, 1.0, 0.0).astype(np.int64)
test_labels = test_input[labels]
accuracy_score(test_labels, test_preds)
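# (sketch) a confusion matrix complements the single accuracy number:
# from sklearn.metrics import confusion_matrix
# confusion_matrix(test_labels, test_preds)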
Explanation: Finally, calculate accuracy of the predictions on the test set.
End of explanation |
3,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
20. 자연어처리
1) 워드 클라우드
단어의 크기를 단어의 빈도 수에 비례하도록 하여 단어를 아름답게 배치
Step2: 아주 멋있어 보이기는 하지만, 딱히 어떤 정보를 제공하지는 않는다.
단어가 구인 광고에 등장하는 빈도를 가로축,
단어가 이력서에 등장하는 빈도를 세로축
Step3: 2) n-gram 모델
Step4: bigram
Step5: trigram을 사용하면 다음 단어를 생성하는 각 단계에서 선택할 수 있는 단어의 수가 bigram을 사용할 때마다 훨씬 적어졌고, 선택할 수 있는 단어가 딱 하나만 존재하는 경우도 많았을 것이다.
즉, 이미 어떤 문서상에 존재했던 문장(또는 긴문구)하나를 그대로 생성했을 가능성도 있다.
이는 데이터 과학에 대한 더 많은 에세이들을 모으고, 이를 토대로 n-gram 모델을 구축하는 것을 의미!
<p><span style="color
Step6: ~~~
['_S']
['_NP','_VP']
['_N','_VP']
['Python','_VP']
['Python','_V','_NP']
['Python','trains','_NP']
['Python','trains','_A','_NP','_P','_A','_N']
['Python','trains','logistic','_NP','_P','_A','_N']
['Python','trains','logistic','_N','_P','_A','_N']
['Python','trains','logistic','data science','_P','_A','_N']
['Python','trains','logistic','data science','about','_A', '_N']
['Python','trains','logistic','data science','about','logistic','_N']
['Python','trains','logistic','data science','about','logistic','Python']
~~~
Step7: <p><span style="color
Step11: ~~~
결국, weight가 [1,1,3] 이라면
1/5의 확룔로 0,
1/5의 확률로 1,
3/5의 확률로 2를 반환
~~~ | Python Code:
import math, random, re
from collections import defaultdict, Counter
from bs4 import BeautifulSoup
import requests
import matplotlib.pyplot as plt
#데이터 과학 관련 키워드목록, 빈도 0~100
data = [ ("big data", 100, 15), ("Hadoop", 95, 25), ("Python", 75, 50),
("R", 50, 40), ("machine learning", 80, 20), ("statistics", 20, 60),
("data science", 60, 70), ("analytics", 90, 3),
("team player", 85, 85), ("dynamic", 2, 90), ("synergies", 70, 0),
("actionable insights", 40, 30), ("think out of the box", 45, 10),
("self-starter", 30, 50), ("customer focus", 65, 15),
("thought leadership", 35, 35)]
Explanation: 20. 자연어처리
1) 워드 클라우드
단어의 크기를 단어의 빈도 수에 비례하도록 하여 단어를 아름답게 배치
End of explanation
def text_size(total):
    """equals 8 if total is 0, 28 if total is 200"""
return 8 + total / 200 * 20
for word, job_popularity, resume_popularity in data:
plt.text(job_popularity, resume_popularity, word,
ha='center', va='center',
size=text_size(job_popularity + resume_popularity))
plt.xlabel("Popularity on Job Postings")
plt.ylabel("Popularity on Resumes")
plt.axis([0, 100, 0, 100])
plt.show()
Explanation: 아주 멋있어 보이기는 하지만, 딱히 어떤 정보를 제공하지는 않는다.
단어가 구인 광고에 등장하는 빈도를 가로축,
단어가 이력서에 등장하는 빈도를 세로축
End of explanation
#유니코드 따옴표를 일반 아스키 따옴표로 변환
def fix_unicode(text):
return text.replace(u"\u2019", "'")
def get_document():
url = "http://radar.oreilly.com/2010/06/what-is-data-science.html"
html = requests.get(url).text
soup = BeautifulSoup(html, 'html5lib')
#content = soup.find("div", "entry-content") # NoneType Error
content = soup.find("div", "article-body") # find article-body div
regex = r"[\w']+|[\.]" # 단어나 마침표에 해당하는 문자열
document = []
for paragraph in content("p"):
words = re.findall(regex, fix_unicode(paragraph.text))
document.extend(words)
return document
document = get_document()
#document
###+순차적으로 등장하는 단어들에 대한 정보를 얻기 위함?
a = ["We've",'all','heard', 'it']
b = ["We've",'all','heard', 'it']
list(zip(a,b))
bigrams = list(zip(document, document[1:]))
transitions = defaultdict(list)
for prev, current in bigrams:
transitions[prev].append(current)
#transitions
transitions
transitions['.']
#시작 단어를 선택해야 하는데,, 마침표 다음에 등장하는 단어들중 임의로 하나를 선택하는것도 방법.
def generate_using_bigrams(transitions):
current = "." # 다음단어가 문장의 시작이라는 것을 의미
result = []
while True:
next_word_candidates = transitions[current] # bigrams (current, _)
current = random.choice(next_word_candidates) # choose one at random
result.append(current) # append it to results
if current == ".": return " ".join(result) # if "." 종료
random.seed(0)
print("bigram sentences")
for i in range(10):
print(i, generate_using_bigrams(transitions))
print()
#터무니 없는 문장이지만, 데이터 과학과 관련되어 보일법한 웹사이트를 만들때 사용할 만한 것들이기도 하다...?
Explanation: 2) n-gram 모델
End of explanation
###+순차적으로 등장하는 단어들에 대한 정보를 얻기 위함?
a = ["We've",'all','heard', 'it']
b = ["We've",'all','heard', 'it']
b = ["We've",'all','heard', 'it']
list(zip(a,b))
#trigrams : 직전 두개의 단어에 의해 다음 단어가 결정됨
trigrams = list(zip(document, document[1:], document[2:]))
trigram_transitions = defaultdict(list)
starts = []
for prev, current, next in trigrams:
if prev == ".": # 만약 이전단어가 마침표 였다면
starts.append(current) # 이제 새로운 단어의 시작을 의미
trigram_transitions[(prev, current)].append(next)
#운장은 앞서 바이그램과 비슷한 방식으로 생성할 수 있다
def generate_using_trigrams(starts, trigram_transitions):
current = random.choice(starts) # choose a random starting word
prev = "." # and precede it with a '.'
result = [current]
while True:
next_word_candidates = trigram_transitions[(prev, current)]
next = random.choice(next_word_candidates)
prev, current = current, next
result.append(current)
if current == ".":
return " ".join(result)
print("trigram sentences")
for i in range(10):
print(i, generate_using_trigrams(starts, trigram_transitions))
print()
#조금 더 괜찮은 문장..
Explanation: bigram : 두개의 연속적인 단어
trigram : 3개의 연속적인 단어를 보는..(n-gram도 있디만 3개 정도만 봐도 충분..)
End of explanation
#항목 앞에 밑줄이 있으면 더 확장할 수 있는 규칙이고, 나머지는 종결어 라고하자.
# 예, '_s'는 문장(sentence) 규칙을 의미, '_NP'는 명사구(noun phrase), '_VP'는 동사구
grammar = {
"_S" : ["_NP _VP"],
"_NP" : ["_N",
"_A _NP _P _A _N"],
"_VP" : ["_V",
"_V _NP"],
"_N" : ["data science", "Python", "regression"],
"_A" : ["big", "linear", "logistic"],
"_P" : ["about", "near"],
"_V" : ["learns", "trains", "tests", "is"]
}
Explanation: trigram을 사용하면 다음 단어를 생성하는 각 단계에서 선택할 수 있는 단어의 수가 bigram을 사용할 때마다 훨씬 적어졌고, 선택할 수 있는 단어가 딱 하나만 존재하는 경우도 많았을 것이다.
즉, 이미 어떤 문서상에 존재했던 문장(또는 긴문구)하나를 그대로 생성했을 가능성도 있다.
이는 데이터 과학에 대한 더 많은 에세이들을 모으고, 이를 토대로 n-gram 모델을 구축하는 것을 의미!
<p><span style="color:blue">**3) 문법**</span></p>
문법에 기반하여 말이 되는 문장을 생성하는 것
품사란 무엇이며, 그것들을 어떻게 조합하면 문장이 되는지..
명사 다음에는 항상 동사가 따른다...는 방식
End of explanation
# 특정 항목이 종결어인지 아닌지?
def is_terminal(token):
return token[0] != "_"
# 각 항목을 대체 가능한 다른 항목 또는 항목들로 변환시키는 함수
def expand(grammar, tokens):
for i, token in enumerate(tokens):
# 종결어는 건너뜀
if is_terminal(token): continue
# 종결어가 아닌 단어는 대체할 수 있는 항목을 임의로 선택
replacement = random.choice(grammar[token])
if is_terminal(replacement):
tokens[i] = replacement
else:
tokens = tokens[:i] + replacement.split() + tokens[(i+1):]
# 새로운 단어의 list에 expand를 적용
return expand(grammar, tokens)
# 이제 모든 단어가 종결어 이기때문에 종료
return tokens
def generate_sentence(grammar):
return expand(grammar, ["_S"])
print("grammar sentences")
for i in range(10):
print(i, " ".join(generate_sentence(grammar)))
print()
Explanation: ~~~
['_S']
['_NP','_VP']
['_N','_VP']
['Python','_VP']
['Python','_V','_NP']
['Python','trains','_NP']
['Python','trains','_A','_NP','_P','_A','_N']
['Python','trains','logistic','_NP','_P','_A','_N']
['Python','trains','logistic','_N','_P','_A','_N']
['Python','trains','logistic','data science','_P','_A','_N']
['Python','trains','logistic','data science','about','_A', '_N']
['Python','trains','logistic','data science','about','logistic','_N']
['Python','trains','logistic','data science','about','logistic','Python']
~~~
End of explanation
#단어의 분포에 따라 각 토픽에 weight를 할당
def sample_from(weights):
'''i를 weight[i] / sum(weight)의 확률로 반환'''
total = sum(weights)
rnd = total * random.random() # 0과 total 사이를 균일하게 선택
for i, w in enumerate(weights):
rnd -= w # return the smallest i such that
if rnd <= 0: return i # sum(weights[:(i+1)]) >= rnd
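# (sketch) e.g. with weights [1, 1, 3], roughly 3/5 of many samples should be 2:
# Counter(sample_from([1, 1, 3]) for _ in range(10000))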
Explanation: <p><span style="color:blue">**5) 토픽 모델링**</span></p>
End of explanation
documents = [
["Hadoop", "Big Data", "HBase", "Java", "Spark", "Storm", "Cassandra"],
["NoSQL", "MongoDB", "Cassandra", "HBase", "Postgres"],
["Python", "scikit-learn", "scipy", "numpy", "statsmodels", "pandas"],
["R", "Python", "statistics", "regression", "probability"],
["machine learning", "regression", "decision trees", "libsvm"],
["Python", "R", "Java", "C++", "Haskell", "programming languages"],
["statistics", "probability", "mathematics", "theory"],
["machine learning", "scikit-learn", "Mahout", "neural networks"],
["neural networks", "deep learning", "Big Data", "artificial intelligence"],
["Hadoop", "Java", "MapReduce", "Big Data"],
["statistics", "R", "statsmodels"],
["C++", "deep learning", "artificial intelligence", "probability"],
["pandas", "R", "Python"],
["databases", "HBase", "Postgres", "MySQL", "MongoDB"],
["libsvm", "regression", "support vector machines"]
]
#총 K=4개의 토픽을 반환해 보자!
K = 4
#각 토픽이 각 문서에 할당되는 횟수 (Counter는 각각의 문서를 의미)
document_topic_counts = [Counter()
for _ in documents]
#각 단어가 각 토픽에 할당되는 횟수 (Counter는 각 토픽을 의미)
topic_word_counts = [Counter() for _ in range(K)]
#각 토픽에 할당죄는 총 단어수 (각각의 숫자는 각 토픽을 의미)
topic_counts = [0 for _ in range(K)]
#각 문서에 포함되는 총 단어수 (각각의 숫자는 각 문서를 의미)
document_lengths = [len(d) for d in documents]
#단어 종류의 수
distinct_words = set(word for document in documents for word in document)
W = len(distinct_words)
#총 문서의 수
D = len(documents)
# documents[3]의 문서중 토픽 1과 관련 있는 단어의 수를 구하면.
document_topic_counts[3][1]
#npl라는 단어가 토픽 2와 연관지어 나오는 횟수는?
topic_word_counts[2]["nlp"]
def p_topic_given_document(topic, d, alpha=0.1):
    '''문서 d의 모든 단어 중에서 topic에 속하는 단어의 비율 (smoothing을 더한 비율)'''
return ((document_topic_counts[d][topic] + alpha) /
(document_lengths[d] + K * alpha))
def p_word_given_topic(word, topic, beta=0.1):
    '''topic에 속한 단어 중에서 word의 비율 (smoothing을 더한 비율)'''
return ((topic_word_counts[topic][word] + beta) /
(topic_counts[topic] + W * beta))
def topic_weight(d, word, k):
    '''문서와 문서의 단어가 주어지면, k번째 토픽의 weight를 반환'''
return p_word_given_topic(word, k) * p_topic_given_document(k, d)
def choose_new_topic(d, word):
return sample_from([topic_weight(d, word, k)
for k in range(K)])
random.seed(0)
document_topics = [[random.randrange(K) for word in document]
for document in documents]
for d in range(D):
for word, topic in zip(documents[d], document_topics[d]):
document_topic_counts[d][topic] += 1
topic_word_counts[topic][word] += 1
topic_counts[topic] += 1
for iter in range(1000):
for d in range(D):
for i, (word, topic) in enumerate(zip(documents[d],
document_topics[d])):
# remove this word / topic from the counts
# so that it doesn't influence the weights
document_topic_counts[d][topic] -= 1
topic_word_counts[topic][word] -= 1
topic_counts[topic] -= 1
document_lengths[d] -= 1
# choose a new topic based on the weights
new_topic = choose_new_topic(d, word)
document_topics[d][i] = new_topic
# and now add it back to the counts
document_topic_counts[d][new_topic] += 1
topic_word_counts[new_topic][word] += 1
topic_counts[new_topic] += 1
document_lengths[d] += 1
#토픽의 의미를 찾기위해 각 토픽에 대해 가장 영향력이 높은(weight 값이 큰) 단어들이 무언인지 보자
for k, word_counts in enumerate(topic_word_counts):
for word, count in word_counts.most_common():
if count > 0: print(k, word, count)
# 단어들을 보고 다음과 같이 이름을 지정해주자
topic_names = ["Big Data and programming languages",
"databases",
"machine learning",
"statistics"]
#사용자의 관심사가 무엇인지 알아볼 수 있다.
for document, topic_counts in zip(documents, document_topic_counts):
print(document)
for topic, count in topic_counts.most_common():
if count > 0:
print(topic_names[topic], count)
print()
Explanation: ~~~
결국, weight가 [1,1,3] 이라면
1/5의 확률로 0,
1/5의 확률로 1,
3/5의 확률로 2를 반환
~~~
End of explanation |
3,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of split
A split agent has a single input stream and two or more output streams.
Step1: split_element
<b>split_element(func, in_stream, out_streams)</b>
<br>
<br>
where
<ol>
<li><b>func</b> is a function with an argument which is an element of a single input stream and that returns a list with one element for each out_stream. <i>func</i> may have additional keyword arguments and may also have a state.</li>
<li><b>in_stream</b> is a single input stream.</li>
<li><b>out_streams</b> is a list of output streams.</li>
</ol>
In the example below, <i>func</i> is <i>f</i> which takes a single argument v (an element of the input stream) and returns a list of two values, one value for each of two output streams.
<br>
The agent split_element has a single input stream, <b>x</b>, and a list <b>[y, z]</b> of output streams. The list of output streams corresponds to the list of values returned by f.
<br>
<br>
<b>y[n], z[n] = f(x[n])</b>
<br>
<br>
In this example,
<br>
y[n] = x[n]+100 and z[n] = x[n]*2
<br>
Code
The code creates streams, x, y, and z, creates the split_element agent, and extends stream x. Calling run() executes a step in which all specified agents execute until all inputs have been processed. Then recent values of the output streams are printed.
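A minimal sketch of that sequence (the exact import paths and helper names from the streams library are assumed here, so the lines are left commented out):
# Sketch of the sequence described above; imports and helpers assumed
# x, y, z = Stream('x'), Stream('y'), Stream('z')
# split_element(func=f, in_stream=x, out_streams=[y, z])
# x.extend(list(range(5)))
# run()
# print(recent_values(y), recent_values(z))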
Step2: Using Lambda Expressions
Lambda expressions in split_element can be convenient as shown in this example which is essentially the same as the previous one.
Step3: Example of the decorator @split_e
The decorator <b>@split_e</b> operates the same as split_element, except that the agent is created by calling the decorated function.
<br>
Compare this example with the first example which used <i>split_element</i>. The two examples are almost identical. The difference is in the way that the agent is created. In this example, the agent is created by calling (the decorated) function <i>f</i> whereas in the previous example, the agent was created by calling <i>split_element</i>.
Step4: Example of functional forms
You may want to use a function that returns the streams resulting from a split instead of having the streams specified in out_streams, i.e. you may prefer to write
Step5: Example with keyword arguments
This example shows how to pass keyword arguments to <i>split_element</i>. In the example, <i>addend</i> and <i>multiplicand</i> are arguments of <i>f</i> the encapsulated function, and these arguments are passed as keyword arguments to <i>split_element</i>.
Step6: Split element with state
This example shows how to create an agent with state. The encapsulated function takes two arguments --- an element of the input stream and a <b>state</b> --- and it returns two values
Step7: Example with state and keyword arguments
This example shows an encapsulated function with a state and an argument called <i>state_increment</i> which is passed as a keyword argument to <i>split_element</i>.
Step8: Example with StreamArray and NumPy arrays
Step9: Example of split list
split_list is the same as split_element except that the encapsulated function operates on a <i>list</i> of elements of the input stream rather than on a single element. Operating on a list can be more efficient than operating sequentially on each of the elements of the list. This is especially important when working with arrays.
<br>
<br>
In this example, f operates on a list, <i>lst</i> of elements, and has keyword arguments <i>addend</i> and <i>multiplier</i>. It returns two lists corresponding to two output streams of the agent.
Step10: Example of split list with arrays
In this example, the encapsulated function <i>f</i> operates on an array <i>a</i> which is a segment of the input stream array, <i>x</i>. The operations in <i>f</i> are array operations (not list operations). For example, the result of <i>a * multiplier </i> is specified by numpy multiplication of an array with a scalar.
Step11: Test of unzip
unzip is the opposite of zip_stream.
<br>
<br>
An element of the input stream is a list or tuple whose length is the same as the number of output streams; the <i>j</i>-th element of the list is placed in the <i>j</i>-th output stream.
<br>
<br>
In this example, when the unzip agent receives the triple (1, 10, 100) on the input stream <i>w</i> it puts 1 on stream <i>x</i>, and 10 on stream <i>y</i>, and 100 on stream <i>z</i>.
Step12: Example of separate
<b>separate</b> is the opposite of <b>mix</b>.
<br>
The elements of the input stream are pairs (index, value). When a pair <i>(i,v)</i> arrives on the input stream the value <i>v</i> is appended to the <i>i</i>-th output stream.
<br>
<br>
In this example, when (0, 1) and (2, 100) arrive on the input stream <i>x</i>, the value 1 is appended to the 0-th output stream which is <i>y</i> and the value 100 is appended to output stream indexed 2 which is stream <i>w</i>.
Step13: Example of separate with stream arrays.
This is the same example as the previous case. The only difference is that since the elements of the input stream are pairs, the dimension of <i>x</i> is 2.
Step14: Example of split window
The input stream is broken up into windows. In this example, with <i>window_size</i>=2 and <i>step_size</i>=2, the sequence of windows are <i>x[0, 1], x[2, 3], x[4, 5], ....</i>.
<br>
<br>
The encapsulated function operates on a window and returns <i>n</i> values where <i>n</i> is the number of output streams. In this example, max(window) is appended to the output stream with index 0, i.e. stream <i>y</i>, and min(window) is appended to the output stream with index 1, i.e., stream <i>z</i>.
<br>
<br>
Note
Step15: Example that illustrates zip followed by unzip is the identity.
zip_stream followed by unzip returns the initial streams.
Step16: Example that illustrates that mix followed by separate is the identity.
Step17: Simple example of timed_unzip
An element of the input stream is a pair (timestamp, list). The sequence of timestamps must be increasing. The list has length n where n is the number of output streams. The m-th element of the list is the value of the m-th output stream associated with that timestamp. For example, if an element of the input stream <i>x</i> is (5, ["B", "a"]) then (5, "B") is appended to stream <i>y</i> and (5, "a") is appended to stream <i>z</i>.
Step18: Example that illustrates that timed_zip followed by timed_unzip is the identity. | Python Code:
import os
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, run
from IoTPy.agent_types.split import split_element, split_list, split_window
from IoTPy.agent_types.split import unzip, separate, timed_unzip
from IoTPy.agent_types.basics import split_e, fsplit_2e
from IoTPy.helper_functions.recent_values import recent_values
Explanation: Examples of split
A split agent has a single input stream and two or more output streams.
End of explanation
def simple_example_of_split_element():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
def f(v): return [v+100, v*2]
# Create agent with input stream x and output streams y, z.
split_element(func=f, in_stream=x, out_streams=[y,z])
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
print ('Finished first run')
# Put more test values in the input streams.
x.extend(list(range(100, 105)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
print ('Finished second run.')
simple_example_of_split_element()
Explanation: split_element
<b>split_element(func, in_stream, out_streams)</b>
<br>
<br>
where
<ol>
<li><b>func</b> is a function with an argument which is an element of a single input stream and that returns a list with one element for each out_stream. <i>func</i> may have additional keyword arguments and may also have a state.</li>
<li><b>in_stream</b> is a single input stream.</li>
<li><b>out_streams</b> is a list of output streams.</li>
</ol>
In the example below, <i>func</i> is <i>f</i> which takes a single argument v (an element of the input stream) and returns a list of two values, one value for each of two output streams.
<br>
The agent split_element has a single input stream, <b>x</b>, and a list <b>[y, z]</b> of output streams. The list of output streams corresponds to the list of values returned by f.
<br>
<br>
<b>y[n], z[n] = f(x[n])</b>
<br>
<br>
In this example,
<br>
y[n] = x[n]+100 and z[n] = x[n]*2
<br>
Code
The code creates streams, x, y, and z, creates the split_element agent, and extends stream x. Calling run() executes a step in which all specified agents execute until all inputs have been processed. Then recent values of the output streams are printed.
End of explanation
def example_of_split_element_with_lambda():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Create agent with input stream x and output streams y, z.
split_element(lambda v: [v+100, v*2], x, [y,z])
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
example_of_split_element_with_lambda()
Explanation: Using Lambda Expressions
Lambda expressions in split_element can be convenient as shown in this example which is essentially the same as the previous one.
End of explanation
def simple_example_of_split_e():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
@split_e
def f(v): return [v+100, v*2]
# Create agent with input stream x and output streams y, z.
f(in_stream=x, out_streams=[y,z])
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
simple_example_of_split_e()
Explanation: Example of the decorator @split_e
The decorator <b>@split_e</b> operates the same as split_element, except that the agent is created by calling the decorated function.
<br>
Compare this example with the first example which used <i>split_element</i>. The two examples are almost identical. The difference is in the way that the agent is created. In this example, the agent is created by calling (the decorated) function <i>f</i> whereas in the previous example, the agent was created by calling <i>split_element</i>.
End of explanation
def simple_example_of_functional_form():
# ------------------------------------------------------
# Specifying a functional form
# The functional form takes a single input stream and returns
# three streams.
def h(w):
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
def f(v): return [v+100, v*2, v**2]
# Create agent with input stream x and output streams y, z.
split_element(func=f, in_stream=w, out_streams=[x,y,z])
# Return streams created by this function.
return x, y, z
# ------------------------------------------------------
# Using the functional form.
# Specify streams
w = Stream('w')
# Create agent with input stream x and output streams a, b, c.
a, b, c = h(w)
# Put test values in the input streams.
w.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream a are')
print (recent_values(a))
print ('recent values of stream b are')
print (recent_values(b))
print ('recent values of stream c are')
print (recent_values(c))
simple_example_of_functional_form()
Explanation: Example of functional forms
You may want to use a function that returns the streams resulting from a split instead of having the streams specified in out_streams, i.e. you may prefer to write:
<br>
<br>
a, b, c = h(u)
<br>
<br>
where <i>u</i> is a stream that is split into streams <i>a</i>, <i>b</i>, and <i>c</i>,
instead of writing:
<br>
<br>
h(in_stream=u, out_streams=[a, b, c])
<br>
<br>
This example illustrates how a functional form can be specified and used. Function <i>h</i> creates and returns the three streams <i>x</i>, <i>y</i>, and <i>z</i>. Calling the function creates a <i>split_element</i> agent.
End of explanation
def example_of_split_element_with_keyword_args():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
def f(v, addend, multiplicand):
return [v+addend, v*multiplicand]
# Create agent with input stream x and output streams y, z.
split_element(func=f, in_stream=x, out_streams=[y,z], addend=100, multiplicand=2)
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
example_of_split_element_with_keyword_args()
Explanation: Example with keyword arguments
This example shows how to pass keyword arguments to <i>split_element</i>. In the example, <i>addend</i> and <i>multiplicand</i> are arguments of <i>f</i> the encapsulated function, and these arguments are passed as keyword arguments to <i>split_element</i>.
End of explanation
def example_of_split_element_with_state():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
def f(v, state):
next_state = state+1
return ([v+state, v*state], next_state)
# Create agent with input stream x and output streams y, z.
split_element(func=f, in_stream=x, out_streams=[y,z], state=0)
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
example_of_split_element_with_state()
Explanation: Split element with state
This example shows how to create an agent with state. The encapsulated function takes two arguments --- an element of the input stream and a <b>state</b> --- and it returns two values: a list of elements corresponding to the output streams and the <b>next state</b>. The function may have additional arguments which are passed as keyword arguments to <i>split_element</i>.
<br>
<br>
The call <i>split_element(...)</i> to create the agent must have a keyword argument called <b>state</b> with its initial value. For example:
<br>
split_element(func=f, in_stream=x, out_streams=[y,z], <b>state=0</b>)
<br>
In this example, the sequence of values of <i>state</i> is 0, 1, 2, .... which is also the sequence of values of the input stream and hence also of <i>v</i>.
End of explanation
def example_of_split_element_with_state_and_keyword_args():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
def f(v, state, state_increment):
next_state = state + state_increment
return ([v+state, v*state], next_state)
# Create agent with input stream x and output streams y, z.
split_element(func=f, in_stream=x, out_streams=[y,z], state=0, state_increment=10)
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
example_of_split_element_with_state_and_keyword_args()
Explanation: Example with state and keyword arguments
This example shows an encapsulated function with a state and an argument called <i>state_increment</i> which is passed as a keyword argument to <i>split_element</i>.
End of explanation
import numpy as np
from IoTPy.core.stream import StreamArray
def example_of_split_element_with_stream_array():
# Specify streams
x = StreamArray('x')
y = StreamArray('y')
z = StreamArray('z')
# Specify encapsulated functions
def f(v, addend, multiplier):
return [v+addend, v*multiplier]
# Create agent with input stream x and output streams y, z.
split_element(func=f, in_stream=x, out_streams=[y,z],
addend=1.0, multiplier=2.0)
# Put test values in the input streams.
A = np.linspace(0.0, 4.0, 5)
x.extend(A)
# Execute a step
run()
# Look at recent values of streams.
assert np.array_equal(recent_values(y), A + 1.0)
assert np.array_equal(recent_values(z), A * 2.0)
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
example_of_split_element_with_stream_array()
Explanation: Example with StreamArray and NumPy arrays
End of explanation
def example_of_split_list():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
def f(lst, addend, multiplier):
return ([v+addend for v in lst], [v*multiplier for v in lst])
# Create agent with input stream x and output streams y, z.
split_list(func=f, in_stream=x, out_streams=[y,z], addend=100, multiplier=2)
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
example_of_split_list()
Explanation: Example of split list
split_list is the same as split_element except that the encapsulated function operates on a <i>list</i> of elements of the input stream rather than on a single element. Operating on a list can be more efficient than operating sequentially on each of the elements of the list. This is especially important when working with arrays.
<br>
<br>
In this example, f operates on a list, <i>lst</i> of elements, and has keyword arguments <i>addend</i> and <i>multiplier</i>. It returns two lists corresponding to two output streams of the agent.
End of explanation
def example_of_split_list_with_arrays():
# Specify streams
x = StreamArray('x')
y = StreamArray('y')
z = StreamArray('z')
# Specify encapsulated functions
def f(a, addend, multiplier):
# a is an array
# return two arrays.
return (a + addend, a * multiplier)
# Create agent with input stream x and output streams y, z.
split_list(func=f, in_stream=x, out_streams=[y,z], addend=100, multiplier=2)
# Put test values in the input streams.
x.extend(np.arange(5.0))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
example_of_split_list_with_arrays()
Explanation: Example of split list with arrays
In this example, the encapsulated function <i>f</i> operates on an array <i>a</i> which is a segment of the input stream array, <i>x</i>. The operations in <i>f</i> are array operations (not list operations). For example, the result of <i>a * multiplier </i> is specified by numpy multiplication of an array with a scalar.
End of explanation
def simple_test_unzip():
# Specify streams
w = Stream('w')
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Create agent with input stream x and output streams y, z.
unzip(in_stream=w, out_streams=[x,y,z])
# Put test values in the input streams.
w.extend([(1, 10, 100), (2, 20, 200), (3, 30, 300)])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream x are')
print (recent_values(x))
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
simple_test_unzip()
Explanation: Test of unzip
unzip is the opposite of zip_stream.
<br>
<br>
An element of the input stream is a list or tuple whose length is the same as the number of output streams; the <i>j</i>-th element of the list is placed in the <i>j</i>-th output stream.
<br>
<br>
In this example, when the unzip agent receives the triple (1, 10, 100) on the input stream <i>w</i> it puts 1 on stream <i>x</i>, and 10 on stream <i>y</i>, and 100 on stream <i>z</i>.
End of explanation
def simple_test_separate():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
w = Stream('w')
# Create agent with input stream x and output streams y, z.
separate(in_stream=x, out_streams=[y,z,w])
# Put test values in the input streams.
x.extend([(0,1), (2, 100), (0, 2), (1, 10), (1, 20)])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
print ('recent values of stream w are')
print (recent_values(w))
simple_test_separate()
Explanation: Example of separate
<b>separate</b> is the opposite of <b>mix</b>.
<br>
The elements of the input stream are pairs (index, value). When a pair <i>(i,v)</i> arrives on the input stream the value <i>v</i> is appended to the <i>i</i>-th output stream.
<br>
<br>
In this example, when (0, 1) and (2, 100) arrive on the input stream <i>x</i>, the value 1 is appended to the 0-th output stream which is <i>y</i> and the value 100 is appended to output stream indexed 2 which is stream <i>w</i>.
End of explanation
def test_separate_with_stream_array():
# Specify streams
x = StreamArray('x', dimension=2)
y = StreamArray('y')
z = StreamArray('z')
# Create agent with input stream x and output streams y, z.
separate(in_stream=x, out_streams=[y,z])
# Put test values in the input streams.
x.extend(np.array([[1.0, 10.0], [0.0, 2.0], [1.0, 20.0], [0.0, 4.0]]))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
test_separate_with_stream_array()
Explanation: Example of separate with stream arrays.
This is the same example as the previous case. The only difference is that since the elements of the input stream are pairs, the dimension of <i>x</i> is 2.
End of explanation
def simple_example_of_split_window():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Specify encapsulated functions
def f(window): return (max(window), min(window))
# Create agent with input stream x and output streams y, z.
split_window(func=f, in_stream=x, out_streams=[y,z],
window_size=2, step_size=2)
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
simple_example_of_split_window()
Explanation: Example of split window
The input stream is broken up into windows. In this example, with <i>window_size</i>=2 and <i>step_size</i>=2, the sequence of windows are <i>x[0, 1], x[2, 3], x[4, 5], ....</i>.
<br>
<br>
The encapsulated function operates on a window and returns <i>n</i> values where <i>n</i> is the number of output streams. In this example, max(window) is appended to the output stream with index 0, i.e. stream <i>y</i>, and min(window) is appended to the output stream with index 1, i.e., stream <i>z</i>.
<br>
<br>
Note: You can also use the lambda function as in:
<br>
split_window(lambda window: (max(window), min(window)), x, [y,z], 2, 2)
End of explanation
from IoTPy.agent_types.merge import zip_stream
def example_zip_plus_unzip():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
u = Stream('u')
v = Stream('v')
# Create agents
zip_stream(in_streams=[x,y], out_stream=z)
unzip(in_stream=z, out_streams=[u,v])
# Put test values in the input streams.
x.extend(['A', 'B', 'C'])
y.extend(list(range(100, 1000, 100)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream u are')
print (recent_values(u))
print ('recent values of stream v are')
print (recent_values(v))
example_zip_plus_unzip()
Explanation: Example that illustrates zip followed by unzip is the identity.
zip_stream followed by unzip returns the initial streams.
End of explanation
from IoTPy.agent_types.merge import mix
def example_mix_plus_separate():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
u = Stream('u')
v = Stream('v')
# Create agents
mix(in_streams=[x,y], out_stream=z)
separate(in_stream=z, out_streams=[u,v])
# Put test values in the input streams.
x.extend(['A', 'B', 'C'])
y.extend(list(range(100, 1000, 100)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream u are')
print (recent_values(u))
print ('recent values of stream v are')
print (recent_values(v))
example_mix_plus_separate()
Explanation: Example that illustrates that mix followed by separate is the identity.
End of explanation
def test_timed_unzip():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
# Create agent with input stream x and output streams y, z.
timed_unzip(in_stream=x, out_streams=[y,z])
# Put test values in the input streams.
x.extend([(1, ["A", None]), (5, ["B", "a"]), (7, [None, "b"]),
(9, ["C", "c"]), (10, [None, "d"])])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
test_timed_unzip()
Explanation: Simple example of timed_unzip
An element of the input stream is a pair (timestamp, list). The sequence of timestamps must be increasing. The list has length n where n is the number of output streams. The m-th element of the list is the value of the m-th output stream associated with that timestamp. For example, if an element of the input stream <i>x</i> is (5, ["B", "a"]) then (5, "B") is appended to stream <i>y</i> and (5, "a") is appended to stream <i>z</i>.
End of explanation
from IoTPy.agent_types.merge import timed_zip
def test_timed_zip_plus_timed_unzip():
# Specify streams
x = Stream('x')
y = Stream('y')
z = Stream('z')
u = Stream('u')
v = Stream('v')
# Create agents
timed_zip(in_streams=[x,y], out_stream=z)
timed_unzip(in_stream=z, out_streams=[u,v])
# Put test values in the input streams.
x.extend([[1, 'a'], [3, 'b'], [10, 'd'], [15, 'e'], [17, 'f']])
y.extend([[2, 'A'], [3, 'B'], [9, 'D'], [20, 'E']])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream u are')
print (recent_values(u))
print ('recent values of stream v are')
print (recent_values(v))
test_timed_zip_plus_timed_unzip()
Explanation: Example that illustrates that timed_zip followed by timed_unzip is the identity.
End of explanation |
3,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 1
Step1: 2D trajectory interpolation
The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
Explanation: Interpolation Exercise 1
End of explanation
dictionary = np.load('trajectory.npz')
# materialize items() as a list so the indexing below also works under Python 3
items = list(dictionary.items())
y = items[0][1]
t = items[1][1]
x = items[2][1]
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
Explanation: 2D trajectory interpolation
The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time:
t which has discrete values of time t[i].
x which has values of the x position at those times: x[i] = x(t[i]).
y which has values of the y position at those times: y[i] = y(t[i]).
Load those arrays into this notebook and save them as variables x, y and t:
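As a side note, a more explicit way to pull the arrays out of the archive is by key (a small sketch; the actual key names depend on how the file was saved, so check dictionary.files first):
trajectory = np.load('trajectory.npz')
print(trajectory.files)   # names of the arrays stored in the archive
# e.g. x = trajectory['x'], if the x array was saved under the key 'x'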
End of explanation
x_approx = interp1d(t, x, kind='cubic')
y_approx = interp1d(t, y, kind='cubic')
newt = np.linspace(0,4,200)
newx = x_approx(newt)
newy = y_approx(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
Explanation: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays:
newt which has 200 points between ${t_{min},t_{max}}$.
newx which has the interpolated values of $x(t)$ at those times.
newy which has the interpolated values of $y(t)$ at those times.
End of explanation
plt.figure(figsize=(12,8));
plt.plot(x, y, marker='o', linestyle='', label='Original Data')
plt.plot(newx, newy, label='Interpolated Curve');
plt.legend();
plt.xlabel('X(t)');
plt.ylabel('Y(t)');
plt.title('Position as a Function of Time');
assert True # leave this to grade the trajectory plot
Explanation: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points:
For the interpolated points, use a solid line.
For the original points, use circles of a different color and no line.
Customize your plot to make it effective and beautiful.
End of explanation |
3,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p style="text-align
Step2: 1. Implementar o algoritmo K-means
Nesta etapa você irá implementar as funções que compõe o algoritmo do KMeans uma a uma. É importante entender e ler a documentação de cada função, principalmente as dimensões dos dados esperados na saída.
1.1 Inicializar os centróides
A primeira etapa do algoritmo consiste em inicializar os centróides de maneira aleatória. Essa etapa é uma das mais importantes do algoritmo e uma boa inicialização pode diminuir bastante o tempo de convergência.
Para inicializar os centróides você pode considerar o conhecimento prévio sobre os dados, mesmo sem saber a quantidade de grupos ou sua distribuição.
Dica
Step3: Teste a função criada e visualize os centróides que foram calculados.
Step5: 1.2 Definir os Clusters
Na segunda etapa do algoritmo serão definidos o grupo de cada dado, de acordo com os centróides calculados.
1.2.1 Função de distância
Codifique a função de distância euclidiana entre dois pontos (a, b).
Definido pela equação
Step6: Teste a função criada.
Step8: 1.2.2 Calcular o centroide mais próximo
Utilizando a função de distância codificada anteriormente, complete a função abaixo para calcular o centroid mais próximo de um ponto qualquer.
Dica
Step9: Teste a função criada
Step11: 1.2.3 Calcular centroid mais próximo de cada dado do dataset
Utilizando a função anterior que retorna o índice do centroid mais próximo, calcule o centroid mais próximo de cada dado do dataset.
Step12: Teste a função criada visualizando os cluster formados.
Step14: 1.3 Métrica de avaliação
Após formar os clusters, como sabemos se o resultado gerado é bom? Para isso, precisamos definir uma métrica de avaliação.
O algoritmo K-means tem como objetivo escolher centróides que minimizem a soma quadrática das distância entre os dados de um cluster e seu centróide. Essa métrica é conhecida como inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
A inertia, ou o critério de soma dos quadrados dentro do cluster, pode ser reconhecido como uma medida de o quão internamente coerentes são os clusters, porém ela sofre de alguns inconvenientes
Step15: Teste a função codificada executando o código abaixo.
Step17: 1.4 Atualizar os clusters
Nessa etapa, os centróides são recomputados. O novo valor de cada centróide será a media de todos os dados atribuídos ao cluster.
Step18: Visualize os clusters formados
Step19: Execute a função de atualização e visualize novamente os cluster formados
Step20: 2. K-means
2.1 Algoritmo completo
Utilizando as funções codificadas anteriormente, complete a classe do algoritmo K-means!
Step21: Verifique o resultado do algoritmo abaixo!
Step22: 2.2 Comparar com algoritmo do Scikit-Learn
Use a implementação do algoritmo do scikit-learn do K-means para o mesmo conjunto de dados. Mostre o valor da inércia e os conjuntos gerados pelo modelo. Você pode usar a mesma estrutura da célula de código anterior.
Dica
Step23: 3. Método do cotovelo
Implemete o método do cotovelo e mostre o melhor K para o conjunto de dados.
Step24: 4. Dataset Real
Exercícios
1 - Aplique o algoritmo do K-means desenvolvido por você no datatse iris [1]. Mostre os resultados obtidos utilizando pelo menos duas métricas de avaliação de clusteres [2].
[1] http | Python Code:
# import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
Explanation: <p style="text-align: center;">Clustering and the K-means algorithm</p>
Organizing data into groups is one of the most fundamental ways of understanding and learning. For example, organisms in a biological system are classified into domain, kingdom, phylum, class, and so on. Cluster analysis is the formal study of methods and algorithms for grouping objects according to similar measurements or characteristics. Cluster analysis, in essence, does not use category labels that tag objects with prior identifiers, that is, class labels. The absence of category information distinguishes data clustering (unsupervised learning) from classification or discriminant analysis (supervised learning). The goal of clustering is to find structure in data, and it is therefore exploratory in nature.
Clustering techniques have a long and rich history in a variety of scientific fields. One of the most popular and simplest clustering algorithms, K-means, was first published in 1955. Even though K-means was proposed more than 50 years ago and thousands of clustering algorithms have been published since then, K-means is still widely used.
Source: Anil K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, Volume 31, Issue 8, 2010
Objectives
Implement the functions of the K-means algorithm step by step
Compare the implementation with the Scikit-Learn algorithm
Understand and code the Elbow Method
Use K-means on a real dataset
Loading the test data
Load the provided data and identify visually how many groups the data appear to be distributed into.
End of explanation
def calculate_initial_centers(dataset, k):
Inicializa os centróides iniciais de maneira arbitrária
Argumentos:
dataset -- Conjunto de dados - [m,n]
k -- Número de centróides desejados
Retornos:
centroids -- Lista com os centróides calculados - [k,n]
#### CODE HERE ####
centroid = []
c = 0
while (c < k):
x = np.array(np.random.uniform(min(dataset[:,0]),max(dataset[:,0])))
y = np.array(np.random.uniform(min(dataset[:,1]),max(dataset[:,1])))
centroid.append([x,y])
c += 1
centroids = np.array(centroid)
### END OF CODE ###
return centroids
Explanation: 1. Implement the K-means algorithm
In this step you will implement, one by one, the functions that make up the K-means algorithm. It is important to understand and read the documentation of each function, especially the dimensions of the data expected in the output.
1.1 Initialize the centroids
The first step of the algorithm consists of initializing the centroids randomly. This step is one of the most important ones, and a good initialization can greatly reduce the convergence time.
To initialize the centroids you may use prior knowledge about the data, even without knowing the number of groups or their distribution.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
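A common alternative initialization (shown only as a sketch, it is not required by the exercise) is to pick k distinct points of the dataset itself as the starting centroids, which guarantees they lie inside the data range:
def initial_centers_from_data(dataset, k):
    # choose k distinct row indices at random and copy those points as centroids
    idx = np.random.choice(len(dataset), size=k, replace=False)
    return dataset[idx].copy()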
End of explanation
k = 3
centroids = calculate_initial_centers(dataset, k)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)
plt.show()
Explanation: Test the function you created and visualize the computed centroids.
End of explanation
import math
def euclidean_distance(a, b):
Calcula a distância euclidiana entre os pontos a e b
Argumentos:
a -- Um ponto no espaço - [1,n]
b -- Um ponto no espaço - [1,n]
Retornos:
distance -- Distância euclidiana entre os pontos
#### CODE HERE ####
#s = 0
#for i in range(len(a)):
# diff = (a[i] - b[i])
# s += diff*diff
#distance = math.sqrt(s)
#distance = math.sqrt(sum([((a[i] - b[i])**2) for i in range(len(a))]))
distance = math.sqrt(sum((a-b)**2))
### END OF CODE ###
return distance
Explanation: 1.2 Define the clusters
In the second step of the algorithm, the group of each data point is determined according to the computed centroids.
1.2.1 Distance function
Write the Euclidean distance function between two points (a, b).
Defined by the equation:
$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$
$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$
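For reference, the same distance can also be computed with a single NumPy call (equivalent to the formula above):
def euclidean_distance_np(a, b):
    # ||a - b||_2 for two points given as arrays of the same length
    return np.linalg.norm(np.asarray(a) - np.asarray(b))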
End of explanation
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])
if (euclidean_distance(a,b) == 3):
print("Distância calculada corretamente!")
else:
print("Função de distância incorreta")
Explanation: Test the function you created.
End of explanation
def nearest_centroid(a, centroids):
Calcula o índice do centroid mais próximo ao ponto a
Argumentos:
a -- Um ponto no espaço - [1,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_index -- Índice do centróide mais próximo
#### CODE HERE ####
dist = float("inf")
k = len(centroids)
for i in range(k):
d = euclidean_distance(a, centroids[i])
if d < dist:
nindex = i
dist = d
nearest_index = nindex
### END OF CODE ###
return nearest_index
Explanation: 1.2.2 Compute the nearest centroid
Using the distance function written earlier, complete the function below to compute the centroid closest to an arbitrary point.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
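Following the numpy.argmin hint, the same lookup can be written without an explicit loop (a sketch equivalent to the looped version above):
def nearest_centroid_argmin(a, centroids):
    # index of the centroid with the smallest Euclidean distance to point a
    distances = np.linalg.norm(np.asarray(centroids) - np.asarray(a), axis=1)
    return int(np.argmin(distances))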
End of explanation
# Seleciona um ponto aleatório no dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]
# Usa a função para descobrir o centroid mais próximo
idx_nearest_centroid = nearest_centroid(a, centroids)
# Plota os dados ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plota o ponto aleatório escolhido em uma cor diferente
plt.scatter(a[0], a[1], c='magenta', s=30)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plota o centroid mais próximo com uma cor diferente
plt.scatter(centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],
marker='^', c='springgreen', s=100)
# Cria uma linha do ponto escolhido para o centroid selecionado
plt.plot([a[0], centroids[idx_nearest_centroid,0]], [a[1], centroids[idx_nearest_centroid,1]], c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0], centroids[idx_nearest_centroid,1],))
plt.show()
Explanation: Test the function you created
End of explanation
def all_nearest_centroids(dataset, centroids):
Calcula o índice do centroid mais próximo para cada
ponto do dataset
Argumentos:
dataset -- Conjunto de dados - [m,n]
centroids -- Lista com os centróides - [k,n]
Retornos:
nearest_indexes -- Índices do centróides mais próximos - [m,1]
#### CODE HERE ####
nearest_indexes = np.array([nearest_centroid(dataset[i], centroids) for i in range(len(dataset))])
### END OF CODE ###
return nearest_indexes
Explanation: 1.2.3 Compute the nearest centroid of every point in the dataset
Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid of every point in the dataset.
End of explanation
nearest_indexes = all_nearest_centroids(dataset, centroids)
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
Explanation: Test the function by visualizing the resulting clusters.
End of explanation
def inertia(dataset, centroids, nearest_indexes):
    """Sum of the squared distances of the samples to their
    closest cluster center.
    Arguments:
    dataset -- data set - [m,n]
    centroids -- list of centroids - [k,n]
    nearest_indexes -- indexes of the closest centroids - [m,1]
    Returns:
    inertia -- total sum of the squared distances between the data
    of a cluster and its centroid
    """
    #### CODE HERE ####
    inertia = sum(euclidean_distance(dataset[i], centroids[nearest_indexes[i]]) ** 2
                  for i in range(len(dataset)))
    ### END OF CODE ###
    return inertia
Explanation: 1.3 Evaluation metric
After forming the clusters, how do we know whether the result is any good? For that, we need to define an evaluation metric.
The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the data of a cluster and its centroid. This metric is known as inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it has a few drawbacks:
Inertia assumes that clusters are convex and isotropic, which is not always the case. It may therefore represent elongated clusters or manifolds with irregular shapes poorly.
Inertia is not a normalized metric: we only know that lower values are better and that zero is optimal. But in very high-dimensional spaces Euclidean distances tend to become inflated (an instance of the so-called "curse of dimensionality"). Running a dimensionality reduction algorithm such as PCA beforehand can alleviate this problem and speed up the computations.
Source: https://scikit-learn.org/stable/modules/clustering.html
To evaluate our clusters, code the inertia metric below; you can use the Euclidean distance function built earlier.
$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$
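The same quantity can be computed in vectorized form (a sketch; it assumes dataset and centroids are NumPy arrays and nearest_indexes is an integer array):
def inertia_vectorized(dataset, centroids, nearest_indexes):
    # squared distance of every point to its assigned centroid, summed over the dataset
    diffs = dataset - centroids[np.asarray(nearest_indexes)]
    return float(np.sum(diffs ** 2))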
End of explanation
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
print("Inertia calculada corretamente!")
else:
print("Função de inertia incorreta!")
# Use a função para verificar a inertia dos seus clusters
inertia(dataset, centroids, nearest_indexes)
Explanation: Test the coded function by running the code below.
End of explanation
def update_centroids(dataset, centroids, nearest_indexes):
    """Update the centroids.
    Arguments:
    dataset -- data set - [m,n]
    centroids -- list of centroids - [k,n]
    nearest_indexes -- indexes of the closest centroids - [m,1]
    Returns:
    centroids -- list of updated centroids - [k,n]
    """
    #### CODE HERE ####
    nearest_indexes = np.array(nearest_indexes)
    for i in range(len(centroids)):
        cluster_points = dataset[nearest_indexes == i]
        # keep the old centroid if no point was assigned to this cluster
        if len(cluster_points) > 0:
            centroids[i] = cluster_points.mean(axis=0)
    ### END OF CODE ###
    return centroids
Explanation: 1.4 Update the clusters
In this step the centroids are recomputed. The new value of each centroid is the mean of all data points assigned to that cluster.
End of explanation
nearest_indexes = all_nearest_centroids(dataset, centroids)
# Plota os os cluster ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
# Plota os centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for data in dataframe:
plt.plot([centroid[0], data[0]], [centroid[1], data[1]],
c='lightgray', alpha=0.3)
plt.show()
Explanation: Visualize the resulting clusters
End of explanation
centroids = update_centroids(dataset, centroids, nearest_indexes)
Explanation: Run the update function and visualize the resulting clusters again
End of explanation
class KMeans():
    def __init__(self, n_clusters=8, max_iter=300):
        self.n_clusters = n_clusters
        self.max_iter = max_iter
    def fit(self,X):
        # Initialize the centroids
        self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)
        # Compute the cluster of each sample
        self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
        # Compute the initial inertia
        old_inertia = inertia(X, self.cluster_centers_, self.labels_)
        for index in range(self.max_iter):
            #### CODE HERE ####
            self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)
            self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
            self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)
            # stop as soon as the inertia no longer improves
            if self.inertia_ == old_inertia:
                break
            old_inertia = self.inertia_
            ### END OF CODE ###
        return self
    def predict(self, X):
        return all_nearest_centroids(X, self.cluster_centers_)
Explanation: 2. K-means
2.1 Complete algorithm
Using the functions coded earlier, complete the K-means class!
End of explanation
kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0],
kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()
Explanation: Check the result of the algorithm below!
End of explanation
#### CODE HERE ####
# fonte: https://stackabuse.com/k-means-clustering-with-scikit-learn/
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
plt.scatter(dataset[:,0],dataset[:,1], label='True Position')
kmeans = KMeans(n_clusters=k)
kmeans.fit(dataset)
print("Inércia = ", kmeans.inertia_)
#print(kmeans.cluster_centers_)
#print(kmeans.labels_)
plt.scatter(dataset[:,0],dataset[:,1], c=kmeans.labels_, cmap='Set3')
plt.scatter(kmeans.cluster_centers_[:,0] ,kmeans.cluster_centers_[:,1], marker='^', color='black', s=100)
Explanation: 2.2 Compare with the Scikit-Learn implementation
Use scikit-learn's K-means implementation on the same data set. Show the inertia value and the clusters produced by the model. You can reuse the structure of the previous code cell.
Hint: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans
End of explanation
#### CODE HERE ####
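# A minimal sketch of the elbow method, added here as an illustration:
# fit K-means for several values of K and plot the inertia of each fit;
# the "elbow" of the curve suggests a reasonable number of clusters.
inertia_values = []
k_values = list(range(1, 11))
for n in k_values:
    model = KMeans(n_clusters=n)
    model.fit(dataset)
    inertia_values.append(model.inertia_)
plt.plot(k_values, inertia_values, marker='o')
plt.xlabel('Number of clusters (K)')
plt.ylabel('Inertia')
plt.title('Elbow Method')
plt.show()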
Explanation: 3. Elbow method
Implement the elbow method and show the best K for the data set.
End of explanation
#### CODE HERE ####
from sklearn import metrics
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
data = pd.read_csv(url, header=None)
X_iris = np.array(data.iloc[:, :4])
# encode the species names (last column) as integer labels to use as ground truth
labels_true = data.iloc[:, 4].astype('category').cat.codes
# run K-means (the KMeans currently in scope) with 3 clusters and evaluate it
model = KMeans(n_clusters=3)
model.fit(X_iris)
labels_pred = model.labels_
print("Homogeneity:", metrics.homogeneity_score(labels_true, labels_pred))
print("Completeness:", metrics.completeness_score(labels_true, labels_pred))
#2 - normalization of the data is left as an exercise
#3 - elbow method on the iris data is left as an exercise
#4 - recomputing the metrics after those changes is left as an exercise
Explanation: 4. Real dataset
Exercises
1 - Apply the K-means algorithm you developed to the iris dataset [1]. Show the results obtained using at least two cluster evaluation metrics [2].
[1] http://archive.ics.uci.edu/ml/datasets/iris
[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation
Hint: you can use the completeness and homogeneity metrics.
2 - Try to improve the result obtained in the previous question using a data mining technique. Explain the difference you obtained.
Hint: you can try normalizing the data [3].
- [3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html
3 - Which number of clusters (K) did you choose in the previous question? Implement the Elbow Method without using a library and find the most suitable value of K. Once you have it, use that value in the K-means algorithm.
4 - Using the results of the previous question, recompute the metrics and comment on the results. Was there an improvement? Explain.
End of explanation |
3,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Split matrix randomly into train, valid and test sets
Step1: Select random batch
This can be used for mini-batch training.
Step2: Using sklearn
Scikit-learn has a train_test_split function. | Python Code:
import numpy as np
# matrix dimensions
N = 100
M = 20
# train-valid-test ratio, let's use 80-10-10
train_ratio = 0.8
valid_ratio = 0.1
test_ratio = 1.0 - train_ratio - valid_ratio # this is never used
# array indices
train_split = int(train_ratio * N)
valid_split = int(valid_ratio * N)
# create a random matrix
X = np.random.random((N, M))
# create random permutations of row indices
indices = np.random.permutation(range(X.shape[0]))
# split the indices array into train-valid-test
train_indices = indices[:train_split]
valid_indices = indices[train_split:train_split+valid_split]
test_indices = indices[train_split+valid_split:]
# select rows for train-valid-test
X_train = X[train_indices]
X_valid = X[valid_indices]
X_test = X[test_indices]
X_train.shape, X_valid.shape, X_test.shape
Explanation: Split matrix randomly into train, valid and test sets
End of explanation
nb_samples = 100 # number of samples
nb_features = 20 # number of features
batch_size = 16
X = np.random.random((nb_samples, nb_features))
y = np.random.randint(0, 2, nb_samples) # random binary labels
batch_indices = np.random.choice(nb_samples, batch_size)
X_batch = X[batch_indices]
y_batch = y[batch_indices]
X_batch.shape, y_batch.shape
Explanation: Select random batch
This can be used for mini-batch training.
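A small sketch of how the same idea extends to iterating over a full epoch in shuffled mini-batches (not part of the original snippet):
def iterate_minibatches(X, y, batch_size):
    # yield successive (X_batch, y_batch) pairs that cover every sample exactly once
    order = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        yield X[batch], y[batch]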
End of explanation
import numpy as np
from sklearn.model_selection import train_test_split
nb_samples = 100 # number of samples
nb_features = 20 # number of features
batch_size = 16
X = np.random.random((nb_samples, nb_features))
y = np.random.randint(0, 2, nb_samples) # random binary labels
X_train, X_test, y_train, y_test = train_test_split(X, y)
y_train.shape, X_test.shape
Explanation: Using sklearn
Scikit-learn has a train_test_split function.
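A few keyword arguments that are commonly added (shown as an illustration): test_size sets the split fraction, random_state makes the split reproducible, and stratify preserves the class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)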
End of explanation |
3,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a GAN in under 50 lines of code (with PyTorch)
The author of this article is Dev Nag, a former senior engineer at Google and founder and CTO of the AI startup Wavefront. He describes how he trained a GAN on the PyTorch platform in fewer than fifty lines of code.
What is a GAN?
Before getting into the technical details, and for the benefit of newcomers, let us first introduce what a GAN is.
In 2014, Ian Goodfellow and his colleagues at the University of Montreal published a paper that shook the research community. Yes, I mean "Generative Adversarial Nets", which marked the birth of generative adversarial networks (GANs) through an innovative combination of computation graphs and game theory. Their work showed that, given enough modeling capacity, two competing models can be trained jointly through simple backpropagation.
The roles of the two models are clearly defined. Given a real dataset R, G is the generator, whose job is to produce fake data good enough to pass for real; D is the discriminator, which receives data either from the real dataset or from G and labels it as real or fake. Ian Goodfellow's analogy is that G is like a forgery workshop that wants its output to be as close to the genuine article as possible so that it slips through, while D is the authentication expert who has to tell originals from high-quality fakes (except that in this example the forger G never sees the original data, only D's verdicts, so it is working blind).
Ideally, both D and G keep getting better as training goes on, until G essentially becomes a master forger and D loses, no longer able to tell the two data distributions apart.
In practice, the technique Ian Goodfellow demonstrated boils down to this: G performs a kind of unsupervised learning on the original dataset, finding a way to represent the data in a lower-dimensional manner. And unsupervised learning matters, as Yann LeCun put it: "unsupervised learning is the cake", the cake being the "true AI" that countless researchers and developers are chasing.
Before we start, we need to import the various packages and initialize the variables
Step1: Training a GAN with PyTorch
Dev Nag: On the surface, a technique as powerful and complex as a GAN looks like it should take an enormous amount of code to implement, but that is not necessarily true. Using PyTorch, we can build a simple GAN model in fewer than 50 lines of code. There are really only five parts to think about:
R: the original, real dataset
I: the random noise fed into the generator, as a source of entropy
G: the generator, which tries to imitate the original data
D: the discriminator, which tries to tell G's generated data apart from R
The "training" loop in which we teach G to fool D and teach D to watch out for G.
1.) R: In our example we start with the simplest possible R, a bell curve. It takes a mean and a standard deviation as inputs and returns a function that produces sample data with the right shape (drawn from a Gaussian with those parameters). In our code example we use a mean of 4 and a standard deviation of 1.25.
Step2: 2.) I: The input to the generator is random, and to make things a little harder we use a uniform distribution rather than a normal one. This means our Model G cannot simply shift or rescale the input to copy R; it has to reshape the data in a non-linear way.
Step3: 3.) G
Step4: 4.) D
Step5: 5.) Finally, the training loop alternates between two modes: first we train D on accurately labeled real data versus fake data; then we train G to fool D, this time with deliberately wrong labels. This, friends, is the battle between good and evil.
Even if you have never touched PyTorch, you can probably tell what is going on. In the first part (inside the for d_index in range(d_steps) loop) we push both kinds of data through D and apply different criteria to D's guesses versus the true labels. That is the "forward" step; we then call "backward()" to compute the gradients, which are used in d_optimizer.step() to update D's parameters. G is used here but not trained yet.
In the last part (inside the for g_index in range(g_steps) loop) we do the same thing for G. Note that we run G's output through D (in effect handing the forger an expert to practice against), but in this step we do not optimize or change D. We do not want the authenticator D to learn from wrong labels. Therefore we only call g_optimizer.step(). | Python Code:
# Generative Adversarial Networks (GAN) example in PyTorch.
# See related blog post at https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f#.sch4xgsa9
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
# Data params
data_mean = 4
data_stddev = 1.25
# Model params
g_input_size = 1 # Random noise dimension coming into generator, per output vector
g_hidden_size = 50 # Generator complexity
g_output_size = 1 # size of generated output vector
d_input_size = 100 # Minibatch size - cardinality of distributions
d_hidden_size = 50 # Discriminator complexity
d_output_size = 1 # Single dimension for 'real' vs. 'fake'
minibatch_size = d_input_size
d_learning_rate = 2e-4 # 2e-4
g_learning_rate = 2e-4
optim_betas = (0.9, 0.999)
num_epochs = 33300
print_interval = 333
d_steps = 1 # 'k' steps in the original GAN paper. Can put the discriminator on higher training freq than generator
g_steps = 1
# ### Uncomment only one of these
#(name, preprocess, d_input_func) = ("Raw data", lambda data: data, lambda x: x)
(name, preprocess, d_input_func) = ("Data and variances", lambda data: decorate_with_diffs(data, 2.0), lambda x: x * 2)
print("Using data [%s]" % (name))
Explanation: Training a GAN in under 50 lines of code (with PyTorch)
The author of this article is Dev Nag, a former senior engineer at Google and founder and CTO of the AI startup Wavefront. He describes how he trained a GAN on the PyTorch platform in fewer than fifty lines of code.
What is a GAN?
Before getting into the technical details, and for the benefit of newcomers, let us first introduce what a GAN is.
In 2014, Ian Goodfellow and his colleagues at the University of Montreal published a paper that shook the research community. Yes, I mean "Generative Adversarial Nets", which marked the birth of generative adversarial networks (GANs) through an innovative combination of computation graphs and game theory. Their work showed that, given enough modeling capacity, two competing models can be trained jointly through simple backpropagation.
The roles of the two models are clearly defined. Given a real dataset R, G is the generator, whose job is to produce fake data good enough to pass for real; D is the discriminator, which receives data either from the real dataset or from G and labels it as real or fake. Ian Goodfellow's analogy is that G is like a forgery workshop that wants its output to be as close to the genuine article as possible so that it slips through, while D is the authentication expert who has to tell originals from high-quality fakes (except that in this example the forger G never sees the original data, only D's verdicts, so it is working blind).
Ideally, both D and G keep getting better as training goes on, until G essentially becomes a master forger and D loses, no longer able to tell the two data distributions apart.
In practice, the technique Ian Goodfellow demonstrated boils down to this: G performs a kind of unsupervised learning on the original dataset, finding a way to represent the data in a lower-dimensional manner. And unsupervised learning matters, as Yann LeCun put it: "unsupervised learning is the cake", the cake being the "true AI" that countless researchers and developers are chasing.
Before we start, we need to import the various packages and initialize the variables
End of explanation
# ##### DATA: Target data and generator input data
def get_distribution_sampler(mu, sigma):
return lambda n: torch.Tensor(np.random.normal(mu, sigma, (1, n))) # Gaussian
Explanation: Training a GAN with PyTorch
Dev Nag: On the surface, a technique as powerful and complex as a GAN looks like it should take an enormous amount of code to implement, but that is not necessarily true. Using PyTorch, we can build a simple GAN model in fewer than 50 lines of code. There are really only five parts to think about:
R: the original, real dataset
I: the random noise fed into the generator, as a source of entropy
G: the generator, which tries to imitate the original data
D: the discriminator, which tries to tell G's generated data apart from R
The "training" loop in which we teach G to fool D and teach D to watch out for G.
1.) R: In our example we start with the simplest possible R, a bell curve. It takes a mean and a standard deviation as inputs and returns a function that produces sample data with the right shape (drawn from a Gaussian with those parameters). In our code example we use a mean of 4 and a standard deviation of 1.25.
End of explanation
def get_generator_input_sampler():
return lambda m, n: torch.rand(m, n) # Uniform-dist data into generator, _NOT_ Gaussian
Explanation: 2.) I: The input to the generator is random, and to make things a little harder we use a uniform distribution rather than a normal one. This means our Model G cannot simply shift or rescale the input to copy R; it has to reshape the data in a non-linear way.
End of explanation
# ##### MODELS: Generator model and discriminator model
class Generator(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(Generator, self).__init__()
self.map1 = nn.Linear(input_size, hidden_size)
self.map2 = nn.Linear(hidden_size, hidden_size)
self.map3 = nn.Linear(hidden_size, output_size)
def forward(self, x):
x = F.elu(self.map1(x))
x = F.sigmoid(self.map2(x))
return self.map3(x)
Explanation: 3.) G: The generator is a standard feedforward graph with two hidden layers and three linear maps. We use ELU (exponential linear unit) activations. G receives uniformly distributed samples from I and has to find some way to mimic the normally distributed samples from R.
End of explanation
class Discriminator(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(Discriminator, self).__init__()
self.map1 = nn.Linear(input_size, hidden_size)
self.map2 = nn.Linear(hidden_size, hidden_size)
self.map3 = nn.Linear(hidden_size, output_size)
def forward(self, x):
x = F.elu(self.map1(x))
x = F.elu(self.map2(x))
return F.sigmoid(self.map3(x))
# and some other boilerplate code
def extract(v):
return v.data.storage().tolist()
def stats(d):
return [np.mean(d), np.std(d)]
def decorate_with_diffs(data, exponent):
mean = torch.mean(data.data, 1, keepdim=True)
mean_broadcast = torch.mul(torch.ones(data.size()), mean.tolist()[0][0])
diffs = torch.pow(data - Variable(mean_broadcast), exponent)
return torch.cat([data, diffs], 1)
d_sampler = get_distribution_sampler(data_mean, data_stddev)
gi_sampler = get_generator_input_sampler()
G = Generator(input_size=g_input_size, hidden_size=g_hidden_size, output_size=g_output_size)
D = Discriminator(input_size=d_input_func(d_input_size), hidden_size=d_hidden_size, output_size=d_output_size)
criterion = nn.BCELoss() # Binary cross entropy: http://pytorch.org/docs/nn.html#bceloss
d_optimizer = optim.Adam(D.parameters(), lr=d_learning_rate, betas=optim_betas)
g_optimizer = optim.Adam(G.parameters(), lr=g_learning_rate, betas=optim_betas)
Explanation: 4.) D: The discriminator code is very close to the generator code for G: a feedforward graph with two hidden layers and three linear maps. It takes samples from either R or G and outputs a verdict between 0 and 1, corresponding to fake or genuine. This is just about the weakest version of a neural network there is.
End of explanation
for epoch in range(num_epochs):
for d_index in range(d_steps):
# 1. Train D on real+fake
D.zero_grad()
# 1A: Train D on real
d_real_data = Variable(d_sampler(d_input_size))
d_real_decision = D(preprocess(d_real_data))
d_real_error = criterion(d_real_decision, Variable(torch.ones(1))) # ones = true
d_real_error.backward() # compute/store gradients, but don't change params
# 1B: Train D on fake
d_gen_input = Variable(gi_sampler(minibatch_size, g_input_size))
d_fake_data = G(d_gen_input).detach() # detach to avoid training G on these labels
d_fake_decision = D(preprocess(d_fake_data.t()))
d_fake_error = criterion(d_fake_decision, Variable(torch.zeros(1))) # zeros = fake
d_fake_error.backward()
d_optimizer.step() # Only optimizes D's parameters; changes based on stored gradients from backward()
for g_index in range(g_steps):
# 2. Train G on D's response (but DO NOT train D on these labels)
G.zero_grad()
gen_input = Variable(gi_sampler(minibatch_size, g_input_size))
g_fake_data = G(gen_input)
dg_fake_decision = D(preprocess(g_fake_data.t()))
g_error = criterion(dg_fake_decision, Variable(torch.ones(1))) # we want to fool, so pretend it's all genuine
g_error.backward()
g_optimizer.step() # Only optimizes G's parameters
if epoch % print_interval == 0:
print("epoch: %s : D: %s/%s G: %s (Real: %s, Fake: %s) " % (epoch,
extract(d_real_error)[0],
extract(d_fake_error)[0],
extract(g_error)[0],
stats(extract(d_real_data)),
stats(extract(d_fake_data))))
Explanation: 5.) Finally, the training loop alternates between two modes: first we train D on accurately labeled real data versus fake data; then we train G to fool D, this time with deliberately wrong labels. This, friends, is the battle between good and evil.
Even if you have never touched PyTorch, you can probably tell what is going on. In the first part (inside the for d_index in range(d_steps) loop) we push both kinds of data through D and apply different criteria to D's guesses versus the true labels. That is the "forward" step; we then call "backward()" to compute the gradients, which are used in d_optimizer.step() to update D's parameters. G is used here but not trained yet.
In the last part (inside the for g_index in range(g_steps) loop) we do the same thing for G. Note that we run G's output through D (in effect handing the forger an expert to practice against), but in this step we do not optimize or change D. We do not want the authenticator D to learn from wrong labels. Therefore we only call g_optimizer.step().
End of explanation |
3,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object Model
bqplot is based on the Grammar of Graphics paradigm. The Object Model in bqplot gives the user full flexibility to build custom plots. This means the API is verbose but fully customizable.
The following are the steps to build a Figure in bqplot using the Object Model
Step1: For creating other marks (like scatter, pie, bars, etc.), only step 3 needs to be changed. Let's look at a simple example to create a bar chart
Step2: Multiple marks can be rendered in a figure. It's as easy as passing a list of marks when constructing the Figure object | Python Code:
from bqplot import (
LinearScale,
Axis,
Figure,
OrdinalScale,
LinearScale,
Bars,
Lines,
Scatter,
)
# first, let's create two vectors x and y to plot using a Lines mark
import numpy as np
x = np.linspace(-10, 10, 100)
y = np.sin(x)
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label="X")
yax = Axis(scale=ys, orientation="vertical", label="Y")
# 3. Create a Lines mark by passing in the scales
# note that Lines object is stored in `line` which can be used later to update the plot
line = Lines(x=x, y=y, scales={"x": xs, "y": ys})
# 4. Create a Figure object by assembling marks and axes
fig = Figure(marks=[line], axes=[xax, yax], title="Simple Line Chart")
# 5. Render the figure using display or just as is
fig
Explanation: Object Model
bqplot is based on the Grammar of Graphics paradigm. The Object Model in bqplot gives the user full flexibility to build custom plots. This means the API is verbose but fully customizable.
The following are the steps to build a Figure in bqplot using the Object Model:
Build the scales for x and y quantities using the Scale classes (Scales map the data into pixels in the figure)
Build the marks using the Mark classes. Marks represent the core plotting objects (lines, scatter, bars, pies etc.). Marks take the scale objects created in step 1 as arguments
Build the axes for x and y scales
Finally, create a figure using the Figure class. Figure takes marks and axes as inputs. The Figure object is a widget (it inherits from DOMWidget) and can be rendered like any other Jupyter widget
Let's look at a simple example to understand these concepts:
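One more note on the example above: because the Lines mark was stored in the line variable, the rendered figure can later be updated in place simply by assigning new data to the mark's trait attributes (a small sketch, assuming the figure above is being displayed):
# Illustration only: the displayed figure updates automatically
line.y = np.cos(x)      # swap in new y data for the existing mark
line.colors = ["red"]   # restyle it without rebuilding the figure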
End of explanation
# first, let's create two vectors x and y to plot a bar chart
x = list("ABCDE")
y = np.random.rand(5)
# 1. Create the scales
xs = OrdinalScale() # note the use of ordinal scale to represent categorical data
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label="X", grid_lines="none") # no grid lines needed for x
yax = Axis(
scale=ys, orientation="vertical", label="Y", tick_format=".0%"
) # note the use of tick_format to format ticks
# 3. Create a Bars mark by passing in the scales
# note that Bars object is stored in `bar` object which can be used later to update the plot
bar = Bars(x=x, y=y, scales={"x": xs, "y": ys}, padding=0.2)
# 4. Create a Figure object by assembling marks and axes
Figure(marks=[bar], axes=[xax, yax], title="Simple Bar Chart")
Explanation: For creating other marks (like scatter, pie, bars, etc.), only step 3 needs to be changed. Let's look at a simple example to create a bar chart:
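Some marks need even less setup: a Pie mark, for instance, requires no scales at all. A minimal sketch (assuming bqplot's Pie mark and its sizes/labels attributes):
# Illustration only: a pie chart needs no scales or axes
from bqplot import Pie
pie = Pie(sizes=[10, 25, 65], labels=["A", "B", "C"])
Figure(marks=[pie], title="Simple Pie Chart")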
End of explanation
# first, let's create two vectors x and y
import numpy as np
x = np.linspace(-10, 10, 25)
y = 3 * x + 5
y_noise = y + 10 * np.random.randn(25) # add some random noise to y
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label="X")
yax = Axis(scale=ys, orientation="vertical", label="Y")
# 3. Create a Lines and Scatter marks by passing in the scales
# additional attributes (stroke_width, colors etc.) can be passed as attributes to the mark objects as needed
line = Lines(x=x, y=y, scales={"x": xs, "y": ys}, colors=["green"], stroke_width=3)
scatter = Scatter(
x=x, y=y_noise, scales={"x": xs, "y": ys}, colors=["red"], stroke="black"
)
# 4. Create a Figure object by assembling marks and axes
# pass both the marks (line and scatter) as a list to the marks attribute
Figure(marks=[line, scatter], axes=[xax, yax], title="Scatter and Line")
Explanation: Multiple marks can be rendered in a figure. It's as easy as passing a list of marks when constructing the Figure object
End of explanation |
3,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial #12
Adversarial Noise for MNIST
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous Tutorial #11 showed how to find so-called adversarial examples for a state-of-the-art neural network, which caused the network to mis-classify images even though they looked identical to the human eye. For example, an image of a parrot became mis-classified as a bookcase when adding the adversarial noise, but the image looked completely unchanged to the human eye.
The adversarial noise in Tutorial #11 was found through an optimization process for each individual image. Because the noise was specialized for each image, it may not generalize and have any effect on other images.
In this tutorial we will instead find adversarial noise that causes nearly all input images to become mis-classified as a desired target-class. The MNIST data-set of hand-written digits is used as an example. The adversarial noise is now clearly visible to the human eye, but the digits are still easily identified by a human, while the neural network mis-classifies nearly all the images.
In this tutorial we will also try and make the neural network immune to adversarial noise.
Tutorial #11 used NumPy for the adversarial optimization. In this tutorial we will show how to implement the optimization process directly in TensorFlow. This might be faster, especially when using a GPU, because it does not need to copy data to and from the GPU in each iteration.
It is recommended that you first study Tutorial #11. You should also be familiar with TensorFlow in general, see e.g. Tutorials #01 and #02.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.
This example shows an input image with a hand-written 7-digit. The adversarial noise is then added to the image. Red noise-pixels are positive and make the input image darker in those pixels, while blue noise-pixels are negative and make the input lighter in those pixels.
The noisy image is then fed to the neural network which results in a predicted class-number. In this case the adversarial noise fools the network into believing that the 7-digit shows a 3-digit. The noise is clearly visible to humans, but the 7-digit is still easily identified by a human.
The remarkable thing here is that a single noise-pattern causes the neural network to mis-classify almost all input images as the desired target-class.
There are two separate optimization procedures in this neural network. First we optimize the variables of the neural network so as to classify images in the training-set. This is the normal optimization procedure for neural networks. Once the classification accuracy is good enough, we switch to the second optimization procedure, which tries to find a single pattern of adversarial noise that causes all input images to be mis-classified as the given target-class.
The two optimization procedures are completely separate. The first procedure only modifies the variables of the neural network, while the second procedure only modifies the adversarial noise.
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step4: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step5: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
Step6: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
Step7: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. If the noise is supplied then it is added to all images.
Step8: Plot a few images to see if data is correct
Step9: TensorFlow Graph
The computational graph for the neural network will now be constructed using TensorFlow and PrettyTensor. As usual, we need to create placeholder variables for feeding images into the graph and then we add the adversarial noise to the images. The noisy images are then used as input to a convolutional neural network.
There are two separate optimization procedures for this network: a normal optimization procedure for the variables of the neural network itself, and another optimization procedure for the adversarial noise. Both optimization procedures are implemented directly in TensorFlow.
Placeholder variables
Placeholder variables provide the input to the computational graph in TensorFlow that we may change each time we execute the graph. We call this feeding the placeholder variables.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
Step10: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step13: Adversarial Noise
The pixels in the input image are float-values between 0.0 and 1.0. The adversarial noise is a number that is added or subtracted from the pixels in the input image.
The limit of the adversarial noise is set to 0.35 so the noise will be between ±0.35.
Step14: The optimizer for the adversarial noise will try and minimize two loss-measures
Step15: When we create the new variable for the noise, we must inform TensorFlow which variable-collections that it belongs to, so we can later inform the two optimizers which variables to update.
First we define a name for our new variable-collection. This is just a string.
Step16: Then we create a list of the collections that we want the new noise-variable to belong to. If we add the noise-variable to the collection tf.GraphKeys.VARIABLES then it will also get initialized with all the other variables in the TensorFlow graph, but it will not get optimized. This is a bit confusing.
Step17: Now we can create the new variable for the adversarial noise. It will be initialized to zero. It will not be trainable, so it will not be optimized along with the other variables of the neural network. This allows us to create two separate optimization procedures.
Step18: The adversarial noise will be limited / clipped to the given
± noise-limit that we set above. Note that this is actually not executed at this point in the computational graph, but will instead be executed after the optimization-step, see further below.
Step19: The noisy image is just the sum of the input image and the adversarial noise.
Step20: When adding the noise to the input image, it may overflow the boundaries for a valid image, so we clip / limit the noisy image to ensure its pixel-values are between 0 and 1.
Step21: Convolutional Neural Network
We will use PrettyTensor to construct the convolutional neural network. First we need to wrap the tensor for the noisy image in a PrettyTensor-object, which provides functions that construct the neural network.
Step22: Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Step23: Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
Optimizer for Normal Training
This is a list of the variables for the neural network that will be trained during the normal optimization procedure. Note that 'x_noise
Step24: Optimization of these variables in the neural network is done with the Adam-optimizer using the loss-measure that was returned from PrettyTensor when we constructed the neural network above.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step25: Optimizer for Adversarial Noise
Get the list of variables that must be optimized in the second procedure for the adversarial noise.
Step26: Show the list of variable-names. There is only one, which is the adversarial noise variable that we created above.
Step27: We will combine the loss-function for the normal optimization with a so-called L2-loss for the noise-variable. This should result in the minimum values for the adversarial noise along with the best classification accuracy.
The L2-loss is scaled by a weight that is typically set close to zero.
Step28: Combine the normal loss-function with the L2-loss for the adversarial noise.
Step29: We can now create the optimizer for the adversarial noise. Because this optimizer is not supposed to update all the variables of the neural network, we must give it a list of the variables that we want updated, which is the variable for the adversarial noise. Also note the learning-rate is much greater than for the normal optimizer above.
Step30: We have now created two optimizers for the neural network, one for the variables of the neural network and another for the single variable with the adversarial noise.
Performance Measures
We need a few more operations in the TensorFlow graph which will make it easier for us to display the progress to the user during optimization.
First we calculate the predicted class number from the output of the Neural Network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
Step31: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step32: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step33: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step34: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
Step35: This is a helper-function for initializing / resetting the adversarial noise to zero.
Step36: Call the function to initialize the adversarial noise.
Step37: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
Step38: Below is the function for performing a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
This function is similar to the previous tutorials, except that it now takes an argument for the adversarial target-class. When this target-class is set to an integer, it will be used instead of the true class-number for the training-data. The adversarial optimizer is also used instead of the normal optimizer, and after each step of the adversarial optimizer, the noise will be limited / clipped to the allowed range. This optimizes the adversarial noise and ignores the other variables of the neural network.
Step39: Helper-functions for getting and plotting the noise
This function gets the adversarial noise from inside the TensorFlow graph.
Step40: This function plots the adversarial noise and prints some statistics.
Step41: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
Step42: Helper-function to plot confusion matrix
Step43: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used: the above functions are called directly from this function, so the classifications don't have to be recalculated by each of them.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
Step44: Normal optimization of neural network
First we perform 1000 optimization iterations with the normal optimizer. This finds the variables that makes the neural network perform well on the training-set.
The adversarial noise is not effective yet because it has only been initialized to zero above and it is not being updated during this optimization.
Step45: The classification accuracy is now about 96-97% on the test-set. (This will vary each time you run this Python Notebook).
Step46: Find the adversarial noise
Before we start optimizing the adversarial noise, we first initialize it to zero. This was already done above but it is repeated here in case you want to re-run this code with another target-class.
Step47: Now perform optimization of the adversarial noise. This uses the adversarial optimizer instead of the normal optimizer, which means that it only optimizes the variable for the adversarial noise, while ignoring all the other variables of the neural network.
Step48: The adversarial noise has now been optimized and it can be shown in a plot. The red pixels show positive noise-values and the blue pixels show negative noise-values. This noise-pattern is added to every input image. The positive (red) noise-values makes the pixels darker and the negative (blue) noise-values makes the pixels brighter. Examples of this are shown below.
Step49: When this noise is added to all the images in the test-set, the result is typically a classification accuracy of 10-15% depending on the target-class that was chosen. We can also see from the confusion matrix that most images in the test-set are now classified as the desired target-class - although some of the target-classes require more adversarial noise than others.
So we have found adversarial noise that makes the neural network mis-classify almost all images in the test-set as our desired target-class.
We can also show some examples of mis-classified images with the adversarial noise. The noise is clearly visible but the digits are still easily identified by the human eye.
Step50: Adversarial noise for all target-classes
This is a helper-function for finding the adversarial noise for all target-classes. The function loops over all the class-numbers from 0 to 9 and runs the optimization above. The results are then stored in an array.
Step51: Plot the adversarial noise for all target-classes
This is a helper-function for plotting a grid with the adversarial noise for all target-classes 0 to 9.
Step52: Red pixels show positive noise values, and blue pixels show negative noise values.
In some of these noise-images you can see traces of the numbers. For example, the noise for target-class 0 shows a red circle surrounded by blue. This means that a little noise will be added to the input image in the shape of a circle, and it will dampen the other pixels. This is sufficient for most input images in the MNIST data-set to be mis-classified as a 0. Another example is the noise for 3 which also shows traces of the number 3 with red pixels. But the noise for the other classes is less obvious.
Immunity to adversarial noise
We will now try and make the neural network immune to adversarial noise. We do this by re-training the neural network to ignore the adversarial noise. This process can be repeated a number of times.
Helper-function to make a neural network immune to noise
This is the helper-function for making the neural network immune to adversarial noise. First it runs the optimization to find the adversarial noise. Then it runs the normal optimization to make the neural network immune to that noise.
Step53: Make immune to noise for target-class 3
First try and make the neural network immune to the adverserial noise for targer-class 3.
First we find the adversarial noise that causes the neural network to mis-classify most of the images in the test-set. Then we run the normal optimization which fine-tunes the variables of the neural network to ignore this noise and this brings the classification accuracy for the noisy images up to 95-97% again.
Step54: Now try and run it again. It is now more difficult to find adversarial noise for the target-class 3. The neural network seems to have become somewhat immune to adversarial noise.
Step55: Make immune to noise for all target-classes
Now try and make the neural network immune to adversarial noise for all target-classes. Unfortunately this does not seem to work so well.
Step56: Make immune to all target-classes (double runs)
Now try and use double-runs to make the neural network immune to adversarial noise for all target-classes. Unfortunately this does not seem to work so well either.
Making the neural network immune to one adversarial target-class appears to cancel the immunity towards the other target-classes.
Step57: Plot the adversarial noise
We have now performed many optimizations of both the neural network and the adversarial noise. Let us see how the adversarial noise looks now.
Step58: Interestingly, the neural network now has a higher classification accuracy on noisy images than we had on clean images before all these optimizations.
Step59: Performance on clean images
Now let us see how the neural network performs on clean images so we reset the adversarial noise to zero.
Step60: The neural network now performs worse on clean images compared to noisy images.
Step61: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
from IPython.display import Image
Image('images/12_adversarial_noise_flowchart.png')
Explanation: TensorFlow Tutorial #12
Adversarial Noise for MNIST
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous Tutorial #11 showed how to find so-called adversarial examples for a state-of-the-art neural network, which caused the network to mis-classify images even though they looked identical to the human eye. For example, an image of a parrot became mis-classified as a bookcase when adding the adversarial noise, but the image looked completely unchanged to the human eye.
The adversarial noise in Tutorial #11 was found through an optimization process for each individual image. Because the noise was specialized for each image, it may not generalize and have any effect on other images.
In this tutorial we will instead find adversarial noise that causes nearly all input images to become mis-classified as a desired target-class. The MNIST data-set of hand-written digits is used as an example. The adversarial noise is now clearly visible to the human eye, but the digits are still easily identified by a human, while the neural network mis-classifies nearly all the images.
In this tutorial we will also try and make the neural network immune to adversarial noise.
Tutorial #11 used NumPy for the adversarial optimization. In this tutorial we will show how to implement the optimization process directly in TensorFlow. This might be faster, especially when using a GPU, because it does not need to copy data to and from the GPU in each iteration.
It is recommended that you first study Tutorial #11. You should also be familiar with TensorFlow in general, see e.g. Tutorials #01 and #02.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.
This example shows an input image with a hand-written 7-digit. The adversarial noise is then added to the image. Red noise-pixels are positive and make the input image darker in those pixels, while blue noise-pixels are negative and make the input lighter in those pixels.
The noisy image is then fed to the neural network which results in a predicted class-number. In this case the adversarial noise fools the network into believing that the 7-digit shows a 3-digit. The noise is clearly visible to humans, but the 7-digit is still easily identified by a human.
The remarkable thing here is that a single noise-pattern causes the neural network to mis-classify almost all input images as the desired target-class.
There are two separate optimization procedures in this neural network. First we optimize the variables of the neural network so as to classify images in the training-set. This is the normal optimization procedure for neural networks. Once the classification accuracy is good enough, we switch to the second optimization procedure, which tries to find a single pattern of adversarial noise that causes all input images to be mis-classified as the given target-class.
The two optimization procedures are completely separate. The first procedure only modifies the variables of the neural network, while the second procedure only modifies the adversarial noise.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# We also need PrettyTensor.
import prettytensor as pt
Explanation: Imports
End of explanation
tf.__version__
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
data.test.cls = np.argmax(data.test.labels, axis=1)
Explanation: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
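To make the encoding concrete, you can print the first test label next to its decoded class-number (illustration only):
# A one-hot label for the digit 7 looks like [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
print(data.test.labels[0])
print(data.test.cls[0])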
End of explanation
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
End of explanation
def plot_images(images, cls_true, cls_pred=None, noise=0.0):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Get the i'th image and reshape the array.
image = images[i].reshape(img_shape)
# Add the adversarial noise to the image.
image += noise
# Ensure the noisy pixel-values are between 0 and 1.
image = np.clip(image, 0.0, 1.0)
# Plot image.
ax.imshow(image,
cmap='binary', interpolation='nearest')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. If the noise is supplied then it is added to all images.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
Explanation: TensorFlow Graph
The computational graph for the neural network will now be constructed using TensorFlow and PrettyTensor. As usual, we need to create placeholder variables for feeding images into the graph and then we add the adversarial noise to the images. The noisy images are then used as input to a convolutional neural network.
There are two separate optimization procedures for this network: a normal optimization procedure for the variables of the neural network itself, and another optimization procedure for the adversarial noise. Both optimization procedures are implemented directly in TensorFlow.
Placeholder variables
Placeholder variables provide the input to the computational graph in TensorFlow that we may change each time we execute the graph. We call this feeding the placeholder variables.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
End of explanation
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
y_true_cls = tf.argmax(y_true, dimension=1)
Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
End of explanation
noise_limit = 0.35
Explanation: Adversarial Noise
The pixels in the input image are float-values between 0.0 and 1.0. The adversarial noise is a number that is added or subtracted from the pixels in the input image.
The limit of the adversarial noise is set to 0.35 so the noise will be between ±0.35.
End of explanation
noise_l2_weight = 0.02
Explanation: The optimizer for the adversarial noise will try and minimize two loss-measures: (1) The normal loss-measure for the neural network, so we will find the noise that gives the best classification accuracy for the adversarial target-class; and (2) the so-called L2-loss-measure which tries to keep the noise as low as possible.
The following weight determines how important the L2-loss is compared to the normal loss-measure. An L2-weight close to zero usually works best.
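For reference, tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so the scaled term that gets added to the adversarial loss below is equivalent to this NumPy expression (illustration only, not part of the graph):
# l2_term = noise_l2_weight * np.sum(noise ** 2) / 2.0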
End of explanation
ADVERSARY_VARIABLES = 'adversary_variables'
Explanation: When we create the new variable for the noise, we must inform TensorFlow which variable-collections that it belongs to, so we can later inform the two optimizers which variables to update.
First we define a name for our new variable-collection. This is just a string.
End of explanation
collections = [tf.GraphKeys.VARIABLES, ADVERSARY_VARIABLES]
Explanation: Then we create a list of the collections that we want the new noise-variable to belong to. If we add the noise-variable to the collection tf.GraphKeys.VARIABLES then it will also get initialized with all the other variables in the TensorFlow graph, but it will not get optimized. This is a bit confusing.
End of explanation
x_noise = tf.Variable(tf.zeros([img_size, img_size, num_channels]),
name='x_noise', trainable=False,
collections=collections)
Explanation: Now we can create the new variable for the adversarial noise. It will be initialized to zero. It will not be trainable, so it will not be optimized along with the other variables of the neural network. This allows us to create two separate optimization procedures.
End of explanation
x_noise_clip = tf.assign(x_noise, tf.clip_by_value(x_noise,
-noise_limit,
noise_limit))
Explanation: The adversarial noise will be limited / clipped to the given
± noise-limit that we set above. Note that this is actually not executed at this point in the computational graph, but will instead be executed after the optimization-step, see further below.
End of explanation
x_noisy_image = x_image + x_noise
Explanation: The noisy image is just the sum of the input image and the adversarial noise.
End of explanation
x_noisy_image = tf.clip_by_value(x_noisy_image, 0.0, 1.0)
Explanation: When adding the noise to the input image, it may overflow the boundaries for a valid image, so we clip / limit the noisy image to ensure its pixel-values are between 0 and 1.
End of explanation
x_pretty = pt.wrap(x_noisy_image)
Explanation: Convolutional Neural Network
We will use PrettyTensor to construct the convolutional neural network. First we need to wrap the tensor for the noisy image in a PrettyTensor-object, which provides functions that construct the neural network.
End of explanation
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(class_count=num_classes, labels=y_true)
Explanation: Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
End of explanation
[var.name for var in tf.trainable_variables()]
Explanation: Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
Optimizer for Normal Training
This is a list of the variables for the neural network that will be trained during the normal optimization procedure. Note that 'x_noise:0' is not in the list, so the adversarial noise is not being optimized in the normal procedure.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
Explanation: Optimization of these variables in the neural network is done with the Adam-optimizer using the loss-measure that was returned from PrettyTensor when we constructed the neural network above.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
adversary_variables = tf.get_collection(ADVERSARY_VARIABLES)
Explanation: Optimizer for Adversarial Noise
Get the list of variables that must be optimized in the second procedure for the adversarial noise.
End of explanation
[var.name for var in adversary_variables]
Explanation: Show the list of variable-names. There is only one, which is the adversarial noise variable that we created above.
End of explanation
l2_loss_noise = noise_l2_weight * tf.nn.l2_loss(x_noise)
Explanation: We will combine the loss-function for the normal optimization with a so-called L2-loss for the noise-variable. This should result in the minimum values for the adversarial noise along with the best classification accuracy.
The L2-loss is scaled by a weight that is typically set close to zero.
End of explanation
loss_adversary = loss + l2_loss_noise
Explanation: Combine the normal loss-function with the L2-loss for the adversarial noise.
End of explanation
optimizer_adversary = tf.train.AdamOptimizer(learning_rate=1e-2).minimize(loss_adversary, var_list=adversary_variables)
Explanation: We can now create the optimizer for the adversarial noise. Because this optimizer is not supposed to update all the variables of the neural network, we must give it a list of the variables that we want updated, which is the variable for the adversarial noise. Also note the learning-rate is much greater than for the normal optimizer above.
End of explanation
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: We have now created two optimizers for the neural network, one for the variables of the neural network and another for the single variable with the adversarial noise.
Performance Measures
We need a few more operations in the TensorFlow graph which will make it easier for us to display the progress to the user during optimization.
First we calculate the predicted class number from the output of the Neural Network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
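The same idea in plain NumPy, just to illustrate what the cast-and-mean does (not part of the TensorFlow graph):
# np.mean(np.array([True, False, True, True], dtype=np.float32))  # -> 0.75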
End of explanation
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
session.run(tf.initialize_all_variables())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
End of explanation
def init_noise():
session.run(tf.initialize_variables([x_noise]))
Explanation: This is a helper-function for initializing / resetting the adversarial noise to zero.
End of explanation
init_noise()
Explanation: Call the function to initialize the adversarial noise.
End of explanation
train_batch_size = 64
Explanation: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
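For reference, with 55,000 training images this batch size corresponds to roughly 860 batches per pass over the training-set (illustration only):
# batches_per_epoch = len(data.train.labels) // train_batch_size   # 55000 // 64 = 859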
End of explanation
def optimize(num_iterations, adversary_target_cls=None):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# If we are searching for the adversarial noise, then
# use the adversarial target-class instead.
if adversary_target_cls is not None:
# The class-labels are One-Hot encoded.
# Set all the class-labels to zero.
y_true_batch = np.zeros_like(y_true_batch)
# Set the element for the adversarial target-class to 1.
y_true_batch[:, adversary_target_cls] = 1.0
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# If doing normal optimization of the neural network.
if adversary_target_cls is None:
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
else:
# Run the adversarial optimizer instead.
# Note that we have 'faked' the class above to be
# the adversarial target-class instead of the true class.
session.run(optimizer_adversary, feed_dict=feed_dict_train)
# Clip / limit the adversarial noise. This executes
# another TensorFlow operation. It cannot be executed
# in the same session.run() as the optimizer, because
# it may run in parallel so the execution order is not
# guaranteed. We need the clip to run after the optimizer.
session.run(x_noise_clip)
# Print status every 100 iterations.
if (i % 100 == 0) or (i == num_iterations - 1):
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i, acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Explanation: Below is the function for performing a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
This function is similar to the previous tutorials, except that it now takes an argument for the adversarial target-class. When this target-class is set to an integer, it will be used instead of the true class-number for the training-data. The adversarial optimizer is also used instead of the normal optimizer, and after each step of the adversarial optimizer, the noise will be limited / clipped to the allowed range. This optimizes the adversarial noise and ignores the other variables of the neural network.
End of explanation
def get_noise():
# Run the TensorFlow session to retrieve the contents of
# the x_noise variable inside the graph.
noise = session.run(x_noise)
return np.squeeze(noise)
Explanation: Helper-functions for getting and plotting the noise
This function gets the adversarial noise from inside the TensorFlow graph.
End of explanation
def plot_noise():
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Print statistics.
print("Noise:")
print("- Min:", noise.min())
print("- Max:", noise.max())
print("- Std:", noise.std())
# Plot the noise.
plt.imshow(noise, interpolation='nearest', cmap='seismic',
vmin=-1.0, vmax=1.0)
Explanation: This function plots the adversarial noise and prints some statistics.
End of explanation
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9],
noise=noise)
Explanation: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
Explanation: Helper-function to plot confusion matrix
End of explanation
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
Explanation: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used: the above functions are called directly from this function, so the classifications don't have to be recalculated by each of them.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
End of explanation
optimize(num_iterations=1000)
Explanation: Normal optimization of neural network
First we perform 1000 optimization iterations with the normal optimizer. This finds the variables that makes the neural network perform well on the training-set.
The adversarial noise is not effective yet because it has only been initialized to zero above and it is not being updated during this optimization.
End of explanation
print_test_accuracy(show_example_errors=True)
Explanation: The classification accuracy is now about 96-97% on the test-set. (This will vary each time you run this Python Notebook).
End of explanation
init_noise()
Explanation: Find the adversarial noise
Before we start optimizing the adversarial noise, we first initialize it to zero. This was already done above but it is repeated here in case you want to re-run this code with another target-class.
End of explanation
optimize(num_iterations=1000, adversary_target_cls=3)
Explanation: Now perform optimization of the adversarial noise. This uses the adversarial optimizer instead of the normal optimizer, which means that it only optimizes the variable for the adversarial noise, while ignoring all the other variables of the neural network.
End of explanation
plot_noise()
Explanation: The adversarial noise has now been optimized and it can be shown in a plot. The red pixels show positive noise-values and the blue pixels show negative noise-values. This noise-pattern is added to every input image. The positive (red) noise-values makes the pixels darker and the negative (blue) noise-values makes the pixels brighter. Examples of this are shown below.
End of explanation
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: When this noise is added to all the images in the test-set, the result is typically a classification accuracy of 10-15% depending on the target-class that was chosen. We can also see from the confusion matrix that most images in the test-set are now classified as the desired target-class - although some of the target-classes require more adversarial noise than others.
So we have found adversarial noise that makes the neural network mis-classify almost all images in the test-set as our desired target-class.
We can also show some examples of mis-classified images with the adversarial noise. The noise is clearly visible but the digits are still easily identified by the human eye.
End of explanation
def find_all_noise(num_iterations=1000):
# Adversarial noise for all target-classes.
all_noise = []
# For each target-class.
for i in range(num_classes):
print("Finding adversarial noise for target-class:", i)
# Reset the adversarial noise to zero.
init_noise()
# Optimize the adversarial noise.
optimize(num_iterations=num_iterations,
adversary_target_cls=i)
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Append the noise to the array.
all_noise.append(noise)
# Print newline.
print()
return all_noise
all_noise = find_all_noise(num_iterations=300)
Explanation: Adversarial noise for all target-classes
This is a helper-function for finding the adversarial noise for all target-classes. The function loops over all the class-numbers from 0 to 9 and runs the optimization above. The results are then stored in an array.
End of explanation
def plot_all_noise(all_noise):
# Create figure with 10 sub-plots.
fig, axes = plt.subplots(2, 5)
fig.subplots_adjust(hspace=0.2, wspace=0.1)
# For each sub-plot.
for i, ax in enumerate(axes.flat):
# Get the adversarial noise for the i'th target-class.
noise = all_noise[i]
# Plot the noise.
ax.imshow(noise,
cmap='seismic', interpolation='nearest',
vmin=-1.0, vmax=1.0)
# Show the classes as the label on the x-axis.
ax.set_xlabel(i)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
plot_all_noise(all_noise)
Explanation: Plot the adversarial noise for all target-classes
This is a helper-function for plotting a grid with the adversarial noise for all target-classes 0 to 9.
End of explanation
def make_immune(target_cls, num_iterations_adversary=500,
num_iterations_immune=200):
print("Target-class:", target_cls)
print("Finding adversarial noise ...")
# Find the adversarial noise.
optimize(num_iterations=num_iterations_adversary,
adversary_target_cls=target_cls)
# Newline.
print()
# Print classification accuracy.
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
# Newline.
print()
print("Making the neural network immune to the noise ...")
# Try and make the neural network immune to this noise.
# Note that the adversarial noise has not been reset to zero
# so the x_noise variable still holds the noise.
# So we are training the neural network to ignore the noise.
optimize(num_iterations=num_iterations_immune)
# Newline.
print()
# Print classification accuracy.
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
Explanation: Red pixels show positive noise values, and blue pixels show negative noise values.
In some of these noise-images you can see traces of the numbers. For example, the noise for target-class 0 shows a red circle surrounded by blue. This means that a little noise will be added to the input image in the shape of a circle, and it will dampen the other pixels. This is sufficient for most input images in the MNIST data-set to be mis-classified as a 0. Another example is the noise for 3 which also shows traces of the number 3 with red pixels. But the noise for the other classes is less obvious.
Immunity to adversarial noise
We will now try and make the neural network immune to adversarial noise. We do this by re-training the neural network to ignore the adversarial noise. This process can be repeated a number of times.
Helper-function to make a neural network immune to noise
This is the helper-function for making the neural network immune to adversarial noise. First it runs the optimization to find the adversarial noise. Then it runs the normal optimization to make the neural network immune to that noise.
End of explanation
make_immune(target_cls=3)
Explanation: Make immune to noise for target-class 3
First try and make the neural network immune to the adversarial noise for target-class 3.
First we find the adversarial noise that causes the neural network to mis-classify most of the images in the test-set. Then we run the normal optimization which fine-tunes the variables of the neural network to ignore this noise and this brings the classification accuracy for the noisy images up to 95-97% again.
End of explanation
make_immune(target_cls=3)
Explanation: Now try and run it again. It is now more difficult to find adversarial noise for the target-class 3. The neural network seems to have become somewhat immune to adversarial noise.
End of explanation
for i in range(10):
make_immune(target_cls=i)
# Print newline.
print()
Explanation: Make immune to noise for all target-classes
Now try and make the neural network immune to adversarial noise for all target-classes. Unfortunately this does not seem to work so well.
End of explanation
for i in range(10):
make_immune(target_cls=i)
# Print newline.
print()
make_immune(target_cls=i)
# Print newline.
print()
Explanation: Make immune to all target-classes (double runs)
Now try and use double-runs to make the neural network immune to adversarial noise for all target-classes. Unfortunately this does not seem to work so well either.
Making the neural network immune to one adversarial target-class appears to cancel the immunity towards the other target-classes.
End of explanation
plot_noise()
Explanation: Plot the adversarial noise
We have now performed many optimizations of both the neural network and the adversarial noise. Let us see how the adversarial noise looks now.
End of explanation
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: Interestingly, the neural network now has a higher classification accuracy on noisy images than we had on clean images before all these optimizations.
End of explanation
init_noise()
Explanation: Performance on clean images
Now let us see how the neural network performs on clean images so we reset the adversarial noise to zero.
End of explanation
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: The neural network now performs worse on clean images compared to noisy images.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
3,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
vocab_to_int =
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints =
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
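One possible way to fill in the blanks in the cell above — a sketch, not necessarily the course's reference solution — is to rank words by frequency so the most common words get the smallest integers, starting at 1 (0 stays free for padding):
from collections import Counter

# Count word frequencies and sort the vocabulary, most frequent first.
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)

# Map each word to an integer starting at 1, leaving 0 free for padding.
vocab_to_int = {word: i for i, word in enumerate(vocab, 1)}

# Convert each review into a list of integers.
reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]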
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels =
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
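A minimal sketch of the label conversion, assuming the labels string is newline-delimited in the same way as the reviews:
labels = labels.split('\n')
labels = np.array([1 if label == 'positive' else 0 for label in labels])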
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
reviews_ints =
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
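One way to drop the empty review while keeping the labels aligned (a sketch, assuming reviews_ints and labels are index-aligned):
non_zero_idx = [i for i, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[i] for i in non_zero_idx]
labels = np.array([labels[i] for i in non_zero_idx])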
seq_len = 200
features =
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
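A sketch of one common approach: truncate each review to seq_len integers and left-pad shorter ones with zeros.
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    row = row[:seq_len]                      # keep only the first seq_len words
    features[i, seq_len - len(row):] = row   # left-pad short reviews with zeros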
features[:10,:100]
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
split_frac = 0.8
train_x, val_x =
train_y, val_y =
val_x, test_x =
val_y, test_y =
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
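A sketch of a simple ordered split, assuming the ordering of the data is acceptable for splitting (otherwise shuffle first):
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]

test_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]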
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ =
labels_ =
keep_prob =
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
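One way the placeholders could be defined inside the graph scope (TF 1.x API; the strings passed to name= are just illustrative):
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')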
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding =
embed =
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
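A sketch of the embedding layer; the table gets n_words + 1 rows here because the word ids start at 1, and the uniform initialization range is an assumption:
with graph.as_default():
    # Lookup table of shape (vocab size + 1, embed_size), learned during training.
    embedding = tf.Variable(tf.random_uniform((n_words + 1, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)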
with graph.as_default():
# Your basic LSTM cell
lstm =
# Add dropout to the cell
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
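Putting the snippets quoted in the text together, the cell above could be completed like this (a sketch that simply follows those calls):
with graph.as_default():
    # Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
    # Getting an initial state of all zeros
    initial_state = cell.zero_state(batch_size, tf.float32)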
with graph.as_default():
outputs, final_state =
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
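A sketch of the forward pass, feeding the embedded vectors into the cell defined earlier:
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)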
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
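For illustration only, a quick check of the shapes this generator yields (assuming the splits defined above):
for x_batch, y_batch in get_batches(train_x, train_y, batch_size=100):
    print(x_batch.shape, y_batch.shape)  # expected: (100, 200) and (100,)
    break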
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
3,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: Actor-Learner API를 사용한 SAC minitaur
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 설정
먼저 필요한 여러 도구를 가져옵니다.
Step3: 하이퍼 매개변수
Step4: 환경
RL의 환경은 우리가 해결하려고 하는 작업 또는 문제를 나타냅니다. suites를 사용하여 TF-Agents에서 표준 환경을 쉽게 만들 수 있습니다. 문자열 환경 이름을 고려하여 OpenAI Gym, Atari, DM Control 등과 같은 소스에서 환경을 로드하기 위한 여러 suites가 마련되어 있습니다.
이제 Pybullet Suite에서 Minitaur 환경을 로드하겠습니다.
Step5: 이 환경의 목표는 에이전트가 Minitaur 로봇을 제어하고 가능한 빨리 전진하도록 하는 정책을 훈련하는 것입니다. 에피소드는 1000개 스텝 동안 유지되며 이익은 에피소드 전체에서 보상의 합계입니다.
정책이 actions를 생성하는 데 사용하는 observation로 환경이 제공하는 정보를 살펴보겠습니다.
Step6: 보이는 바와 같이 관찰이 상당히 복잡합니다. 모든 모터의 각도, 속도 및 토크를 나타내는 28개의 값을 받습니다. 그러면 환경이 [-1, 1] 사이의 행동에 대해 8개의 값을 예상합니다. 이들 값이 기대하는 모터 각도입니다.
일반적으로, 훈련 중 데이터 수집을 위한 환경과 평가를 위한 환경의 두 가지 환경을 만듭니다. 이들 환경은 순수 Python으로 작성되고 Actor Learner API가 직접 사용하는 numpy 배열을 사용합니다.
Step7: 배포 전략
DistributionStrategy API를 사용하여 데이터 병렬 처리를 사용하는 다중 GPU 또는 TPU와 같은 여러 기기에서 훈련 스텝 계산을 실행할 수 있습니다. 훈련 스텝은 다음과 같습니다.
훈련 데이터의 배치를 받습니다.
여러 기기에 이 데이터를 분할합니다.
정방향 스텝을 계산합니다.
손실의 MEAN을 집계하고 계산합니다.
역방향 스텝을 계산하고 그래디언트 변수 업데이트를 수행합니다.
TF-Agents Learner API 및 DistributionStrategy API를 사용하면 아래의 훈련 로직을 변경하지 않고도 GPU의 훈련 스텝 실행(MirroredStrategy 사용)을 TPU(TPUStrategy 사용)로 전환하기가 매우 쉽습니다.
GPU 활성화
GPU에서 실행하려면 먼저 노트북에서 사용할 수 있게 GPU를 활성화해야합니다.
Edit→Notebook Settings로 이동합니다.
Hardware Accelerator 드롭다운에서 GPU를 선택합니다.
전략 선택
strategy_utils를 사용하여 전략을 생성합니다. 내부적으로, 다음 매개변수를 전달합니다.
use_gpu = False는 CPU를 사용하는 tf.distribute.get_strategy()를 반환합니다.
use_gpu = True는 하나의 머신에서 TensorFlow에 보이는 모든 GPU를 사용하는 tf.distribute.MirroredStrategy()를 반환합니다.
Step8: 아래에서 볼 수 있듯이 모든 변수와 에이전트는 strategy.scope() 아래에 생성되어야 합니다.
에이전트
SAC 에이전트를 만들려면 먼저 훈련할 네트워크를 만들어야 합니다. SAC는 actor-critic 에이전트이므로 두 개의 네트워크가 필요합니다.
Critic은 Q(s,a)에 대한 추정치를 제공합니다. 즉, 관찰값과 행동을 입력으로 받고 주어진 상태에 대해 행동이 얼마나 좋은지를 추정합니다.
Step9: 이 critic을 사용하여 actor 네트워크를 훈련하면 관찰값에 따라 행동을 생성할 수 있습니다.
ActorNetwork는 tanh 함수 제한(tanh-squashed) MultivariateNormalDiag 분포를 위한 매개변수를 예측합니다. 그런 다음 이 분포는 행동을 생성해야 할 때마다 현재 관찰값에 따라 샘플링됩니다.
Step10: 이러한 네트워크를 통해 이제 에이전트를 인스턴스화할 수 있습니다.
Step11: 재현 버퍼
환경에서 수집된 데이터를 추적하기 위해 Deepmind의 효율적이고 확장 가능하며 사용하기 쉬운 재현 시스템인 Reverb를 사용합니다. 이를 통해 Actor가 수집하고 Learner가 훈련 중에 소비하는 경험 데이터를 저장합니다.
이 튜토리얼에서는 max_size보다 덜 중요하지만 비동기 수집 및 훈련을 포함한 분산 설정에서 2~1000개 사이의 samples_per_insert를 사용하여 rate_limiters.SampleToInsertRatio를 실험할 수 있습니다. 예를 들면 다음과 같습니다.
rate_limiter=reverb.rate_limiters.SampleToInsertRatio(samples_per_insert=3.0, min_size_to_sample=3, error_buffer=3.0)
Step12: 재현 버퍼는 저장될 텐서를 설명하는 사양을 사용하여 구성되며 tf_agent.collect_data_spec을 사용하여 에이전트에서 가져올 수 있습니다.
SAC Agent에는 손실 계산을 위해 현재와 다음 관찰 값이 모두 필요하기 때문에 sequence_length=2를 설정합니다.
Step13: 이제 Reverb 재현 버퍼에서 TensorFlow 데이터세트를 생성합니다. Learner에 이 데이터세트를 전달하여 훈련을 위한 경험을 샘플링합니다.
Step14: 정책
TF-Agents에서 정책은 RL의 표준 정책 개념을 나타냅니다. 즉, 주어진 time_step에서 행동 또는 행동에 대한 분포를 생성합니다. 기본 메서드는 policy_step = policy.step(time_step)이고, 여기서 policy_step은 명명된 튜플 PolicyStep(action, state, info)입니다. policy_step.action은 환경에 적용할 action이고 state는 상태 저장(RNN) 정책에 대한 상태를 나타내며 info에는 행동의 로그 확률 등의 보조 정보가 포함될 수 있습니다.
에이전트에는 두 가지 정책이 있습니다.
agent.policy — 평가 및 배포에 사용되는 기본 정책입니다.
agent.collect_policy — 데이터 수집에 사용되는 두 번째 정책입니다.
Step15: 에이전트와 독립적으로 정책을 만들 수 있습니다. 예를 들어, tf_agents.policies.random_tf_policy를 사용하여 각 time_step 동안 행동을 무작위로 선택하는 정책을 생성합니다.
Step16: 행위자(Actor)
행위자는 정책과 환경 간의 상호 작용을 관리합니다.
Actor 구성 요소에는 환경 인스턴스(py_environment)와 정책 변수의 복사본이 포함됩니다.
각 Actor 작업자는 정책 변수의 로컬 값이 주어지면 일련의 데이터 수집 스텝을 실행합니다.
actor.run()을 호출하기 전에 훈련 스크립트에서 변수 컨테이너 클라이언트 인스턴스를 사용하여 변수 업데이트를 명시적으로 수행합니다.
관찰된 경험은 각 데이터 수집 단계에서 재현 버퍼에 기록됩니다.
데이터 수집 스텝을 실행할 때 Actor는 관찰자에게 (상태, 행동, 보상)의 궤적을 전달하여 Reverb 재현 시스템에 캐싱하고 기록하도록 합니다.
stride_length=1이므로 프레임 [(t0,t1) (t1,t2) (t2,t3), ...]에 대한 궤적을 저장합니다.
Step17: 무작위 정책으로 Actor를 만들고 재현 버퍼를 시드할 경험을 수집합니다.
Step18: 수집 정책으로 Actor를 인스턴스화하여 훈련 중에 더 많은 경험을 수집합니다.
Step19: 훈련 중에 정책을 평가하는 데 사용할 Actor를 만듭니다. 나중에 메트릭을 기록하기 위해 actor.eval_metrics(num_eval_episodes)를 전달합니다.
Step20: 학습자(Learner)
Learner 구성 요소는 에이전트를 포함하고, 재현 버퍼의 경험 데이터를 사용하여 정책 변수에 그래디언트 스텝 업데이트를 수행합니다. 하나 이상의 훈련 스텝 후에 Learner는 새로운 변수 값 세트를 변수 컨테이너에 푸시할 수 있습니다.
Step21: 메트릭 및 평가
위의 actor.eval_metrics로 eval Actor를 인스턴스화하여 정책 평가 중에 가장 일반적으로 사용되는 메트릭을 생성합니다.
평균 이익, 이익은 에피소드에 대한 환경에서 정책을 실행하는 동안 얻은 보상의 합계이며 일반적으로 몇 에피소드에 걸쳐 평균을 구합니다.
평균 에피소드 길이
Actor를 실행하여 이들 메트릭을 생성합니다.
Step22: 다른 메트릭의 기타 표준 구현에 대해서는 메트릭 모듈을 확인하세요.
에이전트 훈련하기
훈련 루프에는 환경에서 데이터를 수집하는 것과 에이전트의 네트워크를 최적화하는 것이 포함됩니다. 그 과정에서 이따금 에이전트의 정책을 평가하여 진행 상황을 파악합니다.
Step23: 시각화
플롯
에이전트의 성능을 확인하기 위해 평균 이익 대 글로벌 스텝을 플롯할 수 있습니다. Minitaur에서 보상 함수는 minitaur가 1000개 스텝에서 얼마나 멀리까지 가는지를 기준으로 하며, 에너지 소비에 불이익을 줍니다.
Step25: 비디오
각 스텝에서 환경을 렌더링하여 에이전트의 성능을 시각화하면 도움이 됩니다. 이를 수행하기 전에 먼저 이 Colab에 비디오를 포함하는 함수를 작성하겠습니다.
Step26: 다음 코드는 몇 가지 에피소드에 대한 에이전트 정책을 시각화합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!sudo apt-get update
!sudo apt-get install -y xvfb ffmpeg
!pip install 'imageio==2.4.0'
!pip install matplotlib
!pip install tf-agents[reverb]
!pip install pybullet
Explanation: Actor-Learner API를 사용한 SAC minitaur
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/7_SAC_minitaur_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a>
</td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/7_SAC_minitaur_tutorial.ipynb"><img>GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
소개
이 예는 Minitaur 환경에서 Soft Actor Critic 에이전트를 훈련하는 방법을 보여줍니다.
DQN Colab을 통해 작업했다면 이 내용이 매우 친숙할 것입니다. 주목할만한 변경 사항은 다음과 같습니다.
에이전트를 DQN에서 SAC로 변경합니다.
CartPole보다 훨씬 복잡한 환경인 Minitaur에서 훈련합니다. Minitaur 환경은 네 발 달린 로봇이 전진하도록 훈련하는 데 목표를 두고 있습니다.
분산 강화 학습을 위해 TF-Agents Actor-Learner API를 사용합니다.
이 API는 경험 재현 버퍼와 변수 컨테이너(매개변수 서버)를 사용하는 분산 데이터 수집과 여러 기기에 걸친 분산 훈련을 모두 지원합니다. 이 API는 매우 간단하고 모듈식으로 설계되었습니다. Reverb는 재현 버퍼 및 가변 컨테이너 모두에 사용하고 TF DistributionStrategy API는 GPU 및 TPU에서의 분산 훈련에 사용합니다.
다음과 같은 종속성을 설치하지 않은 경우, 다음을 실행합니다.
End of explanation
import base64
import imageio
import IPython
import matplotlib.pyplot as plt
import os
import reverb
import tempfile
import PIL.Image
import tensorflow as tf
from tf_agents.agents.ddpg import critic_network
from tf_agents.agents.sac import sac_agent
from tf_agents.agents.sac import tanh_normal_projection_network
from tf_agents.environments import suite_pybullet
from tf_agents.metrics import py_metrics
from tf_agents.networks import actor_distribution_network
from tf_agents.policies import greedy_policy
from tf_agents.policies import py_tf_eager_policy
from tf_agents.policies import random_py_policy
from tf_agents.replay_buffers import reverb_replay_buffer
from tf_agents.replay_buffers import reverb_utils
from tf_agents.train import actor
from tf_agents.train import learner
from tf_agents.train import triggers
from tf_agents.train.utils import spec_utils
from tf_agents.train.utils import strategy_utils
from tf_agents.train.utils import train_utils
tempdir = tempfile.gettempdir()
Explanation: 설정
먼저 필요한 여러 도구를 가져옵니다.
End of explanation
env_name = "MinitaurBulletEnv-v0" # @param {type:"string"}
# Use "num_iterations = 1e6" for better results (2 hrs)
# 1e5 is just so this doesn't take too long (1 hr)
num_iterations = 100000 # @param {type:"integer"}
initial_collect_steps = 10000 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_capacity = 10000 # @param {type:"integer"}
batch_size = 256 # @param {type:"integer"}
critic_learning_rate = 3e-4 # @param {type:"number"}
actor_learning_rate = 3e-4 # @param {type:"number"}
alpha_learning_rate = 3e-4 # @param {type:"number"}
target_update_tau = 0.005 # @param {type:"number"}
target_update_period = 1 # @param {type:"number"}
gamma = 0.99 # @param {type:"number"}
reward_scale_factor = 1.0 # @param {type:"number"}
actor_fc_layer_params = (256, 256)
critic_joint_fc_layer_params = (256, 256)
log_interval = 5000 # @param {type:"integer"}
num_eval_episodes = 20 # @param {type:"integer"}
eval_interval = 10000 # @param {type:"integer"}
policy_save_interval = 5000 # @param {type:"integer"}
Explanation: 하이퍼 매개변수
End of explanation
env = suite_pybullet.load(env_name)
env.reset()
PIL.Image.fromarray(env.render())
Explanation: 환경
RL의 환경은 우리가 해결하려고 하는 작업 또는 문제를 나타냅니다. suites를 사용하여 TF-Agents에서 표준 환경을 쉽게 만들 수 있습니다. 문자열 환경 이름을 고려하여 OpenAI Gym, Atari, DM Control 등과 같은 소스에서 환경을 로드하기 위한 여러 suites가 마련되어 있습니다.
이제 Pybullet Suite에서 Minitaur 환경을 로드하겠습니다.
End of explanation
print('Observation Spec:')
print(env.time_step_spec().observation)
print('Action Spec:')
print(env.action_spec())
Explanation: 이 환경의 목표는 에이전트가 Minitaur 로봇을 제어하고 가능한 빨리 전진하도록 하는 정책을 훈련하는 것입니다. 에피소드는 1000개 스텝 동안 유지되며 이익은 에피소드 전체에서 보상의 합계입니다.
정책이 actions를 생성하는 데 사용하는 observation로 환경이 제공하는 정보를 살펴보겠습니다.
End of explanation
collect_env = suite_pybullet.load(env_name)
eval_env = suite_pybullet.load(env_name)
Explanation: 보이는 바와 같이 관찰이 상당히 복잡합니다. 모든 모터의 각도, 속도 및 토크를 나타내는 28개의 값을 받습니다. 그러면 환경이 [-1, 1] 사이의 행동에 대해 8개의 값을 예상합니다. 이들 값이 기대하는 모터 각도입니다.
일반적으로, 훈련 중 데이터 수집을 위한 환경과 평가를 위한 환경의 두 가지 환경을 만듭니다. 이들 환경은 순수 Python으로 작성되고 Actor Learner API가 직접 사용하는 numpy 배열을 사용합니다.
End of explanation
use_gpu = True #@param {type:"boolean"}
strategy = strategy_utils.get_strategy(tpu=False, use_gpu=use_gpu)
Explanation: 배포 전략
DistributionStrategy API를 사용하여 데이터 병렬 처리를 사용하는 다중 GPU 또는 TPU와 같은 여러 기기에서 훈련 스텝 계산을 실행할 수 있습니다. 훈련 스텝은 다음과 같습니다.
훈련 데이터의 배치를 받습니다.
여러 기기에 이 데이터를 분할합니다.
정방향 스텝을 계산합니다.
손실의 MEAN을 집계하고 계산합니다.
역방향 스텝을 계산하고 그래디언트 변수 업데이트를 수행합니다.
TF-Agents Learner API 및 DistributionStrategy API를 사용하면 아래의 훈련 로직을 변경하지 않고도 GPU의 훈련 스텝 실행(MirroredStrategy 사용)을 TPU(TPUStrategy 사용)로 전환하기가 매우 쉽습니다.
GPU 활성화
GPU에서 실행하려면 먼저 노트북에서 사용할 수 있게 GPU를 활성화해야합니다.
Edit→Notebook Settings로 이동합니다.
Hardware Accelerator 드롭다운에서 GPU를 선택합니다.
전략 선택
strategy_utils를 사용하여 전략을 생성합니다. 내부적으로, 다음 매개변수를 전달합니다.
use_gpu = False는 CPU를 사용하는 tf.distribute.get_strategy()를 반환합니다.
use_gpu = True는 하나의 머신에서 TensorFlow에 보이는 모든 GPU를 사용하는 tf.distribute.MirroredStrategy()를 반환합니다.
End of explanation
observation_spec, action_spec, time_step_spec = (
spec_utils.get_tensor_specs(collect_env))
with strategy.scope():
critic_net = critic_network.CriticNetwork(
(observation_spec, action_spec),
observation_fc_layer_params=None,
action_fc_layer_params=None,
joint_fc_layer_params=critic_joint_fc_layer_params,
kernel_initializer='glorot_uniform',
last_kernel_initializer='glorot_uniform')
Explanation: 아래에서 볼 수 있듯이 모든 변수와 에이전트는 strategy.scope() 아래에 생성되어야 합니다.
에이전트
SAC 에이전트를 만들려면 먼저 훈련할 네트워크를 만들어야 합니다. SAC는 actor-critic 에이전트이므로 두 개의 네트워크가 필요합니다.
Critic은 Q(s,a)에 대한 추정치를 제공합니다. 즉, 관찰값과 행동을 입력으로 받고 주어진 상태에 대해 행동이 얼마나 좋은지를 추정합니다.
End of explanation
with strategy.scope():
actor_net = actor_distribution_network.ActorDistributionNetwork(
observation_spec,
action_spec,
fc_layer_params=actor_fc_layer_params,
continuous_projection_net=(
tanh_normal_projection_network.TanhNormalProjectionNetwork))
Explanation: 이 critic을 사용하여 actor 네트워크를 훈련하면 관찰값에 따라 행동을 생성할 수 있습니다.
ActorNetwork는 tanh 함수 제한(tanh-squashed) MultivariateNormalDiag 분포를 위한 매개변수를 예측합니다. 그런 다음 이 분포는 행동을 생성해야 할 때마다 현재 관찰값에 따라 샘플링됩니다.
End of explanation
with strategy.scope():
train_step = train_utils.create_train_step()
tf_agent = sac_agent.SacAgent(
time_step_spec,
action_spec,
actor_network=actor_net,
critic_network=critic_net,
actor_optimizer=tf.keras.optimizers.Adam(
learning_rate=actor_learning_rate),
critic_optimizer=tf.keras.optimizers.Adam(
learning_rate=critic_learning_rate),
alpha_optimizer=tf.keras.optimizers.Adam(
learning_rate=alpha_learning_rate),
target_update_tau=target_update_tau,
target_update_period=target_update_period,
td_errors_loss_fn=tf.math.squared_difference,
gamma=gamma,
reward_scale_factor=reward_scale_factor,
train_step_counter=train_step)
tf_agent.initialize()
Explanation: 이러한 네트워크를 통해 이제 에이전트를 인스턴스화할 수 있습니다.
End of explanation
table_name = 'uniform_table'
table = reverb.Table(
table_name,
max_size=replay_buffer_capacity,
sampler=reverb.selectors.Uniform(),
remover=reverb.selectors.Fifo(),
rate_limiter=reverb.rate_limiters.MinSize(1))
reverb_server = reverb.Server([table])
Explanation: 재현 버퍼
환경에서 수집된 데이터를 추적하기 위해 Deepmind의 효율적이고 확장 가능하며 사용하기 쉬운 재현 시스템인 Reverb를 사용합니다. 이를 통해 Actor가 수집하고 Learner가 훈련 중에 소비하는 경험 데이터를 저장합니다.
이 튜토리얼에서는 max_size보다 덜 중요하지만 비동기 수집 및 훈련을 포함한 분산 설정에서 2~1000개 사이의 samples_per_insert를 사용하여 rate_limiters.SampleToInsertRatio를 실험할 수 있습니다. 예를 들면 다음과 같습니다.
rate_limiter=reverb.rate_limiters.SampleToInsertRatio(samples_per_insert=3.0, min_size_to_sample=3, error_buffer=3.0)
End of explanation
reverb_replay = reverb_replay_buffer.ReverbReplayBuffer(
tf_agent.collect_data_spec,
sequence_length=2,
table_name=table_name,
local_server=reverb_server)
Explanation: 재현 버퍼는 저장될 텐서를 설명하는 사양을 사용하여 구성되며 tf_agent.collect_data_spec을 사용하여 에이전트에서 가져올 수 있습니다.
SAC Agent에는 손실 계산을 위해 현재와 다음 관찰 값이 모두 필요하기 때문에 sequence_length=2를 설정합니다.
End of explanation
dataset = reverb_replay.as_dataset(
sample_batch_size=batch_size, num_steps=2).prefetch(50)
experience_dataset_fn = lambda: dataset
Explanation: 이제 Reverb 재현 버퍼에서 TensorFlow 데이터세트를 생성합니다. Learner에 이 데이터세트를 전달하여 훈련을 위한 경험을 샘플링합니다.
End of explanation
tf_eval_policy = tf_agent.policy
eval_policy = py_tf_eager_policy.PyTFEagerPolicy(
tf_eval_policy, use_tf_function=True)
tf_collect_policy = tf_agent.collect_policy
collect_policy = py_tf_eager_policy.PyTFEagerPolicy(
tf_collect_policy, use_tf_function=True)
Explanation: 정책
TF-Agents에서 정책은 RL의 표준 정책 개념을 나타냅니다. 즉, 주어진 time_step에서 행동 또는 행동에 대한 분포를 생성합니다. 기본 메서드는 policy_step = policy.step(time_step)이고, 여기서 policy_step은 명명된 튜플 PolicyStep(action, state, info)입니다. policy_step.action은 환경에 적용할 action이고 state는 상태 저장(RNN) 정책에 대한 상태를 나타내며 info에는 행동의 로그 확률 등의 보조 정보가 포함될 수 있습니다.
에이전트에는 두 가지 정책이 있습니다.
agent.policy — 평가 및 배포에 사용되는 기본 정책입니다.
agent.collect_policy — 데이터 수집에 사용되는 두 번째 정책입니다.
End of explanation
random_policy = random_py_policy.RandomPyPolicy(
collect_env.time_step_spec(), collect_env.action_spec())
Explanation: 에이전트와 독립적으로 정책을 만들 수 있습니다. 예를 들어, tf_agents.policies.random_tf_policy를 사용하여 각 time_step 동안 행동을 무작위로 선택하는 정책을 생성합니다.
End of explanation
rb_observer = reverb_utils.ReverbAddTrajectoryObserver(
reverb_replay.py_client,
table_name,
sequence_length=2,
stride_length=1)
Explanation: 행위자(Actor)
행위자는 정책과 환경 간의 상호 작용을 관리합니다.
Actor 구성 요소에는 환경 인스턴스(py_environment)와 정책 변수의 복사본이 포함됩니다.
각 Actor 작업자는 정책 변수의 로컬 값이 주어지면 일련의 데이터 수집 스텝을 실행합니다.
actor.run()을 호출하기 전에 훈련 스크립트에서 변수 컨테이너 클라이언트 인스턴스를 사용하여 변수 업데이트를 명시적으로 수행합니다.
관찰된 경험은 각 데이터 수집 단계에서 재현 버퍼에 기록됩니다.
데이터 수집 스텝을 실행할 때 Actor는 관찰자에게 (상태, 행동, 보상)의 궤적을 전달하여 Reverb 재현 시스템에 캐싱하고 기록하도록 합니다.
stride_length=1이므로 프레임 [(t0,t1) (t1,t2) (t2,t3), ...]에 대한 궤적을 저장합니다.
End of explanation
initial_collect_actor = actor.Actor(
collect_env,
random_policy,
train_step,
steps_per_run=initial_collect_steps,
observers=[rb_observer])
initial_collect_actor.run()
Explanation: 무작위 정책으로 Actor를 만들고 재현 버퍼를 시드할 경험을 수집합니다.
End of explanation
env_step_metric = py_metrics.EnvironmentSteps()
collect_actor = actor.Actor(
collect_env,
collect_policy,
train_step,
steps_per_run=1,
metrics=actor.collect_metrics(10),
summary_dir=os.path.join(tempdir, learner.TRAIN_DIR),
observers=[rb_observer, env_step_metric])
Explanation: 수집 정책으로 Actor를 인스턴스화하여 훈련 중에 더 많은 경험을 수집합니다.
End of explanation
eval_actor = actor.Actor(
eval_env,
eval_policy,
train_step,
episodes_per_run=num_eval_episodes,
metrics=actor.eval_metrics(num_eval_episodes),
summary_dir=os.path.join(tempdir, 'eval'),
)
Explanation: 훈련 중에 정책을 평가하는 데 사용할 Actor를 만듭니다. 나중에 메트릭을 기록하기 위해 actor.eval_metrics(num_eval_episodes)를 전달합니다.
End of explanation
saved_model_dir = os.path.join(tempdir, learner.POLICY_SAVED_MODEL_DIR)
# Triggers to save the agent's policy checkpoints.
learning_triggers = [
triggers.PolicySavedModelTrigger(
saved_model_dir,
tf_agent,
train_step,
interval=policy_save_interval),
triggers.StepPerSecondLogTrigger(train_step, interval=1000),
]
agent_learner = learner.Learner(
tempdir,
train_step,
tf_agent,
experience_dataset_fn,
triggers=learning_triggers,
strategy=strategy)
Explanation: 학습자(Learner)
Learner 구성 요소는 에이전트를 포함하고, 재현 버퍼의 경험 데이터를 사용하여 정책 변수에 그래디언트 스텝 업데이트를 수행합니다. 하나 이상의 훈련 스텝 후에 Learner는 새로운 변수 값 세트를 변수 컨테이너에 푸시할 수 있습니다.
End of explanation
def get_eval_metrics():
eval_actor.run()
results = {}
for metric in eval_actor.metrics:
results[metric.name] = metric.result()
return results
metrics = get_eval_metrics()
def log_eval_metrics(step, metrics):
eval_results = (', ').join(
'{} = {:.6f}'.format(name, result) for name, result in metrics.items())
print('step = {0}: {1}'.format(step, eval_results))
log_eval_metrics(0, metrics)
Explanation: 메트릭 및 평가
위의 actor.eval_metrics로 eval Actor를 인스턴스화하여 정책 평가 중에 가장 일반적으로 사용되는 메트릭을 생성합니다.
평균 이익, 이익은 에피소드에 대한 환경에서 정책을 실행하는 동안 얻은 보상의 합계이며 일반적으로 몇 에피소드에 걸쳐 평균을 구합니다.
평균 에피소드 길이
Actor를 실행하여 이들 메트릭을 생성합니다.
End of explanation
#@test {"skip": true}
try:
%%time
except:
pass
# Reset the train step
tf_agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = get_eval_metrics()["AverageReturn"]
returns = [avg_return]
for _ in range(num_iterations):
# Training.
collect_actor.run()
loss_info = agent_learner.run(iterations=1)
# Evaluating.
step = agent_learner.train_step_numpy
if eval_interval and step % eval_interval == 0:
metrics = get_eval_metrics()
log_eval_metrics(step, metrics)
returns.append(metrics["AverageReturn"])
if log_interval and step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, loss_info.loss.numpy()))
rb_observer.close()
reverb_server.stop()
Explanation: 다른 메트릭의 기타 표준 구현에 대해서는 메트릭 모듈을 확인하세요.
에이전트 훈련하기
훈련 루프에는 환경에서 데이터를 수집하는 것과 에이전트의 네트워크를 최적화하는 것이 포함됩니다. 그 과정에서 이따금 에이전트의 정책을 평가하여 진행 상황을 파악합니다.
End of explanation
#@test {"skip": true}
steps = range(0, num_iterations + 1, eval_interval)
plt.plot(steps, returns)
plt.ylabel('Average Return')
plt.xlabel('Step')
plt.ylim()
Explanation: 시각화
플롯
에이전트의 성능을 확인하기 위해 평균 이익 대 글로벌 스텝을 플롯할 수 있습니다. Minitaur에서 보상 함수는 minitaur가 1000개 스텝에서 얼마나 멀리까지 가는지를 기준으로 하며, 에너지 소비에 불이익을 줍니다.
End of explanation
def embed_mp4(filename):
Embeds an mp4 file in the notebook.
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
Explanation: 비디오
각 스텝에서 환경을 렌더링하여 에이전트의 성능을 시각화하면 도움이 됩니다. 이를 수행하기 전에 먼저 이 Colab에 비디오를 포함하는 함수를 작성하겠습니다.
End of explanation
num_episodes = 3
video_filename = 'sac_minitaur.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_env.render())
while not time_step.is_last():
action_step = eval_actor.policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_env.render())
embed_mp4(video_filename)
Explanation: 다음 코드는 몇 가지 에피소드에 대한 에이전트 정책을 시각화합니다.
End of explanation |
3,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here we will test parameter recovery and model comparison for Rescorla-Wagner (RW), Hierarchical Gaussian Filters (HGF), and Switching Gaussian Filters (SGF) models of the social influence task.
Step1: Let's start by generating some behavioral data from the social influence task. Here green advice/choice is encoded as 0 and the blue advice/choice is encoded as 1.
Step2: plot performance of different agents in different blocks
Step3: Fit simulated behavior
Step4: Compute fit quality and plot posterior estimates from a hierarchical parametric model
Step5: fit HGF agent to simulated data
Step6: Plot posterior estimates from simulated data for the HGF agent
Step7: Test model comparison | Python Code:
import numpy as np
from scipy import io
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
sns.set(style = 'white', color_codes = True)
%matplotlib inline
import sys
import os
import os
cwd = os.getcwd()
sys.path.append(cwd[:-len('befit/examples/social_influence')])
Explanation: Here we will test parameter recovery and model comparison for Rescorla-Wagner (RW), Hierarchical Gaussian Filters (HGF), and Switching Gaussian Filters (SGF) models of the social influence task.
End of explanation
import torch
from torch import ones, zeros, tensor
torch.manual_seed(1234)
nsub = 50 #number of subjects
trials = 120 #number of samples
from befit.tasks import SocialInfluence
from befit.simulate import Simulator
from befit.inference import Horseshoe, Normal
from befit.agents import RLSocInf, HGFSocInf, SGFSocInf
# load stimuli (trial offers, advices, and reliability of advices)
reliability = torch.from_numpy(np.load('advice_reliability.npy')).float()
reliability = reliability.reshape(trials, -1, 1).repeat(1, 1, nsub).reshape(trials, -1).unsqueeze(0)
offers = torch.from_numpy(np.load('offers.npy')).reshape(trials, -1, 1).repeat(1, 1, nsub)\
.reshape(trials, -1).unsqueeze(0)
stimuli = {'offers': offers,
'reliability': reliability}
socinfl = SocialInfluence(stimuli, nsub=nsub)
# RL agent
rl_agent = RLSocInf(runs=2*nsub, trials=trials)
trans_pars1 = torch.arange(-.5,.5,1/(2*nsub)).reshape(-1, 1) + tensor([[-2., 4., 0., 0.]])
rl_agent.set_parameters(trans_pars1)
sim1 = Simulator(socinfl, rl_agent, runs=2*nsub, trials=trials)
sim1.simulate_experiment()
# HGF agent
hgf_agent = HGFSocInf(runs=2*nsub, trials=trials)
trans_pars2 = torch.arange(-.5, .5, 1/(2*nsub)).reshape(-1, 1) + tensor([[2., 0., 4., 0., 0.]])
hgf_agent.set_parameters(trans_pars2)
sim2 = Simulator(socinfl, hgf_agent, runs=2*nsub, trials=trials)
sim2.simulate_experiment()
# SGF agent
sgf_agent = SGFSocInf(runs=2*nsub, trials=trials)
trans_pars3 = torch.arange(-.5, .5, 1/(2*nsub)).reshape(-1, 1) + tensor([[-2., -1., 4., 0., 0.]])
sgf_agent.set_parameters(trans_pars3)
sim3 = Simulator(socinfl, sgf_agent, runs=2*nsub, trials=trials)
sim3.simulate_experiment();
def posterior_accuracy(labels, df, vals):
for i, lbl in enumerate(labels):
std = df.loc[df['parameter'] == lbl].groupby(by='subject').std()
mean = df.loc[df['parameter'] == lbl].groupby(by='subject').mean()
print(lbl, np.sum(((mean+2*std).values[:, 0] > vals[i])*((mean-2*std).values[:, 0] < vals[i]))/(2*nsub))
Explanation: Let's start by generating some behavioral data from the social influence task. Here green advice/choice is encoded as 0 and the blue advice/choice is encoded as 1.
End of explanation
def compute_mean_performance(outcomes, responses):
cc1 = (outcomes * responses > 0.).float() # accept reliable offer
cc2 = (outcomes * (1 - responses) < 0.).float() # reject unreliable offer
return torch.einsum('ijk->k', cc1 + cc2)/trials
perf1 = compute_mean_performance(sim1.stimulus['outcomes'][..., 0],
sim1.responses.float()).numpy().reshape(2, -1)
print('RL agent: ', np.median(perf1, axis = -1))
fig, ax = plt.subplots(1,2, sharex = True, sharey = True)
ax[0].hist(perf1[0]);
ax[1].hist(perf1[1]);
fig.suptitle('RL agent', fontsize = 20);
ax[0].set_ylim([0, 20]);
ax[0].set_xlim([.5, 1.]);
perf2 = compute_mean_performance(sim2.stimulus['outcomes'][..., 0],
sim2.responses.float()).numpy().reshape(2, -1)
print('HGF agent: ', np.median(perf2, axis = -1))
fig, ax = plt.subplots(1,2, sharex = True, sharey = True)
ax[0].hist(perf2[0]);
ax[1].hist(perf2[1]);
fig.suptitle('HGF agent', fontsize = 20);
ax[0].set_ylim([0, 20]);
ax[0].set_xlim([.5, 1.]);
perf3 = compute_mean_performance(sim3.stimulus['outcomes'][..., 0],
sim3.responses.float()).numpy().reshape(2, -1)
print('SGF agent: ', np.median(perf3, axis = -1))
fig, ax = plt.subplots(1,2, sharex = True, sharey = True)
ax[0].hist(perf3[0]);
ax[1].hist(perf3[1]);
fig.suptitle('SGF agent', fontsize = 20);
ax[0].set_ylim([0, 20]);
ax[0].set_xlim([.5, 1.]);
Explanation: plot performance of different agents in different blocks
End of explanation
stimulus = sim1.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
rl_infer = Horseshoe(rl_agent, stimulus, sim1.responses)
rl_infer.infer_posterior(iter_steps=200)
labels = [r'$\alpha$', r'$\zeta$', r'$\beta$', r'$\theta$']
tp_df = rl_infer.sample_posterior(labels, n_samples=1000)
sim1.responses.dtype
Explanation: Fit simulated behavior
End of explanation
labels = [r'$\alpha$', r'$\zeta$', r'$\beta$', r'$\theta$']
trans_pars_rl = tp_df.melt(id_vars='subject', var_name='parameter')
vals = [trans_pars1[:,0].numpy(), trans_pars1[:, 1].numpy(), trans_pars1[:, 2].numpy(), trans_pars1[:, 3].numpy()]
posterior_accuracy(labels, trans_pars_rl, vals)
plt.figure()
#plot convergence of stochastic ELBO estimates (log-model evidence)
plt.plot(rl_infer.loss[-400:])
g = sns.FacetGrid(trans_pars_rl, col="parameter", height=3, sharey=False);
g = (g.map(sns.lineplot, 'subject', 'value', ci='sd'));
labels = [r'$\alpha$', r'$\zeta$', r'$\beta$', r'bias']
for i in range(len(labels)):
g.axes[0,i].plot(np.arange(2*nsub), trans_pars1[:,i].numpy(),'ro', zorder = 0);
Explanation: Compute fit quality and plot posterior estimates from a hierarchical parametric model
End of explanation
stimulus = sim2.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
hgf_infer = Horseshoe(hgf_agent, stimulus, sim2.responses)
hgf_infer.infer_posterior(iter_steps=200)
labels = [r'$\mu_0^2$', r'$\eta$', r'$\zeta$', r'$\beta$', r'$\theta$']
hgf_tp_df, hgf_mu_df, hgf_sigma_df = hgf_infer.sample_posterior(labels, n_samples=1000)
labels = [r'$\mu_0^2$', r'$\eta$', r'$\zeta$', r'$\beta$', r'$\theta$']
trans_pars_hgf = hgf_tp_df.melt(id_vars='subject', var_name='parameter')
vals = [trans_pars2[:, i].numpy() for i in range(len(labels))]
posterior_accuracy(labels, trans_pars_hgf, vals)
Explanation: fit HGF agent to simulated data
End of explanation
plt.figure()
#plot convergence of stochastic ELBO estimates (log-model evidence)
plt.plot(hgf_infer.loss[-400:])
g = sns.FacetGrid(trans_pars_hgf, col="parameter", height=3, sharey=False);
g = (g.map(sns.lineplot, 'subject', 'value', ci='sd'));
for i in range(len(labels)):
g.axes[0,i].plot(np.arange(2*nsub), trans_pars2[:,i].numpy(),'ro', zorder = 0);
stimulus = sim3.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
sgf_infer = Horseshoe(sgf_agent, stimulus, sim3.responses)
sgf_infer.infer_posterior(iter_steps=200)
labels = [r'$\rho_1$', r'$h$', r'$\zeta$', r'$\beta$', r'$\theta$']
sgf_tp_df, sgf_mu_df, sgf_sigma_df = sgf_infer.sample_posterior(labels, n_samples=1000)
labels = [r'$\rho_1$', r'$h$', r'$\zeta$', r'$\beta$', r'$\theta$']
trans_pars_sgf = sgf_tp_df.melt(id_vars='subject', var_name='parameter')
vals = [trans_pars3[:, i].numpy() for i in range(len(labels))]
posterior_accuracy(labels, trans_pars_sgf, vals)
plt.figure()
#plot convergence of stochastic ELBO estimates (log-model evidence)
plt.plot(sgf_infer.loss[-400:])
g = sns.FacetGrid(trans_pars_sgf, col="parameter", height=3, sharey=False);
g = (g.map(sns.lineplot, 'subject', 'value', ci='sd'));
for i in range(len(labels)):
g.axes[0,i].plot(np.arange(2*nsub), trans_pars3[:,i].numpy(),'ro', zorder = 0);
g = sns.PairGrid(sgf_mu_df)
g = g.map_diag(sns.kdeplot)
g = g.map_offdiag(plt.scatter)
g = sns.PairGrid(sgf_sigma_df)
g = g.map_diag(sns.kdeplot)
g = g.map_offdiag(plt.scatter)
#plt.plot(rl_infer.loss[-400:]);
plt.plot(hgf_infer.loss[-400:]);
plt.plot(sgf_infer.loss[-400:]);
Explanation: Plot posterior estimates from simulated data for the HGF agent
End of explanation
stimulus = sim1.stimulus
stimulus['mask'] = torch.ones(1, 120, 100)
rl_infer = [Horseshoe(rl_agent, stimulus, sim1.responses),
Horseshoe(rl_agent, stimulus, sim2.responses),
Horseshoe(rl_agent, stimulus, sim3.responses)]
evidences = torch.zeros(3, 3, 2*nsub)
for i in range(3):
rl_infer[i].infer_posterior(iter_steps = 500)
evidences[0, i] = rl_infer[i].get_log_evidence_per_subject()
hgf_infer = [Horseshoe(hgf_agent, stimulus, sim1.responses),
Horseshoe(hgf_agent, stimulus, sim2.responses),
Horseshoe(hgf_agent, stimulus, sim3.responses)]
for i in range(3):
hgf_infer[i].infer_posterior(iter_steps = 500)
evidences[1, i] = hgf_infer[i].get_log_evidence_per_subject()
sgf_infer = [Horseshoe(sgf_agent, stimulus, sim1.responses),
Horseshoe(sgf_agent, stimulus, sim2.responses),
Horseshoe(sgf_agent, stimulus, sim3.responses)]
for i in range(3):
sgf_infer[i].infer_posterior(iter_steps = 500)
evidences[2, i] = sgf_infer[i].get_log_evidence_per_subject()
print((evidences[:, 0].argmax(dim=0) == 0).sum().float()/(2*nsub))
print((evidences[:, 1].argmax(dim=0) == 1).sum().float()/(2*nsub))
print((evidences[:, 2].argmax(dim=0) == 2).sum().float()/(2*nsub))
evidences.sum(-1)
Explanation: Test model comparison
End of explanation |
3,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Differentially Private Histograms
Plotting the distribution of ages in Adult
Step1: We first read in the list of ages in the Adult UCI dataset (the first column).
Step2: Using Numpy's native histogram function, we can find the distribution of ages, as determined by ten equally-spaced bins calculated by histogram.
Step3: Using matplotlib.pyplot, we can plot a barchart of the histogram distribution.
Step4: Differentially private histograms
Using diffprivlib, we can calculate a differentially private version of the histogram. For this example, we use the default settings
Step5: Privacy Leak
Step6: Mirroring the behaviour of np.histogram
Step7: Error
Step8: Effect of epsilon
Step9: Deciding on the range parameter
We know from the dataset description that everyone in the dataset is at least 17 years of age. We don't know off-hand what the upper bound is, so for this example we'll set the upper bound to 100. As of 2019, less than 0.005% of the world's population is aged over 100, so this is an appropriate simplification. Values in the dataset above 100 will be excluded from calculations.
An epsilon of 0.1 still preserves the broad structure of the histogram.
Step10: Error for smaller datasets
Let's repeate the first experiments above with a smaller dataset, this time the Cleveland heart disease dataset from the UCI Repository. This dataset has 303 samples, a small fractin of the Adult dataset processed previously.
Step11: We first find the histogram distribution using numpy.histogram.
Step12: And then find the histogram distribution using diffprivlib.histogram, using the defaults as before (with the accompanying warning).
Step13: And double-check that the bins are the same.
Step14: We then see that the error this time is 3%, a 100-fold increase in error.
Step15: Mirroring Numpy's behaviour
We can evaluate diffprivlib.models.histogram without any privacy by setting epsilon = float("inf"). This should give the exact same result as running numpy.histogram. | Python Code:
import numpy as np
from diffprivlib import tools as dp
import matplotlib.pyplot as plt
Explanation: Differentially Private Histograms
Plotting the distribution of ages in Adult
End of explanation
ages_adult = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
usecols=0, delimiter=", ")
Explanation: We first read in the list of ages in the Adult UCI dataset (the first column).
End of explanation
hist, bins = np.histogram(ages_adult)
hist = hist / hist.sum()
Explanation: Using Numpy's native histogram function, we can find the distribution of ages, as determined by ten equally-spaced bins calculated by histogram.
End of explanation
plt.bar(bins[:-1], hist, width=(bins[1]-bins[0]) * 0.9)
plt.show()
Explanation: Using matplotlib.pyplot, we can plot a barchart of the histogram distribution.
End of explanation
dp_hist, dp_bins = dp.histogram(ages_adult)
dp_hist = dp_hist / dp_hist.sum()
plt.bar(dp_bins[:-1], dp_hist, width=(dp_bins[1] - dp_bins[0]) * 0.9)
plt.show()
Explanation: Differentially private histograms
Using diffprivlib, we can calculate a differentially private version of the histogram. For this example, we use the default settings:
- epsilon is 1.0
- range is not specified, so is calculated by the function on-the-fly. This throws a warning, as it leaks privacy about the data (from dp_bins, we know that there are people in the dataset aged 17 and 90).
End of explanation
dp_bins[0], dp_bins[-1]
Explanation: Privacy Leak: In this setting, we know for sure that at least one person in the dataset is aged 17, and another is aged 90.
End of explanation
np.all(dp_bins == bins)
Explanation: Mirroring the behaviour of np.histogram: We can see that the bins returned by diffprivlib.tools.histogram are identical to those given by numpy.histogram.
End of explanation
print("Total histogram error: %f" % np.abs(hist - dp_hist).sum())
Explanation: Error: We can see very little difference in the values of the histogram. In fact, we see an aggregate absolute error across all bins of the order of 0.01%. This is expected, due to the large size of the dataset (n=48842).
End of explanation
dp_hist, dp_bins = dp.histogram(ages_adult, epsilon=0.001)
dp_hist = dp_hist / dp_hist.sum()
print("Total histogram error: %f" % np.abs(hist - dp_hist).sum())
Explanation: Effect of epsilon: If we decrease epsilon (i.e. increase the privacy guarantee), the error will increase.
End of explanation
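To make the epsilon/accuracy trade-off more concrete, here is a small illustrative sketch (an addition for clarity, not part of the original notebook). It reuses ages_adult, hist and dp from above, lets the bins be inferred from the data as in the earlier default calls (so the same privacy warning is raised), and averages the error over a few runs because the added noise is random.
# Illustrative sketch: total histogram error as a function of epsilon.
# Reuses `ages_adult`, `hist` and `dp` from above; bins are inferred from the
# data, so the same privacy warning as in the default calls will be raised.
for eps in (0.001, 0.01, 0.1, 1.0):
    errors = []
    for _ in range(10):  # the added noise is random, so average a few runs
        dp_hist_eps, _ = dp.histogram(ages_adult, epsilon=eps)
        dp_hist_eps = dp_hist_eps / dp_hist_eps.sum()
        errors.append(np.abs(hist - dp_hist_eps).sum())
    print("epsilon = %.3f, mean total error: %f" % (eps, np.mean(errors)))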
dp_hist2, dp_bins2 = dp.histogram(ages_adult, epsilon=0.1, range=(17, 100))
dp_hist2 = dp_hist2 / dp_hist2.sum()
plt.bar(dp_bins2[:-1], dp_hist2, width=(dp_bins2[1] - dp_bins2[0]) * 0.9)
plt.show()
Explanation: Deciding on the range parameter
We know from the dataset description that everyone in the dataset is at least 17 years of age. We don't know off-hand what the upper bound is, so for this example we'll set the upper bound to 100. As of 2019, less than 0.005% of the world's population is aged over 100, so this is an appropriate simplification. Values in the dataset above 100 will be excluded from calculations.
An epsilon of 0.1 still preserves the broad structure of the histogram.
End of explanation
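As an optional illustration (not in the original notebook), the error of this epsilon=0.1 histogram can be quantified against a non-private histogram computed over the same (17, 100) range, so that its bin edges line up with dp_bins2.
# Illustrative sketch: error of the epsilon=0.1, range=(17, 100) histogram,
# measured against a non-private histogram over the same range so the bins match.
hist_bounded, _ = np.histogram(ages_adult, range=(17, 100))
hist_bounded = hist_bounded / hist_bounded.sum()
print("Total histogram error: %f" % np.abs(hist_bounded - dp_hist2).sum())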
ages_heart = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data",
usecols=0, delimiter=",")
Explanation: Error for smaller datasets
Let's repeat the first experiments above with a smaller dataset, this time the Cleveland heart disease dataset from the UCI Repository. This dataset has 303 samples, a small fraction of the Adult dataset processed previously.
End of explanation
heart_hist, heart_bins = np.histogram(ages_heart)
heart_hist = heart_hist / heart_hist.sum()
Explanation: We first find the histogram distribution using numpy.histogram.
End of explanation
dp_heart_hist, dp_heart_bins = dp.histogram(ages_heart)
dp_heart_hist = dp_heart_hist / dp_heart_hist.sum()
Explanation: And then find the histogram distribution using diffprivlib.histogram, using the defaults as before (with the accompanying warning).
End of explanation
np.all(heart_bins == dp_heart_bins)
Explanation: And double-check that the bins are the same.
End of explanation
print("Total histogram error: %f" % np.abs(heart_hist - dp_heart_hist).sum())
Explanation: We then see that the error this time is 3%, a 100-fold increase in error.
End of explanation
heart_hist, _ = np.histogram(ages_heart)
dp_heart_hist, _ = dp.histogram(ages_heart, epsilon=float("inf"))
np.all(heart_hist == dp_heart_hist)
Explanation: Mirroring Numpy's behaviour
We can evaluate diffprivlib.tools.histogram without any privacy by setting epsilon = float("inf"). This should give the exact same result as running numpy.histogram.
End of explanation |
3,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-ll', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NERC
Source ID: UKESM1-0-LL
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:26
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
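Purely as a hypothetical illustration (the name and address below are placeholders, and the call is left commented out so a placeholder author is not registered on the document), an author entry would look like:
# Hypothetical illustration only -- replace with the real author details.
# DOC.set_author("Jane Doe", "jane.doe@example.org")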
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
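As a hypothetical illustration of how a completed free-text (STRING) property reads (the wording is a placeholder, not a statement about UKESM1-0-LL, and is left commented out so it is not written into the document):
# Hypothetical illustration of a completed STRING property (placeholder text,
# deliberately commented out so it is not written into the document):
# DOC.set_value(
#     "Land surface scheme with interactive vegetation, a multi-layer snow "
#     "scheme, river routing and a coupled carbon and nitrogen cycle.")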
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
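For multi-valued (cardinality 0.N) properties such as this one, the plural "PROPERTY VALUE(S)" comment suggests that each selected choice is passed in its own DOC.set_value call; a hypothetical, commented-out illustration follows (the selections are placeholders):
# Hypothetical illustration for a multi-valued (0.N) ENUM property: each
# selected choice is assumed to go in a separate DOC.set_value call (kept
# commented out so placeholder selections are not written into the document).
# DOC.set_value("water")
# DOC.set_value("energy")
# DOC.set_value("carbon")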
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
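For INTEGER properties such as this one the value is passed as a number rather than a string; the figure below is a placeholder, not the documented UKESM1-0-LL timestep, and is kept commented out:
# Hypothetical illustration for an INTEGER property (placeholder value, kept
# commented out): a 30-minute land surface timestep would be entered as
# DOC.set_value(1800)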
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
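For BOOLEAN properties such as 28.1, the value is passed unquoted; the True below is purely illustrative and not a statement about any particular model:
DOC.set_value(True)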
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are endorheic basins (basins that do not drain to the ocean) included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
3,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fluxing with PYPIT [v2]
Step1: For the standard User (Running the script)
Generate the sensitivity function from an extracted standard star
Here is an example fluxing file (see the fluxing docs for details)
Step2: Instrument and parameters
Step3: Instantiate
Step4: Sensitivity function
Step5: Load
Step6: Find the standard (from the brightest spectrum)
Step7: Sensitivity Function
Step8: Plot
Step9: Write
Step10: Flux science
Step11: Plot
Step12: Write science frames
Step13: Instantiate and Load a sensitivity function
Step14: Clean up | Python Code:
%matplotlib inline
# import
from importlib import reload
import os
from matplotlib import pyplot as plt
import glob
import numpy as np
from astropy.table import Table
from pypeit import fluxspec
from pypeit.spectrographs.util import load_spectrograph
Explanation: Fluxing with PYPIT [v2]
End of explanation
os.getenv('PYPEIT_DEV')
Explanation: For the standard User (Running the script)
Generate the sensitivity function from an extracted standard star
Here is an example fluxing file (see the fluxing docs for details):
# User-defined fluxing parameters
[rdx]
spectrograph = vlt_fors2
[fluxcalib]
balm_mask_wid = 12.
std_file = spec1d_STD_vlt_fors2_2018Dec04T004939.578.fits
sensfunc = bpm16274_fors2.fits
Here is the call, and the sensitivity function is written to bpm16274_fors2.fits
pypit_flux_spec fluxing_filename --plot
Apply it to all spectra in a spec1d science file
Add a flux block and you can comment out the std_file parameter to avoid remaking the sensitivity function
# User-defined fluxing parameters
[rdx]
spectrograph = vlt_fors2
[fluxcalib]
balm_mask_wid = 12.
#std_file = spec1d_STD_vlt_fors2_2018Dec04T004939.578.fits
sensfunc = bpm16274_fors2.fits
flux read
spec1d_UnknownFRBHostY_vlt_fors2_2018Dec05T020241.687.fits FRB181112_fors2_1.fits
spec1d_UnknownFRBHostY_vlt_fors2_2018Dec05T021815.356.fits FRB181112_fors2_2.fits
spec1d_UnknownFRBHostY_vlt_fors2_2018Dec05T023349.816.fits FRB181112_fors2_3.fits
flux end
The new files contain fluxed spectra (and the original, unfluxed data too)
pypit_flux_spec fluxing_filename
Multi-detector (DEIMOS)
pypit_flux_spec sensfunc --std_file=spec1d_G191B2B_DEIMOS_2017Sep14T152432.fits --instr=keck_deimos --sensfunc_file=sens.yaml --multi_det=3,7
For Developers (primarily)
To play along from here, you need the Development suite reduced
And the $PYPEIT_DEV environment variable pointed at it (the code below reads PYPEIT_DEV)
End of explanation
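A small optional guard, not part of the original notebook, to confirm the environment variable is set before continuing:
import os
assert os.getenv('PYPEIT_DEV') is not None, 'Point PYPEIT_DEV at the reduced development suite first'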
spectrograph = load_spectrograph('shane_kast_blue')
par = spectrograph.default_pypeit_par()
Explanation: Instrument and parameters
End of explanation
FxSpec = fluxspec.FluxSpec(spectrograph, par['fluxcalib'])
Explanation: Instantiate
End of explanation
std_file = os.getenv('PYPEIT_DEV')+'Cooked/Science/spec1d_Feige66_KASTb_2015May20T041246.960.fits'
sci_file = os.getenv('PYPEIT_DEV')+'Cooked/Science/spec1d_J1217p3905_KASTb_2015May20T045733.560.fits'
Explanation: Sensitivity function
End of explanation
FxSpec.load_objs(std_file, std=True)
Explanation: Load
End of explanation
_ = FxSpec.find_standard()
Explanation: Find the standard (from the brightest spectrum)
End of explanation
sensfunc = FxSpec.generate_sensfunc()
sensfunc
Explanation: Sensitivity Function
End of explanation
FxSpec.show_sensfunc()
FxSpec.steps
Explanation: Plot
End of explanation
_ = FxSpec.save_sens_dict(FxSpec.sens_dict, outfile='sensfunc.fits')
Explanation: Write
End of explanation
FxSpec.flux_science(sci_file)
FxSpec.sci_specobjs
FxSpec.sci_specobjs[0].optimal
Explanation: Flux science
End of explanation
plt.clf()
ax = plt.gca()
ax.plot(FxSpec.sci_specobjs[0].optimal['WAVE'], FxSpec.sci_specobjs[0].optimal['FLAM'])
ax.plot(FxSpec.sci_specobjs[0].optimal['WAVE'], FxSpec.sci_specobjs[0].optimal['FLAM_SIG'])
ax.set_ylim(-2, 30.)
#
ax.set_xlabel('Wavelength')
ax.set_ylabel('Flux (cgs 1e-17)')
plt.show()
Explanation: Plot
End of explanation
FxSpec.write_science('tmp.fits')
FxSpec.steps
Explanation: Write science frames
End of explanation
par['fluxcalib']['sensfunc'] = 'sensfunc.fits'
FxSpec2 = fluxspec.FluxSpec(spectrograph, par['fluxcalib'])
FxSpec2.show_sensfunc()
Explanation: Instantiate and Load a sensitivity function
End of explanation
os.remove('sensfunc.fits')
os.remove('tmp.fits')
Explanation: Clean up
End of explanation |
3,280 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
My sample df has four keyword columns that contain NaN values. The goal is to concatenate the keyword values in each row from end to front (reversed order) while excluding the NaN values. | Problem:
import pandas as pd
import numpy as np
df = pd.DataFrame({'users': ['Hu Tao', 'Zhongli', 'Xingqiu'],
'keywords_0': ["a", np.nan, "c"],
'keywords_1': ["d", "e", np.nan],
'keywords_2': [np.nan, np.nan, "b"],
'keywords_3': ["f", np.nan, "g"]})
def g(df):
    # join the non-NaN keyword values in each row with '-' (left-to-right column order)
    df["keywords_all"] = df.filter(like='keyword').apply(lambda x: '-'.join(x.dropna()), axis=1)
    # reverse each joined string; because every keyword here is a single character,
    # this is equivalent to concatenating the keywords from end to front
    for i in range(len(df)):
        df.loc[i, "keywords_all"] = df.loc[i, "keywords_all"][::-1]
    return df
df = g(df.copy()) |
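A quick sanity check; the expected strings below are worked out by hand from the sample frame above:
print(df['keywords_all'].tolist())   # expected: ['f-d-a', 'e', 'g-b-c']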
3,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
License
Copyright 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions
Step1: Load and prepare data for modeling
Step2: LIME is simpler to use with data containing no missing values
Step3: LIME can be unstable with data in which strong correlations exist between input variables
Step4: Remove one var from each correlated pair
Step5: Train a predictive model
Step6: Build local linear surrogate models to help interpret the model
Create a local region based on HouseStyle
Step7: Train penalized linear model in local region
Check R<sup>2</sup> to ensure surrogate model is a good fit for predictions
Use ranked predictions plot to ensure surrogate model is a good fit for predictions
Use trained GLM and coefficients to understand local region of response function
Step8: A ranked predictions plot is a way to visually check whether the surrogate model is a good fit for the complex model. The y-axis is the numeric prediction of both models for a given point. The x-axis is the rank of a point when the predictions are sorted by their GBM prediction, from lowest on the left to highest on the right. When both sets of predictions are aligned, as they are above, this is a good indication that the linear model fits the complex, nonlinear GBM well in the approximately local region.
Both the R<sup>2</sup> and ranked predictions plot show the linear model is a good fit in the practical, approximately local sample. This means the regression coefficients are likely a very accurate representation of the behavior of the nonlinear model in this region.
Create explanations (or 'reason codes') for a row in the local set
The local GLM coefficients multiplied by the values in a specific row are estimates of how much each variable contributed to that prediction decision. These values can tell you how a variable and its values were weighted in any given decision by the model. These values are crucially important for machine learning interpretability and are often referred to as "local feature importance", "reason codes", or "turn-down codes." The latter phrases are borrowed from credit scoring. Credit lenders must provide reasons for turning down a credit application, even for automated decisions. Reason codes can be easily extracted from LIME local feature importance values by simply ranking the variables that played the largest role in any given decision.
Step9: Create a local region based on predicted SalePrice quantiles
Step10: Train penalized linear model in local region
Step11: Here the R<sup>2</sup> and ranked predictions plot show a slightly less accurate fit in the local sample. So the regression coefficients and reason codes may be a bit more approximate than those in the first example.
Create explanations (or 'reason codes') for a row in the local set
Step12: Shutdown H2O | Python Code:
# imports
import h2o
import operator
import numpy as np
import pandas as pd
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.estimators.gbm import H2OGradientBoostingEstimator
# start h2o
h2o.init()
h2o.remove_all()
Explanation: License
Copyright 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Local Interpretable Model Agnostic Explanations (LIME)
Based on: Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should i trust you?: Explaining the predictions of any classifier." In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144. ACM, 2016.
http://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf
Instead of perturbing a sample of interest to create a local region in which to fit a linear model, some of these examples use a practical sample, say all one story homes, from the data to create an approximately local region in which to fit a linear model. That model can be validated and the region examined to explain local prediction behavior.
Preliminaries: imports, start h2o, load and clean data
End of explanation
# load data
path = '../../03_regression/data/train.csv'
frame = h2o.import_file(path=path)
# assign target and inputs
y = 'SalePrice'
X = [name for name in frame.columns if name not in [y, 'Id']]
Explanation: Load and prepare data for modeling
End of explanation
# determine column types
# impute
reals, enums = [], []
for key, val in frame.types.items():
if key in X:
if val == 'enum':
enums.append(key)
else:
reals.append(key)
_ = frame[reals].impute(method='median')
_ = frame[enums].impute(method='mode')
# split into training and validation
train, valid = frame.split_frame([0.7])
Explanation: LIME is simpler to use with data containing no missing values
End of explanation
# print out correlated pairs
corr = train[reals].cor().as_data_frame()
for i in range(0, corr.shape[0]):
for j in range(0, corr.shape[1]):
if i != j:
if np.abs(corr.iat[i, j]) > 0.7:
print(corr.columns[i], corr.columns[j])
Explanation: LIME can be unstable with data in which strong correlations exist between input variables
End of explanation
X_reals_decorr = [i for i in reals if i not in ['GarageYrBlt', 'TotRmsAbvGrd', 'TotalBsmtSF', 'GarageCars']]
Explanation: Remove one var from each correlated pair
End of explanation
# train GBM model
model = H2OGradientBoostingEstimator(ntrees=100,
max_depth=10,
distribution='huber',
learn_rate=0.1,
stopping_rounds=5,
seed=12345)
model.train(y=y, x=X_reals_decorr, training_frame=train, validation_frame=valid)
preds = valid['Id'].cbind(model.predict(valid))
Explanation: Train a predictive model
End of explanation
local_frame = preds.cbind(valid.drop(['Id']))
local_frame = local_frame[local_frame['HouseStyle'] == '1Story']
local_frame['predict'] = local_frame['predict'].log()
local_frame.describe()
Explanation: Build local linear surrogate models to help interpret the model
Create a local region based on HouseStyle
End of explanation
%matplotlib inline
# initialize
local_glm = H2OGeneralizedLinearEstimator(lambda_search=True)
# train
local_glm.train(x=X_reals_decorr, y='predict', training_frame=local_frame)
# coefs
print('\nLocal GLM Coefficients:')
for c_name, c_val in sorted(local_glm.coef().items(), key=operator.itemgetter(1)):
if c_val != 0.0:
print('%s %s' % (str(c_name + ':').ljust(25), c_val))
# r2
print('\nLocal GLM R-square:\n%.2f' % local_glm.r2())
# ranked predictions plot
pred_frame = local_frame.cbind(local_glm.predict(local_frame))\
.as_data_frame()[['predict', 'predict0']]
pred_frame.columns = ['ML Preds.', 'Surrogate Preds.']
pred_frame.sort_values(by='ML Preds.', inplace=True)
pred_frame.reset_index(inplace=True, drop=True)
_ = pred_frame.plot(title='Ranked Predictions Plot')
Explanation: Train penalized linear model in local region
Check R<sup>2</sup> to ensure surrogate model is a good fit for predictions
Use ranked predictions plot to ensure surrogate model is a good fit for predictions
Use trained GLM and coefficients to understand local region of response function
End of explanation
row = 20 # select a row to describe
local_contrib_frame = pd.DataFrame(columns=['Name', 'Local Contribution', 'Sign'])
# multiply values in row by local glm coefficients
for name in local_frame[row, :].columns:
contrib = 0.0
try:
contrib = local_frame[row, name]*local_glm.coef()[name]
except:
pass
if contrib != 0.0:
local_contrib_frame = local_contrib_frame.append({'Name':name,
'Local Contribution': contrib,
'Sign': contrib > 0},
ignore_index=True)
# plot
_ = local_contrib_frame.plot(x = 'Name',
y = 'Local Contribution',
kind='bar',
title='Local Contributions for Row ' + str(row) + '\n',
color=local_contrib_frame.Sign.map({True: 'r', False: 'b'}),
legend=False)
Explanation: A ranked predictions plot is a way to visually check whether the surrogate model is a good fit for the complex model. The y-axis is the numeric prediction of both models for a given point. The x-axis is the rank of a point when the predictions are sorted by their GBM prediction, from lowest on the left to highest on the right. When both sets of predictions are aligned, as they are above, this is a good indication that the linear model fits the complex, nonlinear GBM well in the approximately local region.
Both the R<sup>2</sup> and ranked predictions plot show the linear model is a good fit in the practical, approximately local sample. This means the regression coefficients are likely a very accurate representation of the behavior of the nonlinear model in this region.
Create explanations (or 'reason codes') for a row in the local set
The local GLM coefficients multiplied by the values in a specific row are estimates of how much each variable contributed to that prediction decision. These values can tell you how a variable and its values were weighted in any given decision by the model. These values are crucially important for machine learning interpretability and are often referred to as "local feature importance", "reason codes", or "turn-down codes." The latter phrases are borrowed from credit scoring. Credit lenders must provide reasons for turning down a credit application, even for automated decisions. Reason codes can be easily extracted from LIME local feature importance values by simply ranking the variables that played the largest role in any given decision.
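A minimal sketch of turning those contributions into ranked reason codes, reusing the local_contrib_frame built in the cell above (plain pandas, no additional H2O calls assumed):
reason_codes = local_contrib_frame.reindex(
    local_contrib_frame['Local Contribution'].abs().sort_values(ascending=False).index)
print(reason_codes[['Name', 'Local Contribution']].head(3))   # top-3 reason codes for this row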
End of explanation
local_frame = preds.cbind(valid.drop(['Id'])).as_data_frame()
local_frame.sort_values('predict', axis=0, inplace=True)
local_frame = local_frame.iloc[0: local_frame.shape[0]//10, :]
local_frame = h2o.H2OFrame(local_frame)
local_frame['predict'] = local_frame['predict'].log()
local_frame.describe()
Explanation: Create a local region based on predicted SalePrice quantiles
End of explanation
# initialize
local_glm = H2OGeneralizedLinearEstimator(lambda_search=True)
# train
local_glm.train(x=X_reals_decorr, y='predict', training_frame=local_frame)
# ranked predictions plot
pred_frame = local_frame.cbind(local_glm.predict(local_frame))\
.as_data_frame()[['predict', 'predict0']]
pred_frame.columns = ['ML Preds.', 'Surrogate Preds.']
pred_frame.sort_values(by='ML Preds.', inplace=True)
pred_frame.reset_index(inplace=True, drop=True)
_ = pred_frame.plot(title='Ranked Predictions Plot')
# r2
print('\nLocal GLM R-square:\n%.2f' % local_glm.r2())
# coefs
print('\nLocal GLM Coefficients:')
for c_name, c_val in sorted(local_glm.coef().items(), key=operator.itemgetter(1)):
if c_val != 0.0:
print('%s %s' % (str(c_name + ':').ljust(25), c_val))
Explanation: Train penalized linear model in local region
End of explanation
row = 30 # select a row to describe
local_contrib_frame = pd.DataFrame(columns=['Name', 'Local Contribution', 'Sign'])
# multiply values in row by local glm coefficients
for name in local_frame[row, :].columns:
contrib = 0.0
try:
contrib = local_frame[row, name]*local_glm.coef()[name]
except:
pass
if contrib != 0.0:
local_contrib_frame = local_contrib_frame.append({'Name':name,
'Local Contribution': contrib,
'Sign': contrib > 0},
ignore_index=True)
# plot
_ = local_contrib_frame.plot(x = 'Name',
y = 'Local Contribution',
kind='bar',
title='Local Contributions for Row ' + str(row) + '\n',
color=local_contrib_frame.Sign.map({True: 'r', False: 'b'}),
legend=False)
Explanation: Here the R<sup>2</sup> and ranked predictions plot show a slightly less accurate fit in the local sample. So the regression coefficients and reason codes may be a bit more approximate than those in the first example.
Create explanations (or 'reason codes') for a row in the local set
End of explanation
h2o.cluster().shutdown(prompt=True)
Explanation: Shutdown H2O
End of explanation |
3,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Neural Networks
Handwritten Digits Recognition
<img src = './img/neuralnetwork/neuron2.png' width = 700 align = 'left'>
<img src = './img/neuralnetwork/synapse.jpg' width = 150 align = 'right'>
The Neuron
Step1: Each iteration of the training process consists of the following steps
Step2: 1. Loading data
Step3: 2. Neural Network (using numpy)
Step4: 3. Randomly initializing weights
Step5: 4. Forwarding
Relu activition function
<img src = './img/neuralnetwork/relu2.jpg' width = '800'>
$$softmax = \frac{e^x}{\sum e^x}$$
Step6: 5. Loss & Backpropagation
Step7: Our goal in training is to find the best set of weights and biases that minimizes the loss function.
In order to know the appropriate amount to adjust the weights and biases by,
we need to know the derivative of the loss function
with respect to the weights and biases.
<img src = './img/neuralnetwork/graph3.jpeg' width = '900'>
Step8: 6. Put Together
Step9: Visualization
3.1 Matrices as manifold translators
before and after training
Step10: <img src = "./img/neuralnetwork/space2.jpeg" width=300 align = 'center'>
<img src = "./img/neuralnetwork/space.jpg" width=600 align = 'center'>
https
Step11: 3.3 Alternative manifold learning methods
Step12: Neural Network (using PyTorch) | Python Code:
1*0.25 + 0.5*(-1.5)
Explanation: Intro to Neural Networks
Handwritten Digits Recognition
<img src = './img/neuralnetwork/neuron2.png' width = 700 align = 'left'>
<img src = './img/neuralnetwork/synapse.jpg' width = 150 align = 'right'>
The Neuron: A Biological Information Processor
dendrites - the receivers
soma - neuron cell body (sums input signals)
axon - the transmitter
synapse - point of transmission
neuron activates after a certain threshold is met
Learning occurs via electro-chemical changes in effectiveness of synaptic junction.
<img src = "./img/neuralnetwork/layer.png" width=200 align = 'right'>
An Artificial Neuron: The Perceptron simulated on hardware or by software
- input connections - the receivers
- node, unit, or PE simulates neuron body
- output connection - the transmitter
- activation function employs a threshold or bias
- connection weights act as synaptic junctions
Learning occurs via changes in value of the connection weights.
<img src = "./img/neuralnetwork/layer.png" width=200 align = 'right'>
Neural Networks consist of the following components
An input layer, x
An arbitrary amount of hidden layers
An output layer, ŷ
A set of weights and biases between each layer, W and b
A choice of activation function for each hidden layer, σ.
e.g., Sigmoid activation function.
ANNs incorporate the two fundamental components of biological neural nets:
Neurones (nodes)
Synapses (weights)
<img src = './img/neuralnetwork/net2.png' width = 400>
End of explanation
# Author: Robert Guthrie
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import sys
import matplotlib.cm as cm
import networkx as nx
import numpy as np
import pylab as plt
import matplotlib as mpl
from collections import defaultdict
from matplotlib.collections import LineCollection
%matplotlib inline
from sklearn import datasets
from sklearn.manifold import Isomap
from sklearn.manifold import TSNE
from sklearn.manifold import MDS
from sklearn.decomposition import PCA
Explanation: Each iteration of the training process consists of the following steps:
Calculating the predicted output ŷ, known as feedforward
Updating the weights and biases, known as backpropagation
<img src = "./img/neuralnetwork/sequence.png" width=1000>
activation function for each hidden layer, σ.
The output ŷ of a simple 2-layer Neural Network is:
$$ \widehat{y} = \sigma (w_2 z + b_2) = \sigma(w_2 \sigma(w_1 x + b_1) + b_2)$$
Loss or Cost function
\begin{eqnarray} C(w,b) \equiv
\frac{1}{2n} \sum_x \| y(x) - a\|^2.
\tag{6}\end{eqnarray}
Here, w denotes the collection of all weights in the network, b all the biases, n is the total number of training inputs, a is the vector of outputs from the network when x is input, and the sum is over all training inputs, x.
Chain rule for calculating derivative of the loss function with respect to the weights.
<img src = './img/neuralnetwork/chain.png' width = '500'>
Note that for simplicity, we have only displayed the partial derivative assuming a 1-layer Neural Network.
Gradient Descent
<img src ='./img/neuralnetwork/gradient.png' width = 600 align = 'center'>
<img src ='./img/neuralnetwork/local.png' width = 800 align = 'center'>
Gradient descent: every update sweeps the full dataset to compute the loss once, so it is slow.
Stochastic gradient descent: much faster, but its convergence is noisier.
Mini-batch gradient descent: split the data into batches and update the parameters batch by batch; the samples in a batch jointly determine the gradient direction, which reduces the randomness.
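A self-contained toy illustration of the mini-batch style (not from the original notebook; the linear model and data below are made up for demonstration):
import numpy as np
X = np.random.randn(200, 3); true_w = np.array([1.0, -2.0, 0.5]); y = X.dot(true_w)
w, lr = np.zeros(3), 0.1
for epoch in range(50):
    for start in range(0, len(X), 20):                  # batches of 20 rows -> mini-batch GD
        xb, yb = X[start:start+20], y[start:start+20]
        grad = 2 * xb.T.dot(xb.dot(w) - yb) / len(xb)   # all rows -> batch GD; one row -> SGD
        w -= lr * grad
print(w)   # approaches true_w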
Understanding the Mathematics behind Gradient Descent
A simple mathematical intuition behind one of the commonly used optimisation algorithms in Machine Learning.
https://www.douban.com/note/713353797/
Linear neural networks: A simple case
<img src='./img/neuralnetwork/backpropagation1.png' width = '400' align = 'right'>
the output signal is created by
- summing up all the weighted input.
- No activation function will be applied.
<img src='./img/neuralnetwork/backpropagation2.png' width = '400' align = 'right'>
The error is the difference between the target and the actual output:
$e_i = \frac{1}{2} ( t_i - o_i ) ^ 2$
$e_1 = t_1 - o_1 = 1 - 0.92 = 0.08$
Depending on this error, we have to change the weights accordingly.
- we can calculate the fraction of the error e1 in w11 as:
- $e_1 \cdot \frac{w_{11}}{\sum_{i=1}^{4} w_{i1}} = 0.08 \cdot \frac{0.6}{0.6 + 0.4 + 0.1 + 0.2} = 0.037$
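A one-line check of that arithmetic (illustrative only):
print(0.08 * 0.6 / (0.6 + 0.4 + 0.1 + 0.2))   # 0.0369..., i.e. roughly 0.037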
Weight update
http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html
<img src='./img/neuralnetwork/img02.gif' width = '400' align = 'center'>
<img src='./img/neuralnetwork/img14.gif' width = '400' align = 'center'>
<img src='./img/neuralnetwork/img17.gif' width = '400' align = 'center'>
<img src='./img/neuralnetwork/img19.gif' width = '400' align = 'center'>
We have known that $E = \sum_{j=1}^{n} \frac{1}{2} (t_j - o_j)^2$
And given $t_j$ is a constant, we have:
$\frac{\partial E}{\partial o_{j}} = t_j - o_j$
Apply the chain rule for the differentiation:
$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial o_{j}} \cdot \frac{\partial o_j}{\partial w_{ij}}$
$\frac{\partial E}{\partial w_{ij}} = (t_j - o_j) \cdot \frac{\partial o_j}{\partial w_{ij}} $
Further, we often use the sigmoid function as the activation function $\sigma(x) = \frac{1}{1+e^{-x}}$
Given $o_j = \sigma(\sum_{i=1}^{m} w_{ij}h_i)$, we have
$\frac{\partial E}{\partial w_{ij}} = (t_j - o_j) \cdot \frac{\partial }{\partial w_{ij}} \sigma(\sum_{i=1}^{m} w_{ij}h_i)$
And it is easy to differentiate: $\frac{\partial \sigma(x)}{\partial x} = \sigma(x) \cdot (1 - \sigma(x))$
$\frac{\partial E}{\partial w_{ij}} = (t_j - o_j) \cdot \sigma(\sum_{i=1}^{m} w_{ij}h_i) \cdot (1 - \sigma(\sum_{i=1}^{m} w_{ij}h_i)) \frac{\partial }{\partial w_{ij}} \sum_{i=1}^{m} w_{ij}h_i$
$\frac{\partial E}{\partial w_{ij}} = (t_j - o_j) \cdot \sigma(\sum_{i=1}^{m} w_{ij}h_i) \cdot (1 - \sigma(\sum_{i=1}^{m} w_{ij}h_i)) \cdot h_i$
<img src = './img/neuralnetwork/graph3.jpeg' width = '900'>
Handwritten Digit Recognition
https://github.com/lingfeiwu/people2vec
<img src = "./img/neuralnetwork/digits.png" width=200 align = 'left'>
<img src = "./img/neuralnetwork/net.jpeg" width=400 align = 'right'>
Each image has 8*8 = 64 pixels
- input = 64
- [0, 0, 1, 0, ..., 0]
- batch size = 100
- split data into 100 batches.
- hidden neurons = 50
- output = 10
- using relu activation function
<img src = "./img/neuralnetwork/tensor2.jpeg" width=400 align = 'right'>
Set batch_size = 100 images
Given each image 64 pixels
input_matrix = 100*64
Set #neurons= 50
w1 = 64*50
hidden_matrix = 100*50
Given #output = 10
w2 = 50*10
output = 100*10
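A quick dimension check of this pipeline; the zero-filled arrays below are stand-ins used only to confirm the shapes listed above:
import numpy as np
x = np.zeros((100, 64)); w1 = np.zeros((64, 50)); w2 = np.zeros((50, 10))
h = np.maximum(x.dot(w1), 0)
print(h.shape, h.dot(w2).shape)   # (100, 50) (100, 10)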
End of explanation
#basic functions
# softmax
def softmax(x):
e_x = np.exp(x - np.max(x)) # to avoid inf
return e_x / e_x.sum(axis=0)
def softmaxByRow(x):
e_x = np.exp(x - x.max(axis=1, keepdims=True))
return e_x / e_x.sum(axis=1, keepdims=True)
# flush print
def flushPrint(d):
sys.stdout.write('\r')
sys.stdout.write(str(d))
sys.stdout.flush()
# the limits of np.exp
np.exp(1000)
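# note: np.exp(1000) overflows to inf (with a RuntimeWarning); subtracting the max (softmax above)
# or the row-wise max (softmaxByRow) keeps every exponent argument <= 0 and avoids this overflow.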
# load data
digits = datasets.load_digits()
# display data
fig, ax = plt.subplots(5, 5, figsize=(5, 5))
for i, axi in enumerate(ax.flat):
axi.imshow(digits.images[i], cmap='binary')
axi.set(xticks=[], yticks=[])
Explanation: 1. Loading data
End of explanation
# prepare training sets
N, H, D_in, D_out = 100, 50, 64, 10 # batch size, hidden, input, output dimension
k = 0.9 # the fraction of training data
learning_rate = 1e-6
L = len(digits.data)
l = int(L*k)
L, l
Batches = {}
M = 200 # number of batches
for j in range(M):
index=list(np.random.randint(l, size=N)) # randomly sample N data points
y = np.zeros((N, 10))
y[np.arange(N), list(digits.target[index])] = 1
x=digits.data[index]
Batches[j]=[x,y]
j = 7
x, y = Batches[j]
plt.imshow(x, cmap = 'binary') # 100*64
plt.show()
plt.imshow(y, cmap = 'binary') # 100*10
plt.show()
Explanation: 2. Neural Network (using numpy)
End of explanation
w1 = np.random.randn(D_in, H)/H
w2 = np.random.randn(H, D_out)/H
w1c = w1.copy() # for comparison in viz
w2c = w2.copy()
plt.imshow(w1, cmap = 'binary') # 64*50
plt.title('w1', fontsize = 20)
plt.show()
plt.imshow(x.dot(w1), cmap = 'binary') # 100*50
plt.title('h', fontsize = 20)
plt.show()
Explanation: 3. Randomly initializing weights
End of explanation
h = x.dot(w1)
# relu activation
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
plt.imshow(y_pred, cmap = 'binary') # 100*10
plt.title('predicted_relu', fontsize = 20)
plt.show()
# softmax
y_predS=softmaxByRow(y_pred)
plt.imshow(y_predS, cmap = 'binary') # 100*10
plt.title('predicted_softmax', fontsize = 20)
plt.show()
plt.plot(y_pred[0], 'r-o')
plt.plot(y_predS[0], 'g-s')
plt.show()
Explanation: 4. Forwarding
Relu activition function
<img src = './img/neuralnetwork/relu2.jpg' width = '800'>
$$softmax = \frac{e^x}{\sum e^x}$$
End of explanation
Loss=defaultdict(lambda:[])
loss = np.square(y_predS - y).sum()
t = 0 # step index for this one-off illustration (the full training loop below defines t properly)
Loss[j].append([t,loss])
Loss.items()
Explanation: 5. Loss & Backpropagation
End of explanation
# Backprop
grad_y_pred = 2.0 * (y_predS - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h_relu = grad_y_pred.dot(w2.T)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
fig = plt.figure(figsize=(8, 3))
ax=fig.add_subplot(121)
plt.imshow(w1, cmap = 'binary') # 64*50
plt.title('w1_updated', fontsize = 20)
ax=fig.add_subplot(122)
plt.imshow(w2, cmap = 'binary') # 64*50
plt.title('w2_updated', fontsize = 20)
plt.tight_layout()
Explanation: Our goal in training is to find the best set of weights and biases that minimizes the loss function.
In order to know the appropriate amount to adjust the weights and biases by,
we need to know the derivative of the loss function
with respect to the weights and biases.
<img src = './img/neuralnetwork/graph3.jpeg' width = '900'>
End of explanation
w1 = np.random.randn(D_in, H)/H
w2 = np.random.randn(H, D_out)/H
w1c = w1.copy() # for comparison in viz
w2c = w2.copy()
Loss=defaultdict(lambda:[])
# training
for j in Batches:
flushPrint(j)
x,y=Batches[j]
for t in range(500):# repeated use of the same batch
# Forward
h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
y_predS=softmaxByRow(y_pred)
# loss
loss = np.square(y_predS - y).sum()
Loss[j].append([t,loss])
# Backprop
grad_y_pred = 2.0 * (y_predS - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h_relu = grad_y_pred.dot(w2.T)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
fig = plt.figure(figsize=(8, 3))
ax=fig.add_subplot(121)
plt.imshow(w1, cmap = 'binary') # 64*50
plt.title('w1_updated', fontsize = 20)
ax=fig.add_subplot(122)
plt.imshow(w2, cmap = 'binary') # 64*50
plt.title('w2_updated', fontsize = 20)
plt.tight_layout()
# Display the decreasing loss
fig = plt.figure(figsize=(5, 4))
cmap = cm.get_cmap('rainbow',M)
for i in Loss:
epochs,loss=zip(*sorted(Loss[i]))
plt.plot(epochs,loss,color=cmap(i),alpha=0.7)
plt.xlabel('Epochs',fontsize=18)
plt.ylabel('Loss',fontsize=18)
ax1 = fig.add_axes([0.2, 0.8, 0.65, 0.03])
cb1 = mpl.colorbar.ColorbarBase(ax1, cmap=cmap,
norm=mpl.colors.Normalize(vmin=0, vmax=M),
orientation='horizontal')
cb1.set_label('N of batches')
# Test
TestData=digits.data[-(L-l):]
PredictData=np.maximum(TestData.dot(w1),0).dot(w2)
compare=np.argmax(PredictData,axis=1)-digits.target[-(L-l):]
Accuracy=list(compare).count(0)/float(len(compare))
Accuracy
Explanation: 6. Put Together
End of explanation
cmap=plt.cm.get_cmap('Accent', 10)
fig = plt.figure(figsize=(12, 4))
fig.add_subplot(141)
plt.imshow(w1c,cmap='Blues')
plt.title('w1 before training')
fig.add_subplot(142)
plt.imshow(w1,cmap='Blues')
plt.title('w1 after training')
fig.add_subplot(143)
plt.imshow(w2c,cmap='Blues')
plt.title('w2 before training')
fig.add_subplot(144)
plt.imshow(w2,cmap='Blues')
plt.title('w2 after training')
plt.tight_layout()
# dimension reduction for viz
pca = PCA(n_components=2)
projectionPixel = pca.fit_transform(w1) # 64*2
projectionLabel = pca.fit_transform(w2.T) # 10*2
G=nx.grid_2d_graph(8,8)
pos=dict(((i,j),(i,j)) for i,j in G.nodes())
index=sorted(pos.keys())
posPixel=dict(zip(index, projectionPixel))
Explanation: Visualization
3.1 Matrices as manifold translators
before and after training
End of explanation
fig = plt.figure(figsize=(16, 8))
#
ax=fig.add_subplot(241)
data3_1=digits.images[3]
x,y,z=zip(*[(i,j,data3_1[i,j]) for i,j in index])
#nx.draw_networkx_edges(G,pos=pos,color='gray',alpha=0.5)
plt.scatter(y,x,s=100,c=z,cmap='binary')
plt.imshow(data3_1,cmap='Blues')
#
ax=fig.add_subplot(245)
line_segments = LineCollection([[posPixel[i],posPixel[j]] for i,j in G.edges()],\
color='gray',zorder=1)
ax.add_collection(line_segments)
x,y,z=zip(*[(projectionPixel[n][0],projectionPixel[n][1],data3_1[xy[0],xy[1]]) \
for n,xy in enumerate(index)])
plt.scatter(x,y,s=100,c=z,cmap='binary',zorder=2)
#
ax=fig.add_subplot(242)
data3_2=digits.images[13]
x,y,z=zip(*[(i,j,data3_2[i,j]) for i,j in index])
#nx.draw_networkx_edges(G,pos=pos,color='gray',alpha=0.5)
plt.scatter(y,x,s=100,c=z,cmap='binary')
plt.imshow(data3_2,cmap='Blues')
#
ax=fig.add_subplot(246)
line_segments = LineCollection([[posPixel[i],posPixel[j]] for i,j in G.edges()],\
color='gray',zorder=1)
ax.add_collection(line_segments)
x,y,z=zip(*[(projectionPixel[n][0],projectionPixel[n][1],data3_2[xy[0],xy[1]]) \
for n,xy in enumerate(index)])
plt.scatter(x,y,s=100,c=z,cmap='binary',zorder=2)
#
ax=fig.add_subplot(243)
data4_1=digits.images[4]
x,y,z=zip(*[(i,j,data4_1[i,j]) for i,j in index])
#nx.draw_networkx_edges(G,pos=pos,color='gray',alpha=0.5)
plt.scatter(y,x,s=100,c=z,cmap='binary')
plt.imshow(data4_1,cmap='Blues')
#
#
ax=fig.add_subplot(247)
line_segments = LineCollection([[posPixel[i],posPixel[j]] for i,j in G.edges()],\
color='gray',zorder=1)
ax.add_collection(line_segments)
x,y,z=zip(*[(projectionPixel[n][0],projectionPixel[n][1],data4_1[xy[0],xy[1]]) \
for n,xy in enumerate(index)])
plt.scatter(x,y,s=100,c=z,cmap='binary',zorder=2)
#
ax=fig.add_subplot(244)
data4_2=digits.images[14]
x,y,z=zip(*[(i,j,data4_2[i,j]) for i,j in index])
#nx.draw_networkx_edges(G,pos=pos,color='gray',alpha=0.5)
plt.scatter(y,x,s=100,c=z,cmap='binary')
plt.imshow(data4_2,cmap='Blues')
#
#
ax=fig.add_subplot(248)
line_segments = LineCollection([[posPixel[i],posPixel[j]] for i,j in G.edges()],\
color='gray',zorder=1)
ax.add_collection(line_segments)
x,y,z=zip(*[(projectionPixel[n][0],projectionPixel[n][1],data4_2[xy[0],xy[1]]) \
for n,xy in enumerate(index)])
plt.scatter(x,y,s=100,c=z,cmap='binary',zorder=2)
#
plt.tight_layout()
plt.show()
Explanation: <img src = "./img/neuralnetwork/space2.jpeg" width=300 align = 'center'>
<img src = "./img/neuralnetwork/space.jpg" width=600 align = 'center'>
https://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html
End of explanation
# dimension reduction for viz
pca = PCA(n_components=2)
iso = Isomap(n_components=2)
tsne = TSNE(n_components=2)
mds = MDS(n_components=2)
#
encodeData = digits.data.dot(w1)
projection0 = pca.fit_transform(digits.data)
projection1 = pca.fit_transform(encodeData)
projection2 = mds.fit_transform(digits.data)
projection3 = iso.fit_transform(digits.data)
projection4 = tsne.fit_transform(digits.data)
#
targ = np.zeros((len(digits.target), 10))
targ[np.arange(len(digits.target)), list(digits.target)] = 1
encodeTarget = w2.dot(targ.T).T
projection11 = pca.fit_transform(encodeTarget)
# viz
cmap=plt.cm.get_cmap('Accent', 10)
fig = plt.figure(figsize=(12, 8))
#
def viz(projection,ax,title):
plt.scatter(projection[:, 0], projection[:, 1],lw=0,s=6,c=digits.target, cmap=cmap)
#plt.colorbar(ticks=range(10), label='digit value')
plt.title(title)
    ax.set_facecolor('black')  # set_axis_bgcolor was removed in newer matplotlib
#
ax=fig.add_subplot(233)
xs,ys=np.round(projection11,3).T
for x,y,i in set(zip(xs, ys,digits.target)):
plt.scatter(x,y,alpha=0)
plt.text(x,y,str(i),color=cmap(i),size=18)
ax.set_facecolor('black')  # set_axis_bgcolor was removed in newer matplotlib
plt.title('PCA encode label (50D)')
#
viz(projection0,fig.add_subplot(231),'PCA raw data (64D)')
viz(projection1,fig.add_subplot(232),'PCA encode data (50D)')
#viz(projection11,fig.add_subplot(233),'PCA encode label (50D)')
viz(projection2,fig.add_subplot(234),'MDS raw data (64D)')
viz(projection3,fig.add_subplot(235),'Isomap raw data (64D)')
viz(projection4,fig.add_subplot(236),'TSNE raw data (64D)')
#
plt.tight_layout()
Explanation: 3.3 Alternative manifold learning methods
End of explanation
# initialize
dtype = torch.FloatTensor
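# note: Variable and the .data/.grad.data idioms below are legacy (pre-0.4) PyTorch; on modern
# versions the same loop works with plain tensors created with requires_grad=True, updating the
# weights inside a torch.no_grad() block and calling w.grad.zero_() afterwards.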
w1 = Variable(torch.randn(D_in, H).type(dtype)/H, requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(dtype)/H, requires_grad=True)
learning_rate = 1e-6
Loss=defaultdict(lambda:[])
# train
for j in Batches:
flushPrint(j)
x,y=Batches[j]
x = Variable(torch.from_numpy(x).type(dtype), requires_grad=False)
y = Variable(torch.from_numpy(y).type(dtype), requires_grad=False)
for t in range(500):
y_pred = x.mm(w1).clamp(min=0).mm(w2)
softmax = nn.Softmax(dim=1)
y_soft=softmax(y_pred)
loss = (y_soft - y).pow(2).sum()
Loss[j].append([t,loss.data.item()])
loss.backward()
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
w1.grad.data.zero_()
w2.grad.data.zero_()
# Display the decreasing loss
fig = plt.figure(figsize=(5, 4))
cmap = cm.get_cmap('rainbow',M)
for i in Loss:
epochs,loss=zip(*sorted(Loss[i]))
plt.plot(epochs,loss,color=cmap(i),alpha=0.7)
plt.xlabel('Epochs',fontsize=18)
plt.ylabel('Loss',fontsize=18)
ax1 = fig.add_axes([0.2, 0.8, 0.65, 0.03])
cb1 = mpl.colorbar.ColorbarBase(ax1, cmap=cmap,
norm=mpl.colors.Normalize(vmin=0, vmax=M),
orientation='horizontal')
cb1.set_label('N of batches')
TestData=digits.data[-(L-l):]
xTest = Variable(torch.from_numpy(TestData).type(dtype), requires_grad=False)
PredictData = xTest.mm(w1).clamp(min=0).mm(w2)
compare=np.argmax(PredictData.data.numpy(),axis=1)-digits.target[-(L-l):]
Accuracy=list(compare).count(0)/float(len(compare))
Accuracy
Explanation: Neural Network (using PyTorch)
End of explanation |
3,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Twitter API rate limits
Step1: Calculating remaining time (in mins) before api limits reset
I find it helpful to check how many more minutes I have to wait before trying again
Step2: Key function(s) of tweepy wrapper
These fields are stored in Neo4j (as properties) against a Person node. | Python Code:
### checking rate limit - friends list
limit = api.rate_limit_status()
limit['resources']['friends']['/friends/list']['remaining']
limit['resources']['friends']['/friends/list']
Explanation: Twitter API rate limits
End of explanation
import datetime as dt
given_date =dt.datetime.fromtimestamp(
int(limit['resources']['friends']['/friends/list']['reset'])
).strftime('%Y-%m-%d %H:%M:%S')
difference = dt.datetime.strptime(given_date, "%Y-%m-%d %H:%M:%S")-dt.datetime.today()
mins = max(difference.total_seconds(), 0) / 60  # total_seconds() avoids surprises if the reset time has already passed
print ("Minute(s) remaining: ", mins)
Explanation: Calculating remaining time (in mins) before api limits reset
I find it helpful to check how many more minutes I have to wait before trying again
End of explanation
# getting user details of a Person aka Narendra Modi
# any other Person details can be fetched as a starting point
suser = api.get_user('narendramodi') # this returns a User model
print (suser.screen_name)
print (suser.followers_count)
print (suser.friends_count)
print(suser.location)
print(suser.created_at)
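# note: this call pattern matches tweepy's v1.1 API wrapper; with tweepy 4.x / Twitter API v2 the
# rough equivalent is client.get_user(username='narendramodi') on a tweepy.Client instance.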
# get Narendra Modi's friends i.e. whom NM is following
friends = api.friends('narendramodi')
# defining a driver
driver = GraphDatabase.driver("bolt://localhost:7687", auth=basic_auth("neo4j", "welcome123"))
# initiating a server
session = driver.session()
#deleting ALL existing records
session.run("MATCH (n) DETACH DELETE n")
#adding a unique constraint; this ensures that the same Person is not added twice
session.run("CREATE CONSTRAINT ON (a:Person) ASSERT a.screen_name IS UNIQUE")
session.close()
# defining a driver
driver = GraphDatabase.driver("bolt://localhost:7687", auth=basic_auth("neo4j", "welcome123"))
session = driver.session()
#adding Narendra Modi's details
label =suser.screen_name
session.run("CREATE (label:Person {screen_name: {screen_name}, name: {name}, followers_count: {followers_count}, friends_count:{friends_count},location:{location}})",
{"name": suser.name, "screen_name": suser.screen_name, "followers_count":suser.followers_count,"friends_count": suser.friends_count,"location":suser.location }
)
session.close()
# defining a driver
driver = GraphDatabase.driver("bolt://localhost:7687", auth=basic_auth("neo4j", "welcome123"))
session = driver.session()
i =0
#adding all persons - whom narendra modi is following
for user in friends:
#checking if the user already exists
output = session.run("MATCH (a:Person) WHERE a.screen_name ={check_name} return a.screen_name",
{"check_name":user.screen_name})
exists =''
for exists in output:
True
label =user.screen_name
#adding the user if doesnt exist
if exists=='':
session.run("CREATE (label:Person {screen_name: {screen_name}, name: {name}, followers_count: {followers_count}, friends_count:{friends_count},location:{location}})",
{"name": user.name, "screen_name": user.screen_name, "followers_count":user.followers_count,"friends_count": user.friends_count,"location":user.location }
)
else:
print('Person already exists: ',user.screen_name)
session.run("MATCH (a:Person),(b:Person) WHERE a.screen_name = {a_screen_name} AND b.screen_name = {b_screen_name} CREATE (a)-[r:FOLLOWING]->(b)",
{"a_screen_name":suser.screen_name, "b_screen_name":user.screen_name })
    #temporary: end the loop after 6 iterations so we don't exhaust Twitter API rate limits
i=i+1
if i==6:
break
#finding friends of Modi's friends
print ('sub_friends: '+ user.screen_name) #debugging why rate limits are getting exhausted
sub_friends = api.friends(user.screen_name)
#adding the friends of friends of Modi
for sub_user in sub_friends:
#checking if the sub_user already exists
output = session.run("MATCH (a:Person) WHERE a.screen_name ={check_name} return a.screen_name",
{"check_name":sub_user.screen_name})
exists =''
for exists in output:
True
label =sub_user.screen_name
#adding the user if doesnt exist
if exists=='':
session.run("CREATE (label:Person {screen_name: {screen_name}, name: {name}, followers_count: {followers_count}, friends_count:{friends_count},location:{location}})",
{"name": sub_user.name, "screen_name": sub_user.screen_name, "followers_count":sub_user.followers_count,"friends_count": sub_user.friends_count,"location":sub_user.location }
)
else:
print('Person already exists: ',sub_user.screen_name)
session.run("MATCH (a:Person),(b:Person) WHERE a.screen_name = {a_screen_name} AND b.screen_name = {b_screen_name} CREATE (a)-[r:FOLLOWING]->(b)",
{"a_screen_name":user.screen_name, "b_screen_name":sub_user.screen_name })
session.close()
Explanation: Key function(s) of tweepy wrapper
These fields are stored in Neo4j (as properties) against a Person node.
End of explanation |
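As an optional follow-up (a sketch only, assuming the same local Neo4j instance, credentials and neo4j-driver imports used in the cells above), the stored graph can be inspected with a short Cypher query:
# sketch: list a few of the FOLLOWING relationships that were just created
driver = GraphDatabase.driver("bolt://localhost:7687", auth=basic_auth("neo4j", "welcome123"))
session = driver.session()
result = session.run("MATCH (a:Person)-[:FOLLOWING]->(b:Person) RETURN a.screen_name AS src, b.screen_name AS dst LIMIT 10")
for record in result:
print(record["src"], '->', record["dst"])
session.close()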
3,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: And we'll attach some dummy datasets. See Datasets for more details.
Step3: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy
Step4: Using Alternate Backends
Adding Compute Options
Adding a set of compute options, via b.add_compute for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
Step5: Running Compute
Nothing changes when calling b.run_compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law. We can do this for all passband-component combinations by using set_value_all.
Step6: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
Step7: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them. | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: Advanced: Alternate Backends
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('orb', times=np.linspace(0,10,1000), dataset='orb01', component=['primary', 'secondary'])
b.add_dataset('lc', times=np.linspace(0,10,1000), dataset='lc01')
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
b.add_compute('legacy', compute='legacybackend')
print(b.get_compute('legacybackend'))
Explanation: Available Backends
See the Compute Tutorial for details on adding compute options and using them to create synthetic models.
PHOEBE 1.0 Legacy
For more details, see Comparing PHOEBE 2.0 vs PHOEBE Legacy
End of explanation
b.add_compute('phoebe', compute='phoebebackend')
print(b.get_compute('phoebebackend'))
Explanation: Using Alternate Backends
Adding Compute Options
Adding a set of compute options, via b.add_compute for an alternate backend is just as easy as for the PHOEBE backend. Simply provide the function or name of the function in phoebe.parameters.compute that points to the parameters for that backend.
Here we'll add the default PHOEBE backend as well as the PHOEBE 1.0 (legacy) backend. Note that in order to use an alternate backend, that backend must be installed on your machine.
End of explanation
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.run_compute('legacybackend', model='legacyresults')
Explanation: Running Compute
Nothing changes when calling b.run_compute - simply provide the compute tag for those options. Do note, however, that not all backends support all dataset types.
But, since the legacy backend doesn't support ck2004 atmospheres and interpolated limb-darkening, we do need to choose a limb-darkening law. We can do this for all passband-component combinations by using set_value_all.
End of explanation
b.set_value_all('enabled@lc01@phoebebackend', False)
#b.set_value_all('enabled@orb01@legacybackend', False) # don't need this since legacy NEVER computes orbits
print(b.filter(qualifier='enabled'))
b.run_compute(['phoebebackend', 'legacybackend'], model='mixedresults')
Explanation: Running Multiple Backends Simultaneously
Running multiple backends simultaneously is just as simple as running the PHOEBE backend with multiple sets of compute options (see Compute).
We just need to make sure that each dataset is only enabled for one (or none) of the backends that we want to use, and then send a list of the compute tags to run_compute. Here we'll use the PHOEBE backend to compute orbits and the legacy backend to compute light curves.
End of explanation
print(b['mixedresults'].computes)
b['mixedresults@phoebebackend'].datasets
b['mixedresults@legacybackend'].datasets
Explanation: The parameters inside the returned model even remember which set of compute options (and therefore, in this case, which backend) were used to compute them.
End of explanation |
3,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Without getting too detailed Lab is about a simple technique for either increasing or decreasing the playing time of a sound array, without altering the sound's pitch. Increasing the rate refers to decreasing the playing time, while decreasing the rate implies increasing the playing time. Both operations can be solved in an approximate way by butt splicing speech segments of say 45 ms or so. Butt splicing in DSP is analogous to adding or removing segments of magnetic recording tape by end-to-end taping them together to form a new edited tape. Butt splicing of speech sequences thus implies that no transition smoothing is used, just a hard end-to-end connection.
To make a recording of your own speech you will use PyAudio via the module pyaudio_helper. The configuration of PyAudio currently implemented in pyaudio_helper is single channel input/output (I/O) using a callback function. The callback function insure that the audio processing in non-blocking relative to other operations the PC might be trying to deal with, e.g., responding to mouse moves, etc.
Step1: Begin by Making a Short Audio Clip
Make a simple recording to capture some speech for further processing.
Step2: Playback Using the Audio Control
Playback your recording or optionally load a saved .wav file, such as speech.wav.
Step3: Playback and Loop the Recording
Using the PyAudio interface you can turn around and play an array to an audio output device. e.g., your PC speakers. Optionally you can take the array and create a looped array that repeat itself assuming the original length is less than the output stream time. First load an array from a wave file, define an output stream callback, create a loop_audio() object, create a DSP_io() stream object, and finally call the stream() method providing a play time in s.
Step4: Develop the Decrease Playback Time Code Here
The objective is decrease the playback time by a factor of 2 without pitch shifting. To decrease the playback time, yet retain the proper pitch, all we need do is to periodically remove short segments of the original speech vector, butt splice the remaining pieces back together, then play it back at the original recording rate. If the pattern is save 45 ms, discard 45 ms, save 45 ms, etc., the new sound vector will be half as long as the original, thus it will play in half the time. A graphical description of the operation in terms of Python ndarrays is shown below.
The segment length may need to be adjusted for best sound quality. Note I recommend the option order=’F’, that is column-major ordering as found in Fortran arrays, is taken in the reshape.
Note
Step5: Hints
Step6: Eventually playback (class solution)
Step7: Develop the Increase Playback Time Code Here
The objective is increase the playback time by a factor of 2 without pitch shifting. To increase the playback time, yet retain the proper pitch, all we need do is to periodically repeat short segments of the original speech vector, again using a butt splicing technique, then play it back at the original recording rate. If the pattern is say 45 ms, repeat previous 45 ms, save next 45 ms, etc., the new sound vector will be twice as long as the original, thus it will play in twice the time.
Step8: Hints
Step9: Eventually playback (class solution) | Python Code:
Image('images/[email protected]',width='80%')
pah.available_devices()
Explanation: Introduction
Without getting too detailed, this lab is about a simple technique for either increasing or decreasing the playing time of a sound array without altering the sound's pitch. Increasing the rate refers to decreasing the playing time, while decreasing the rate implies increasing the playing time. Both operations can be solved in an approximate way by butt splicing speech segments of, say, 45 ms or so. Butt splicing in DSP is analogous to adding or removing segments of magnetic recording tape and taping the pieces together end-to-end to form a new edited tape. Butt splicing of speech sequences thus implies that no transition smoothing is used, just a hard end-to-end connection.
To make a recording of your own speech you will use PyAudio via the module pyaudio_helper. The configuration of PyAudio currently implemented in pyaudio_helper is single-channel input/output (I/O) using a callback function. The callback function ensures that the audio processing is non-blocking relative to other operations the PC might be trying to deal with, e.g., responding to mouse moves.
End of explanation
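Before any audio is involved, a tiny NumPy sketch (not part of the original lab) shows what a butt splice amounts to: two segments joined end-to-end with no smoothing.
import numpy as np
seg_a = np.arange(0, 5) # stand-in for one 45 ms chunk of speech samples
seg_b = np.arange(100, 105) # stand-in for the next chunk we keep
np.concatenate((seg_a, seg_b)) # hard end-to-end join, i.e. a butt splice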
# Here we configure the callback to capture a one channel input
def callback(in_data, frame_count, time_info, status):
# convert byte data to ndarray
in_data_nda = np.frombuffer(in_data, dtype=np.int16) # frombuffer replaces the deprecated binary-mode fromstring
x = in_data_nda.astype(float32)
# accumulate a new frame of samples
DSP_IO.DSP_capture_add_samples(x)
# The 0 below avoids unwanted feedback to your speakers (we are just capturing)
return ((0*x).astype(int16)).tobytes(), pah.pyaudio.paContinue
# FYI, the ms per speech frame
1024/8000*1000
#pah.DSP_io_stream(stream_callback, <== the name of the callback function
# in_idx=1, out_idx=4, <== valid input and output devices
# frame_length=1024, <== length of the frame in samples
# fs=44100, <== the sampling rate
# Tcapture=0, <== capture buffer length in s; 0 --> 'infinite'
# sleep_time=0.1) <== sleep time for the 'while loop'
DSP_IO = pah.DSP_io_stream(callback,0,1,fs=8000)
DSP_IO.stream(5)
Explanation: Begin by Making a Short Audio Clip
Make a simple recording to capture some speech for further processing.
End of explanation
# Save your recording to a wave file from the data capture buffer
ss.to_wav('my_speech.wav',8000,DSP_IO.data_capture/max(DSP_IO.data_capture))
# If need be reload it into array x
#fs, x = ss.from_wav('my_speech.wav')
Audio('my_speech.wav')
# Load one of several 8 ksps speech files
fs, x = ss.from_wav('speech.wav')
Audio('speech.wav')
Explanation: Playback Using the Audio Control
Playback your recording or optionally load a saved .wav file, such as speech.wav.
End of explanation
fs, x = ss.from_wav('speech.wav')
# Here we configure the callback to play back a wav file
def callback2(in_data, frame_count, time_info, status):
global x_loop
# Note wav is scaled to [-1,1] so need to rescale to int16
y = 32767*x_loop.get_samples(frame_count)
DSP_IO.DSP_capture_add_samples(y)
# Convert from float back to int16
y = y.astype(int16)
return y.tobytes(), pah.pyaudio.paContinue
x_loop = pah.loop_audio(x)
DSP_IO = pah.DSP_io_stream(callback2,0,1,fs=8000,Tcapture=1)
DSP_IO.stream(5)
Explanation: Playback and Loop the Recording
Using the PyAudio interface you can turn around and play an array to an audio output device. e.g., your PC speakers. Optionally you can take the array and create a looped array that repeat itself assuming the original length is less than the output stream time. First load an array from a wave file, define an output stream callback, create a loop_audio() object, create a DSP_io() stream object, and finally call the stream() method providing a play time in s.
End of explanation
Image('images/Speed_Up_Speech.png',width='80%')
Explanation: Develop the Decrease Playback Time Code Here
The objective is to decrease the playback time by a factor of 2 without pitch shifting. To decrease the playback time, yet retain the proper pitch, all we need do is periodically remove short segments of the original speech vector, butt splice the remaining pieces back together, then play it back at the original recording rate. If the pattern is save 45 ms, discard 45 ms, save 45 ms, etc., the new sound vector will be half as long as the original, thus it will play in half the time. A graphical description of the operation in terms of Python ndarrays is shown below.
The segment length may need to be adjusted for best sound quality. Note: I recommend using the option order='F', that is, column-major ordering as found in Fortran arrays, in the reshape.
Note: A parameter you must choose is the length of the sub-segments, call it $N_\text{sub}$
The playback speech quality is affected by $N_\text{sub}$
End of explanation
s = arange(0,16)
s
# Consider N_sub = 2
sr = reshape(s,(2,8),order='C')
sr
# Consider N_sub = 2
sr = reshape(s,(2,8),order='F')
sr
srd = sr[:,::2]
srd
srdo = reshape(srd,(1,len(s)//2),order='F')
srdo
Explanation: Hints
End of explanation
Nx = len(x)
Nsub = 400
Nxt = Nsub*int(Nx/Nsub)
Nxt
xr = reshape(x,(Nsub,int(Nxt/Nsub)),order='F')
xrd = xr[:,::2]
xrd0 = reshape(xrd,(1,xrd.shape[0]*xrd.shape[1]),order='F')
ss.to_wav('speed_up.wav',fs,xrd0.flatten())
Audio('speed_up.wav')
Explanation: Eventually playback (class solution)
End of explanation
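A quick sanity check (a sketch reusing x, fs and xrd0 from the cells above): the edited vector should play back in roughly half the original time.
len(x)/fs, xrd0.size/fs # original vs. edited playing time in seconds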
Image('images/Slow_Down_Speech.png',width='80%')
Explanation: Develop the Increase Playback Time Code Here
The objective is to increase the playback time by a factor of 2 without pitch shifting. To increase the playback time, yet retain the proper pitch, all we need do is periodically repeat short segments of the original speech vector, again using a butt splicing technique, then play it back at the original recording rate. If the pattern is save 45 ms, repeat the previous 45 ms, save the next 45 ms, etc., the new sound vector will be twice as long as the original, thus it will play in twice the time.
End of explanation
s = arange(0,16)
sru = reshape(s,(2,8),order='F')
sru
srus = vstack((sru,sru))
srus
sruso = reshape(srus,(1,len(s)*2),order='F')
sruso
Explanation: Hints
End of explanation
Nx = len(x)
Nsub = 200
Nxt = Nsub*int(Nx/Nsub)
Nxt
xr = reshape(x,(Nsub,int(Nxt/Nsub)),order='F')
xrd = vstack((xr,xr))
xrd0 = reshape(xrd,(1,xrd.shape[0]*xrd.shape[1]),order='F')
ss.to_wav('slow_dn.wav',fs,xrd0.flatten())
Audio('slow_dn.wav')
Explanation: Eventually playback (class solution)
End of explanation |
3,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: We define the model, adapted from the Keras CIFAR-10 example
Step2: We train the model using the
RMSprop
optimizer
Step3: Now let's train the model again, using the XLA compiler.
To enable the compiler in the middle of the application, we need to reset the Keras session. | Python Code:
import tensorflow as tf
# Check that GPU is available: cf. https://colab.research.google.com/notebooks/gpu.ipynb
assert(tf.test.gpu_device_name())
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False) # Start with XLA disabled.
def load_data():
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 256
x_test = x_test.astype('float32') / 256
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
return ((x_train, y_train), (x_test, y_test))
(x_train, y_train), (x_test, y_test) = load_data()
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/xla/tutorials/autoclustering_xla"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Classifying CIFAR-10 with XLA
This tutorial trains a TensorFlow model to classify the CIFAR-10 dataset, and we compile it using XLA.
Load and normalize the dataset using the Keras API:
End of explanation
def generate_model():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(32, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(64, (3, 3), padding='same'),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(64, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(10),
tf.keras.layers.Activation('softmax')
])
model = generate_model()
Explanation: We define the model, adapted from the Keras CIFAR-10 example:
End of explanation
def compile_model(model):
opt = tf.keras.optimizers.RMSprop(lr=0.0001, decay=1e-6)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
model = compile_model(model)
def train_model(model, x_train, y_train, x_test, y_test, epochs=25):
model.fit(x_train, y_train, batch_size=256, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
def warmup(model, x_train, y_train, x_test, y_test):
# Warm up the JIT, we do not wish to measure the compilation time.
initial_weights = model.get_weights()
train_model(model, x_train, y_train, x_test, y_test, epochs=1)
model.set_weights(initial_weights)
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
Explanation: We train the model using the
RMSprop
optimizer:
End of explanation
# We need to clear the session to enable JIT in the middle of the program.
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True) # Enable XLA.
model = compile_model(generate_model())
(x_train, y_train), (x_test, y_test) = load_data()
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
Explanation: Now let's train the model again, using the XLA compiler.
To enable the compiler in the middle of the application, we need to reset the Keras session.
End of explanation |
3,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (1000, 1050)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab_to_int = {word: i for i, word in enumerate(set(text))}
int_to_vocab = {i: word for word, i in vocab_to_int.items()} # explicit inverse mapping (does not rely on dict ordering)
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
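A quick round-trip check on a toy word list (a sketch, not required by the project) makes the roles of the two dictionaries concrete:
v2i, i2v = create_lookup_tables(['moe', 'gets', 'a', 'beer', 'for', 'homer'])
all(i2v[v2i[word]] == word for word in v2i) # every word maps to an id and back to itself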
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
punctuation = {
'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_point||',
'?': '||question_mark||',
'(': '||left_parenthesis||',
')': '||right_parenthesis||',
'--': '||emdash||',
"\n": '||line_break||'
}
return punctuation
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
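As a small illustration (a sketch of how such a lookup is typically applied), each symbol is replaced by its token surrounded by spaces so that a plain split() separates it from neighbouring words:
sample = 'Hello there, Moe!'
for key, token in token_lookup().items():
sample = sample.replace(key, ' {} '.format(token))
print(sample.split())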
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None, None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following the tuple (Input, Targets, LearingRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
layer_count = 3
# build one LSTM cell per layer; sharing a single cell object via [lstm] * layer_count breaks on newer TF 1.x releases
stacked_lstm = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(layer_count)])
initial_state = stacked_lstm.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state")
return stacked_lstm, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0)) # uniform init in [-1, 1]; truncated_normal's 2nd/3rd args are mean/stddev, not bounds
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
state = tf.identity(state, name="final_state")
return outputs, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embedding = get_embed(input_data, vocab_size, rnn_size)
outputs, state = build_rnn(cell, embedding)
logits = tf.layers.dense(inputs=outputs, units=vocab_size)
return logits, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
number_of_batches = len(int_text) // (batch_size * seq_length)
if len(int_text) % (batch_size * seq_length) < 1: #need at least one extra element for the final target
number_of_batches -= 1
batches = np.empty(shape=(number_of_batches, 2, batch_size, seq_length))
for batch_number in range(number_of_batches):
for sequence_number in range(batch_size):
#sequences within one batch are strided by number_of_batches * seq_length, matching the documented example
start_index = (sequence_number * number_of_batches + batch_number) * seq_length
end_index = start_index + seq_length
batches[batch_number, 0, sequence_number] = int_text[start_index:end_index]
batches[batch_number, 1, sequence_number] = int_text[(start_index + 1):(end_index + 1)]
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
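A quick check (sketch) against the example above; the values come back as floats because the batches array is allocated with np.empty:
get_batches(list(range(1, 16)), 2, 3)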
# Number of Epochs
num_epochs = 8192
# Sequence Length
seq_length = 32
# Batch Size
batch_size = int(len(int_text) / seq_length // 2) #maximize the batch size to not waste data
# RNN Size
rnn_size = 128
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 20
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0')
initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, initial_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
weighted_index = np.searchsorted(np.cumsum(probabilities), np.random.rand())
return int_to_vocab[int(weighted_index)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
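An equivalent alternative (a sketch, not the implementation used above) is to let NumPy draw the index directly, passing the probability vector through the p keyword of np.random.choice:
def pick_word_choice(probabilities, int_to_vocab):
# sample index i with probability probabilities[i]
idx = np.random.choice(len(probabilities), p=probabilities)
return int_to_vocab[int(idx)]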
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
3,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Protein binding & unfolding – a four-state model
In this notebook we will look into the kinetics of a model system describing competing protein folding, aggregation and ligand binding. Using ChemPy we can define thermodynamic and kinetic parameters, and obtain
a representation of a system of ODEs which may be integrated efficiently. Since we use SymPy we can also
generate publication quality latex-expressions of our mathematical model directly from our source code. No need to write the equations multiple times in Python/Latex (or even C++ if the integration is to be performed a large number of times such as during parameter estimation).
First we will perform our imports
Step1: Next we will define our substances. Note how we specify the composition, this will allow ChemPy to raise an error if any of our reactions we enter later would violate mass-conservation. It will also allow us to reduce the number of unknowns in our ODE-system by using the linear invariants from the mass-conservation.
Step2: We will model thermodynamic properties using enthalpy (H), entropy (S) and heat capacity (Cp). Kinetic parameters (rate constants) are assumed to follow the Eyring equation
Step3: Next we define our free parameters
Step4: We will have two reversible reactions, and one irreversible reaction
Step5: We formulate a system of 5 reactions honoring our reversible equilibria and our irreversible reaction
Step6: We can query the ReactionSystem instance for what substances contain what components
Step7: We can look at our ReactionSystem as a graph if we wish
Step8: ...or as a Table if that suits us better (note that "A" has green highlighting, denoting it's a terminal product)
Step9: Try hovering over the names to have them highlighted (this is particularly useful when working with large reaction sets).
We can also generate tables representing the unimolecular reactions involving each substance, or the matrix showing the bimolecular reactions
Step10: Exporting expressions as LaTeX is quite straightforward
Step11: We have the melting temperature $T_m$ as a free parameter, however, the model is expressed in terms of $\Delta_u S^\circ$ so we will need to derive the latter from the former
Step12: If we want to see the numerical values for the rate of the individual reactions it is quite easy
Step13: By using pyodesys we can generate a system of ordinary differential equations
Step14: Numerical integration of ODE systems requires a guess for the initial step-size. We can derive an upper bound for an "Euler-forward step" from initial concentrations and restrictions on mass-conservation
Step15: Now let's put our ODE-system to work
Step16: pyodesys even allows us to generate C++ code which is compiled to a fast native extension module
Step17: Note how much smaller "time_cpu" was here
Step18: We have one complication: due to linear dependencies in our formulation of the system of ODEs, our Jacobian is singular
Step19: Since implicit methods (which are required for stiff cases often encountered in kinetic modelling) use the Jacobian (or rather I - γJ) in the modified Newton's method, we may get failures during integration (depending on step size and scaling). What we can do is to identify linear dependencies based on the composition of the materials and exploit the invariants to reduce the dimensionality of the system of ODEs
Step20: That made sense
Step21: one can appreciate that one does not need to enter such expressions manually (at least for larger systems). That is both tedious and error prone.
Let's see how we can use pyodesys to leverage this information on redundancy
Step22: above we chose to get rid of 'L' and 'N', but we could also have removed 'A' instead of 'N'
Step23: We can also have the solver return to us when some precondition is fulfilled, e.g. when the concentrations of 'N' and 'A' are equal
Step24: From this point in time onwards we could for example choose to continue our integration using another formulation of the ODE-system
Step25: Let's compare the total number of steps needed for our different approaches
Step26: In this case it did not gain us much; one reason is that we are finding the root with higher accuracy than we actually need. But having the option is still useful.
Using pyodesys and SymPy we can perform a variable transformation and solve the transformed system if we so wish
Step27: We can even apply the transformation to our reduced systems (doing so by hand is excessively painful and error prone)
Step28: Finally, let's take a look at the C++ code which was generated for us | Python Code:
import logging; logger = logging.getLogger('matplotlib'); logger.setLevel(logging.INFO) # or notebook filled with logging
from collections import OrderedDict, defaultdict
import math
import re
import time
from IPython.display import Image, Latex, display
import matplotlib.pyplot as plt
import sympy
from pyodesys.symbolic import ScaledSys
from pyodesys.native.cvode import NativeCvodeSys
from chempy import Substance, Equilibrium, Reaction, ReactionSystem
from chempy.kinetics.ode import get_odesys
from chempy.kinetics.rates import MassAction
from chempy.printing.tables import UnimolecularTable, BimolecularTable
from chempy.thermodynamics.expressions import EqExpr
from chempy.util.graph import rsys2graph
from chempy.util.pyutil import defaultkeydict
%matplotlib inline
Explanation: Protein binding & unfolding – a four-state model
In this notebook we will look into the kinetics of a model system describing competing protein folding, aggregation and ligand binding. Using ChemPy we can define thermodynamic and kinetic parameters, and obtain
a representation of a system of ODEs which may be integrated efficiently. Since we use SymPy we can also
generate publication quality latex-expressions of our mathematical model directly from our source code. No need to write the equations multiple times in Python/Latex (or even C++ if the integration is to be performed a large number of times such as during parameter estimation).
First we will perform our imports:
End of explanation
substances = OrderedDict([
('N', Substance('N', composition={'protein': 1}, latex_name='[N]')),
('U', Substance('U', composition={'protein': 1}, latex_name='[U]')),
('A', Substance('A', composition={'protein': 1}, latex_name='[A]')),
('L', Substance('L', composition={'ligand': 1}, latex_name='[L]')),
('NL', Substance('NL', composition={'protein': 1, 'ligand': 1}, latex_name='[NL]')),
])
Explanation: Next we will define our substances. Note how we specify the composition, this will allow ChemPy to raise an error if any of our reactions we enter later would violate mass-conservation. It will also allow us to reduce the number of unknowns in our ODE-system by using the linear invariants from the mass-conservation.
End of explanation
def _gibbs(args, T, R, backend, **kwargs):
H, S, Cp, Tref = args
H2 = H + Cp*(T - Tref)
S2 = S + Cp*backend.log(T/Tref)
return backend.exp(-(H2 - T*S2)/(R*T))
def _eyring(args, T, R, k_B, h, backend, **kwargs):
H, S = args
return k_B/h*T*backend.exp(-(H - T*S)/(R*T))
Gibbs = EqExpr.from_callback(_gibbs, parameter_keys=('temperature', 'R'), argument_names=('H', 'S', 'Cp', 'Tref'))
Eyring = MassAction.from_callback(_eyring, parameter_keys=('temperature', 'R', 'k_B', 'h'), argument_names=('H', 'S'))
Explanation: We will model thermodynamic properties using enthalpy (H), entropy (S) and heat capacity (Cp). Kinetic parameters (rate constants) are assumed to follow the Eyring equation:
End of explanation
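A quick numeric sanity check (a sketch) of the Eyring expression, plugging in the folding-barrier values that are assigned to params further below (SI units assumed):
import math
k_B, h, R, T = 1.3806504e-23, 6.62606896e-34, 8.314472, 323.15
Ha_f, Sa_f = 90e3, 50.0
k_B/h*T*math.exp(-(Ha_f - T*Sa_f)/(R*T)) # first-order rate constant in 1/s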
thermo_dis = Gibbs(unique_keys=('He_dis', 'Se_dis', 'Cp_dis', 'Tref_dis'))
thermo_u = Gibbs(unique_keys=('He_u', 'Se_u', 'Cp_u', 'Tref_u')) # ([He_u_R, Se_u_R, Cp_u_R, Tref])
kinetics_agg = Eyring(unique_keys=('Ha_agg', 'Sa_agg')) # EyringMassAction([Ha_agg, Sa_agg])
kinetics_as = Eyring(unique_keys=('Ha_as', 'Sa_as'))
kinetics_f = Eyring(unique_keys=('Ha_f', 'Sa_f'))
Explanation: Next we define our free parameters:
End of explanation
eq_dis = Equilibrium({'NL'}, {'N', 'L'}, thermo_dis, name='ligand-protein dissociation')
eq_u = Equilibrium({'N'}, {'U'}, thermo_u, {'L'}, {'L'}, name='protein unfolding')
r_agg = Reaction({'U'}, {'A'}, kinetics_agg, {'L'}, {'L'}, name='protein aggregation')
Explanation: We will have two reversible reactions, and one irreversible reaction:
End of explanation
rsys = ReactionSystem(
eq_dis.as_reactions(kb=kinetics_as, new_name='ligand-protein association') +
eq_u.as_reactions(kb=kinetics_f, new_name='protein folding') +
(r_agg,), substances, name='4-state CETSA system')
Explanation: We formulate a system of 5 reactions honoring our reversible equilibria and our irreversible reaction:
End of explanation
vecs, comp = rsys.composition_balance_vectors()
names = rsys.substance_names()
dict(zip(comp, [dict(zip(names, v)) for v in vecs]))
Explanation: We can query the ReactionSystem instance for what substances contain what components:
End of explanation
rsys2graph(rsys, '4state.png', save='.', include_inactive=False)
Image('4state.png')
Explanation: We can look at our ReactionSystem as a graph if we wish:
End of explanation
rsys
Explanation: ...or as a Table if that suits us better (note that "A" has green highlighting, denoting it's a terminal product)
End of explanation
uni, not_uni = UnimolecularTable.from_ReactionSystem(rsys)
bi, not_bi = BimolecularTable.from_ReactionSystem(rsys)
assert not (not_bi & not_uni), "Only uni- & bi-molecular reactions expected"
uni
bi
Explanation: Try hovering over the names to have them highlighted (this is particularly useful when working with large reaction sets).
We can also generate tables representing the unimolecular reactions involving each substance, or the matrix showing the bimolecular reactions:
End of explanation
def pretty_replace(s, subs=None):
if subs is None:
subs = {
'Ha_(\w+)': r'\\Delta_{\1}H^{\\neq}',
'Sa_(\w+)': r'\\Delta_{\1}S^{\\neq}',
'He_(\w+)': r'\\Delta_{\1}H^\\circ',
'Se_(\w+)': r'\\Delta_{\1}S^\\circ',
'Cp_(\w+)': r'\\Delta_{\1}\,C_p',
'Tref_(\w+)': r'T^{\\circ}_{\1}',
}
for pattern, repl in subs.items():
s = re.sub(pattern, repl, s)
return s
def mk_Symbol(key):
if key in substances:
arg = substances[key].latex_name
else:
arg = pretty_replace(key.replace('temperature', 'T'))
return sympy.Symbol(arg)
autosymbols = defaultkeydict(mk_Symbol)
rnames = {}
for rxn in rsys.rxns:
rnames[rxn.name] = rxn.name.replace(' ', '~').replace('-','-')
rate_expr_str = sympy.latex(rxn.rate_expr()(autosymbols, backend=sympy, reaction=rxn))
lstr = r'$r(\mathrm{%s}) = %s$' % (rnames[rxn.name], rate_expr_str)
display(Latex(lstr))
ratexs = [autosymbols['r(\mathrm{%s})' % rnames[rxn.name]] for rxn in rsys.rxns]
rates = rsys.rates(autosymbols, backend=sympy, ratexs=ratexs)
for k, v in rates.items():
display(Latex(r'$\frac{[%s]}{dt} = %s$' % (k, sympy.latex(v))))
default_c0 = defaultdict(float, {'N': 1e-9, 'L': 1e-8})
params = dict(
R=8.314472, # or N_A & k_B
k_B=1.3806504e-23,
h=6.62606896e-34, # k_B/h == 2.083664399411865e10 K**-1 * s**-1
He_dis=-45e3,
Se_dis=-400,
Cp_dis=1.78e3,
Tref_dis=298.15,
He_u=60e3,
Cp_u=20.5e3,
Tref_u=298.15,
Ha_agg=106e3,
Sa_agg=70,
Ha_as=4e3,
Sa_as=-10,
Ha_f=90e3,
Sa_f=50,
temperature=50 + 273.15
)
Explanation: Exporting expressions as LaTeX is quite straightforward:
End of explanation
def Se0_from_Tm(Tm, token):
dH0, T0, dCp = params['He_'+token], params['Tref_'+token], params['Cp_'+token]
return dH0/Tm + (Tm-T0)*dCp/Tm - dCp*math.log(Tm/T0)
params['Se_u'] = Se0_from_Tm(48.2+273.15, 'u')
params['Se_u']
Explanation: We have the melting temperature $T_m$ as a free parameter, however, the model is expressed in terms of $\Delta_u S^\circ$ so we will need to derive the latter from the former:
$$
\begin{cases}
\Delta G = 0 \
\Delta G = \Delta H - T_m\Delta_u S
\end{cases}
$$
$$
\begin{cases}
\Delta H = \Delta H^\circ + \Delta C_p \left( T_m - T^\circ \right) \
\Delta S = \Delta S^\circ + \Delta C_p \ln\left( \frac{T_m}{T^\circ} \right)
\end{cases}
$$
this gives us the following equation:
$$
\Delta H^\circ + \Delta C_p \left( T_m - T^\circ \right) = T_m \left( \Delta S^\circ + \Delta C_p \ln\left( \frac{T_m}{T^\circ} \right) \right)
$$
Solving for $\Delta S^\circ$:
$$
\Delta S^\circ = T_m^{-1}\left( \Delta H^\circ + \Delta C_p \left( T_m - T^\circ \right) \right) - \Delta C_p \ln\left( \frac{T_m}{T^\circ} \right)
$$
End of explanation
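As a consistency check (a sketch) we can verify that, with the derived Se_u, the unfolding free energy indeed vanishes at the chosen melting temperature:
Tm = 48.2 + 273.15
dH = params['He_u'] + params['Cp_u']*(Tm - params['Tref_u'])
dS = params['Se_u'] + params['Cp_u']*math.log(Tm/params['Tref_u'])
dH - Tm*dS # expected to be ~0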
params_c0 = default_c0.copy()
params_c0.update(params)
for rxn in rsys.rxns:
print('%s: %.5g' % (rxn.name, rxn.rate_expr()(params_c0, reaction=rxn)))
Explanation: If we want to see the numerical values for the rate of the individual reactions it is quite easy:
End of explanation
odesys, extra = get_odesys(rsys, include_params=False, SymbolicSys=ScaledSys, dep_scaling=1e9)
len(odesys.exprs) # how many (symbolic) expressions are there in this representation?
Explanation: By using pyodesys we can generate a system of ordinary differential equations:
End of explanation
h0max = extra['max_euler_step_cb'](0, default_c0, params)
h0max
Explanation: Numerical integration of ODE systems requires a guess for the initial step-size. We can derive an upper bound for an "Euler-forward step" from initial concentrations and restrictions on mass-conservation:
End of explanation
def integrate_and_plot(system, c0=None, first_step=None, t0=0, stiffness=False, nsteps=9000, **kwargs):
if c0 is None:
c0 = default_c0
if first_step is None:
first_step = h0max*1e-11
tend = 3600*24
t_py = time.time()
kwargs['atol'] = kwargs.get('atol', 1e-11)
kwargs['rtol'] = kwargs.get('rtol', 1e-11)
res = system.integrate([t0, tend], c0, params, integrator='cvode', nsteps=nsteps,
first_step=first_step, **kwargs)
t_py = time.time() - t_py
if stiffness:
plt.subplot(1, 2, 1)
_ = system.plot_result(xscale='log', yscale='log')
_ = plt.legend(loc='best')
plt.gca().set_ylim([1e-16, 1e-7])
plt.gca().set_xlim([1e-11, tend])
if stiffness:
if stiffness is True:
stiffness = 0
ratios = odesys.stiffness()
plt.subplot(1, 2, 2)
plt.yscale('linear')
plt.plot(odesys._internal[0][stiffness:], ratios[stiffness:])
for k in ('time_wall', 'time_cpu'):
print('%s = %.3g' % (k, res[2][k]), end=', ')
print('time_python = %.3g' % t_py)
return res
_, _, info = integrate_and_plot(odesys)
assert info['internal_yout'].shape[1] == 5
{k: v for k, v in info.items() if not k.startswith('internal')}
Explanation: Now let's put our ODE-system to work:
End of explanation
native = NativeCvodeSys.from_other(odesys, first_step_expr=0*odesys.indep)
_, _, info_native = integrate_and_plot(native)
{k: v for k, v in info_native.items() if not k.startswith('internal')}
Explanation: pyodesys even allows us to generate C++ code which is compiled to a fast native extension module:
End of explanation
info['time_wall']/info_native['time_wall']
from chempy.kinetics._native import get_native
native2 = get_native(rsys, odesys, 'cvode')
_, _, info_native2 = integrate_and_plot(native2, first_step=0.0)
{k: v for k, v in info_native2.items() if not k.startswith('internal')}
Explanation: Note how much smaller "time_cpu" was here
End of explanation
cses, (jac_in_cse,) = odesys.be.cse(odesys.get_jac())
jac_in_cse
odesys.jacobian_singular()
Explanation: We have one complication: due to linear dependencies in our formulation of the system of ODEs, our Jacobian is singular:
End of explanation
A, comp_names = rsys.composition_balance_vectors()
A, comp_names, list(rsys.substances.keys())
Explanation: Since implicit methods (which are required for the stiff cases often encountered in kinetic modelling) use the Jacobian (or rather I - γJ) in the modified Newton iteration, we may get failures during integration (depending on step size and scaling). What we can do is identify linear dependencies based on the composition of the materials and exploit those invariants to reduce the dimensionality of the system of ODEs:
End of explanation
y0 = {odesys[k]: sympy.Symbol(k+'0') for k in rsys.substances.keys()}
analytic_L_N = extra['linear_dependencies'](['L', 'N'])
analytic_L_N(None, y0, None, sympy)
assert len(analytic_L_N(None, y0, None, sympy)) > 0 # ensure the callback is idempotent
analytic_L_N(None, y0, None, sympy), list(enumerate(odesys.names))
Explanation: That made sense: two different components can give us (up to) two linear invariants.
Let's look at what those invariants look like symbolically:
End of explanation
from pyodesys.symbolic import PartiallySolvedSystem
no_invar = dict(linear_invariants=None, linear_invariant_names=None)
psysLN = PartiallySolvedSystem(odesys, analytic_L_N, **no_invar)
print(psysLN.be.cse(psysLN.get_jac())[1][0])
psysLN['L'], psysLN.jacobian_singular(), len(psysLN.exprs)
Explanation: One can appreciate that one does not need to enter such expressions manually (at least for larger systems); doing so would be both tedious and error prone.
Let's see how we can use pyodesys to leverage this information on redundancy:
End of explanation
psysLA = PartiallySolvedSystem(odesys, extra['linear_dependencies'](['L', 'A']), **no_invar)
print(psysLA.be.cse(psysLA.get_jac())[1][0])
psysLA['L'], psysLA.jacobian_singular()
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
_, _, info_LN = integrate_and_plot(psysLN, first_step=0.0)
assert info_LN['internal_yout'].shape[1] == 3
plt.subplot(1, 2, 2)
_, _, info_LA = integrate_and_plot(psysLA, first_step=0.0)
assert info_LA['internal_yout'].shape[1] == 3
({k: v for k, v in info_LN.items() if not k.startswith('internal')},
{k: v for k, v in info_LA.items() if not k.startswith('internal')})
Explanation: Above we chose to get rid of 'L' and 'N', but we could also have removed 'A' instead of 'N':
End of explanation
from pyodesys.symbolic import SymbolicSys
psys_root = SymbolicSys.from_other(psysLN, roots=[psysLN['N'] - psysLN['A']])
psys_root.roots
psysLN['N']
psysLN.analytic_exprs
psysLN.names
psysLN.dep
tout1, Cout1, info_root = integrate_and_plot(psys_root, first_step=0.0, return_on_root=True)
print('Time at which concentrations of N & A are equal: %.4g' % (tout1[-1]))
Explanation: We can also have the solver return to us when some precondition is fulfilled, e.g. when the concentrations of 'N' and 'A' are equal:
End of explanation
xout2, yout2, info_LA = integrate_and_plot(psysLA, first_step=0.0, t0=tout1[-1], c0=dict(zip(odesys.names, Cout1[-1, :])))
Explanation: From this point in time onwards we could for example choose to continue our integration using another formulation of the ODE-system:
End of explanation
print('\troot\tLA\troot+LA\tLN')
for k in 'n_steps nfev njev'.split():
print('\t'.join(map(str, (k, info_root[k], info_LA[k], info_root[k] + info_LA[k], info_LN[k]))))
Explanation: Let's compare the total number of steps needed for our different approaches:
End of explanation
from pyodesys.symbolic import symmetricsys
logexp = lambda x: sympy.log(x + 1e-20), lambda x: sympy.exp(x) - 1e-20
def psimp(exprs):
return [sympy.powsimp(expr.expand(), force=True) for expr in exprs]
LogLogSys = symmetricsys(logexp, logexp, exprs_process_cb=psimp)
unscaled_odesys, unscaled_extra = get_odesys(rsys, include_params=False)
tsys = LogLogSys.from_other(unscaled_odesys)
unscaledLN = PartiallySolvedSystem(unscaled_odesys, unscaled_extra['linear_dependencies'](['L', 'N']), **no_invar)
unscaledLA = PartiallySolvedSystem(unscaled_odesys, unscaled_extra['linear_dependencies'](['L', 'A']), **no_invar)
assert sorted(unscaledLN.free_names) == sorted(['U', 'A', 'NL'])
assert sorted(unscaledLA.free_names) == sorted(['U', 'N', 'NL'])
tsysLN = LogLogSys.from_other(unscaledLN)
tsysLA = LogLogSys.from_other(unscaledLA)
_, _, info_t = integrate_and_plot(tsys, first_step=0.0)
{k: info_t[k] for k in ('nfev', 'njev', 'n_steps')}
Explanation: In this case it did not gain us much; one reason is that we do not actually need to locate the root with as high an accuracy as we do here. But having the option is still useful.
Using pyodesys and SymPy we can perform a variable transformation and solve the transformed system if we so wish:
End of explanation
native_tLN = NativeCvodeSys.from_other(tsysLN)
_, _, info_tLN = integrate_and_plot(native_tLN, first_step=1e-9, nsteps=18000, atol=1e-9, rtol=1e-9)
{k: info_tLN[k] for k in ('nfev', 'njev', 'n_steps')}
_, _, info_tLN = integrate_and_plot(tsysLN, first_step=1e-9, nsteps=18000, atol=1e-8, rtol=1e-8)
{k: info_tLN[k] for k in ('nfev', 'njev', 'n_steps')}
_, _, info_tLA = integrate_and_plot(tsysLA, first_step=0.0)
{k: info_tLA[k] for k in ('nfev', 'njev', 'n_steps')}
Explanation: We can even apply the transformation to our reduced systems (doing so by hand would be excessively painful and error prone):
End of explanation
print(open(next(filter(lambda s: s.endswith('.cpp'), native2._native._written_files))).read())
Explanation: Finally, let's take a look at the C++ code which was generated for us:
End of explanation |
3,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Model Averaging
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Build Model
Step3: Prepare Dataset
Step4: We will be comparing three optimizers here
Step5: Both MovingAverage and StochasticAverage optimizers use ModelAverageCheckpoint.
Step6: Train Model
Vanilla SGD Optimizer
Step7: Moving Average SGD
Step8: Stocastic Weight Average SGD | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
Explanation: Model Averaging
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/addons/tutorials/average_optimizers_callback"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/average_optimizers_callback.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This notebook demonstrates how to use the Moving Average optimizer together with the Model Average Checkpoint from the TensorFlow Addons package.
Moving Averaging
The advantage of Moving Averaging is that it is less prone to sudden loss shifts or to an unrepresentative latest batch. It gives a smoothed and more general picture of the model's training up to a given point.
Stochastic Averaging
Stochastic Weight Averaging converges to wider optima; in doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around another optimizer, averaging the weights from different points along the trajectory of the inner optimizer.
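As a rough sketch of the two update rules (illustrative NumPy only, not the exact tfa internals; the decay value and weight arrays are made up for the example):
import numpy as np
w = np.array([0.5, -1.2])        # current (raw) weights after a training step
shadow_w = np.zeros_like(w)      # exponential moving average kept alongside the raw weights
decay = 0.99
shadow_w = decay * shadow_w + (1.0 - decay) * w
swa_w, n_averaged = np.zeros_like(w), 0   # stochastic weight averaging: a plain running mean
swa_w = (swa_w * n_averaged + w) / (n_averaged + 1)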
Model Average Checkpoint
callbacks.ModelCheckpoint doesn't give you the option to save moving average weights in the middle of training, which is why the model averaging optimizers require a custom callback. Using the update_weights parameter, ModelAverageCheckpoint allows you to:
1. Assign the moving average weights to the model, and save them.
2. Keep the old non-averaged weights, but the saved model uses the average weights.
Setup
End of explanation
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
Explanation: Build Model
End of explanation
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
Explanation: Prepare Dataset
End of explanation
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
Explanation: We will be comparing three optimizers here:
Unwrapped SGD
SGD with Moving Average
SGD with Stochastic Weight Averaging
And see how they perform with the same model.
End of explanation
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
Explanation: Both MovingAverage and StochasticAverage optimizers use ModelAverageCheckpoint.
End of explanation
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
Explanation: Train Model
Vanilla SGD Optimizer
End of explanation
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
Explanation: Moving Average SGD
End of explanation
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
Explanation: Stochastic Weight Average SGD
End of explanation |
3,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a custom prediction routine with scikit-learn
<table align="left">
<td>
<a href="https
Step1: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step2: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
To deploy a custom prediction routine, you must upload your trained model
artifacts and your custom code to Cloud Storage.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
Step3: Only if your bucket doesn't already exist
Step4: Finally, validate access to your Cloud Storage bucket by examining its contents
Step5: Building and training a scikit-learn model
Often, you can't use your data in its raw form to train a machine learning model. Even when you can, preprocessing the data before using it for training can sometimes improve your model.
Assuming that you expect the input for prediction to have the same format as your training data, you must apply identical preprocessing during training and prediction to ensure that your model makes consistent predictions.
In this section, create a preprocessing module and use it as part of training. Then export a preprocessor with characteristics learned during training to use later in your custom prediction routine.
Install dependencies for local training
Training locally (in the notebook) requires several dependencies
Step6: Write your preprocessor
Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
Create preprocess.py, which contains a class to do this scaling
Step7: Notice that an instance of MySimpleScaler saves the means and standard deviations of each feature column on first use. Then it uses these summary statistics to scale data it encounters afterward.
This lets you store characteristics of the training distribution and use them for identical preprocessing at prediction time.
Train your model
Next, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn.
At the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file
Step8: Deploying a custom prediction routine
To deploy a custom prediction routine to serve predictions from your trained model, do the following
Step9: Notice that, in addition to using the preprocessor that you defined during training, this predictor performs a postprocessing step that converts the prediction output from class indexes (0, 1, or 2) into label strings (the name of the flower type).
However, if the predictor receives a probabilities keyword argument with the value True, it returns a probability array instead, denoting the probability that each of the three classes is the correct label (according to the model). The last part of this tutorial shows how to provide a keyword argument during prediction.
Package your custom code
You must package predictor.py and preprocess.py as a .tar.gz source distribution package and provide the package to AI Platform so it can use your custom code to serve predictions.
Write the following setup.py to define your package
Step10: Then run the following command to create dist/my_custom_code-0.1.tar.gz
Step11: Upload model artifacts and custom code to Cloud Storage
Before you can deploy your model for serving, AI Platform needs access to the following files in Cloud Storage
Step12: Deploy your custom prediction routine
Create a model resource and a version resource to deploy your custom prediction routine. First define variables with your resource names
Step13: Then create your model
Step14: Next, create a version. In this step, provide paths to the artifacts and custom code you uploaded to Cloud Storage
Step15: Learn more about the options you must specify when you deploy a custom prediction routine.
Serving online predictions
Try out your deployment by sending an online prediction request. First, install the Google APIs Client Library for Python
Step16: Then send two instances of iris data to your deployed version
Step17: Note
Step18: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following commands | Python Code:
PROJECT_ID = "<your-project-id>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
Explanation: Creating a custom prediction routine with scikit-learn
<table align="left">
<td>
<a href="https://cloud.google.com/ml-engine/docs/scikit/custom-prediction-routine-scikit-learn">
<img src="https://cloud.google.com/_static/images/cloud/icons/favicons/onecloud/super_cloud.png"
alt="Google Cloud logo" width="32px"> Read on cloud.google.com
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/cloudml-samples/blob/main/notebooks/scikit-learn/custom-prediction-routine-scikit-learn.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/main/notebooks/scikit-learn/custom-prediction-routine-scikit-learn.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Beta
This is a beta release of custom prediction routines. This feature might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.
Overview
This tutorial shows how to deploy a trained scikit-learn model to AI Platform and serve predictions using a custom prediction routine. This lets you customize how AI Platform responds to each prediction request.
In this example, you will use a custom prediction routine to preprocess
prediction input by scaling it, and to postprocess prediction output by converting class numbers to label strings.
The tutorial walks through several steps:
Training a simple scikit-learn model locally (in this notebook)
Creating and deploy a custom prediction routine to AI Platform
Serving prediction requests from that deployment
Dataset
This tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that
marks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica.
This tutorial uses the copy of the Iris dataset included in the
scikit-learn library.
Objective
The goal is to train a model that uses a flower's measurements as input to predict what type of iris it is.
This tutorial focuses more on using this model with AI Platform than on
the design of the model itself.
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
AI Platform
Cloud Storage
Learn about AI Platform
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Before you begin
You must do several things before you can train and deploy a model in AI Platform:
Set up your local development environment.
Set up a GCP project with billing and the necessary
APIs enabled.
Authenticate your GCP account in this notebook.
Create a Cloud Storage bucket to store your training package and your
trained model.
Set up your local development environment
If you are using Colab or AI Platform Notebooks, your environment already
meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install
Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
Enable the AI Platform ("Cloud Machine Learning Engine") and Compute Engine
APIs.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS '<path-to-your-service-account-key.json>'
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
Machine Learning Engine > AI Platform Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "<your-bucket-name>" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
To deploy a custom prediction routine, you must upload your trained model
artifacts and your custom code to Cloud Storage.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
! pip install "numpy>=1.16.0" scikit-learn==0.20.2
Explanation: Building and training a scikit-learn model
Often, you can't use your data in its raw form to train a machine learning model. Even when you can, preprocessing the data before using it for training can sometimes improve your model.
Assuming that you expect the input for prediction to have the same format as your training data, you must apply identical preprocessing during training and prediction to ensure that your model makes consistent predictions.
In this section, create a preprocessing module and use it as part of training. Then export a preprocessor with characteristics learned during training to use later in your custom prediction routine.
Install dependencies for local training
Training locally (in the notebook) requires several dependencies:
End of explanation
%%writefile preprocess.py
import numpy as np
class MySimpleScaler(object):
def __init__(self):
self._means = None
self._stds = None
def preprocess(self, data):
if self._means is None: # during training only
self._means = np.mean(data, axis=0)
if self._stds is None: # during training only
self._stds = np.std(data, axis=0)
if not self._stds.all():
raise ValueError('At least one column has standard deviation of 0.')
return (data - self._means) / self._stds
Explanation: Write your preprocessor
Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
Create preprocess.py, which contains a class to do this scaling:
End of explanation
import pickle
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from preprocess import MySimpleScaler
iris = load_iris()
scaler = MySimpleScaler()
X = scaler.preprocess(iris.data)
y = iris.target
model = RandomForestClassifier()
model.fit(X, y)
joblib.dump(model, 'model.joblib')
with open ('preprocessor.pkl', 'wb') as f:
pickle.dump(scaler, f)
Explanation: Notice that an instance of MySimpleScaler saves the means and standard deviations of each feature column on first use. Then it uses these summary statistics to scale data it encounters afterward.
This lets you store characteristics of the training distribution and use them for identical preprocessing at prediction time.
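For example (an illustrative sketch, not part of the tutorial's files):
import numpy as np
from preprocess import MySimpleScaler
scaler = MySimpleScaler()
scaled_train = scaler.preprocess(np.array([[1., 10.], [3., 30.]]))  # first call: means/stds are learned here
scaled_new = scaler.preprocess(np.array([[2., 20.]]))               # later calls reuse the stored means/stds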
Train your model
Next, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn.
At the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file:
End of explanation
%%writefile predictor.py
import os
import pickle
import numpy as np
from sklearn.datasets import load_iris
from sklearn.externals import joblib
class MyPredictor(object):
def __init__(self, model, preprocessor):
self._model = model
self._preprocessor = preprocessor
self._class_names = load_iris().target_names
def predict(self, instances, **kwargs):
inputs = np.asarray(instances)
preprocessed_inputs = self._preprocessor.preprocess(inputs)
if kwargs.get('probabilities'):
probabilities = self._model.predict_proba(preprocessed_inputs)
return probabilities.tolist()
else:
outputs = self._model.predict(preprocessed_inputs)
return [self._class_names[class_num] for class_num in outputs]
@classmethod
def from_path(cls, model_dir):
model_path = os.path.join(model_dir, 'model.joblib')
model = joblib.load(model_path)
preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
with open(preprocessor_path, 'rb') as f:
preprocessor = pickle.load(f)
return cls(model, preprocessor)
Explanation: Deploying a custom prediction routine
To deploy a custom prediction routine to serve predictions from your trained model, do the following:
Create a custom predictor to handle requests
Package your predictor and your preprocessing module
Upload your model artifacts and your custom code to Cloud Storage
Deploy your custom prediction routine to AI Platform
Create a custom predictor
To deploy a custom prediction routine, you must create a class that implements
the Predictor interface. This tells AI Platform how to load your model and how to handle prediction requests.
Write the following code to predictor.py:
End of explanation
%%writefile setup.py
from setuptools import setup
setup(
name='my_custom_code',
version='0.1',
scripts=['predictor.py', 'preprocess.py'])
Explanation: Notice that, in addition to using the preprocessor that you defined during training, this predictor performs a postprocessing step that converts the prediction output from class indexes (0, 1, or 2) into label strings (the name of the flower type).
However, if the predictor receives a probabilities keyword argument with the value True, it returns a probability array instead, denoting the probability that each of the three classes is the correct label (according to the model). The last part of this tutorial shows how to provide a keyword argument during prediction.
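As a quick local smoke test (illustrative only, not part of the deployment flow; '.' assumes model.joblib and preprocessor.pkl were written to the current directory by the training cell):
from predictor import MyPredictor
predictor = MyPredictor.from_path('.')
print(predictor.predict([[6.7, 3.1, 4.7, 1.5]]))                       # label strings
print(predictor.predict([[6.7, 3.1, 4.7, 1.5]], probabilities=True))  # probability arrays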
Package your custom code
You must package predictor.py and preprocess.py as a .tar.gz source distribution package and provide the package to AI Platform so it can use your custom code to serve predictions.
Write the following setup.py to define your package:
End of explanation
! python setup.py sdist --formats=gztar
Explanation: Then run the following command to create dist/my_custom_code-0.1.tar.gz:
End of explanation
! gsutil cp ./dist/my_custom_code-0.1.tar.gz gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz
! gsutil cp model.joblib preprocessor.pkl gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/
Explanation: Upload model artifacts and custom code to Cloud Storage
Before you can deploy your model for serving, AI Platform needs access to the following files in Cloud Storage:
model.joblib (model artifact)
preprocessor.pkl (model artifact)
my_custom_code-0.1.tar.gz (custom code)
Model artifacts must be stored together in a model directory, which your
Predictor can access as the model_dir argument in its from_path class
method. The custom
code does not need to be in the same directory. Run the following commands to
upload your files:
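After the copies complete, the bucket layout should look like this (illustrative, with your own bucket name substituted):
gs://<your-bucket-name>/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz
gs://<your-bucket-name>/custom_prediction_routine_tutorial/model/model.joblib
gs://<your-bucket-name>/custom_prediction_routine_tutorial/model/preprocessor.pkl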
End of explanation
MODEL_NAME = 'IrisPredictor'
VERSION_NAME = 'v1'
Explanation: Deploy your custom prediction routine
Create a model resource and a version resource to deploy your custom prediction routine. First define variables with your resource names:
End of explanation
! gcloud ai-platform models create $MODEL_NAME \
--regions $REGION
Explanation: Then create your model:
End of explanation
# --quiet automatically installs the beta component if it isn't already installed
! gcloud --quiet beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.13 \
--python-version 3.5 \
--origin gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/ \
--package-uris gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
--prediction-class predictor.MyPredictor
Explanation: Next, create a version. In this step, provide paths to the artifacts and custom code you uploaded to Cloud Storage:
End of explanation
! pip install --upgrade google-api-python-client
Explanation: Learn more about the options you must specify when you deploy a custom prediction routine.
Serving online predictions
Try out your deployment by sending an online prediction request. First, install the Google APIs Client Library for Python:
End of explanation
import googleapiclient.discovery
instances = [
[6.7, 3.1, 4.7, 1.5],
[4.6, 3.1, 1.5, 0.2],
]
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)
response = service.projects().predict(
name=name,
body={'instances': instances}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
Explanation: Then send two instances of iris data to your deployed version:
End of explanation
response = service.projects().predict(
name=name,
body={'instances': instances, 'probabilities': True}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
Explanation: Note: This code uses the credentials you set up during the authentication step to make the online prediction request.
Sending keyword arguments
When you send a prediction request to a custom prediction routine, you can provide additional fields on your request body. The Predictor's predict method receives these as fields of the **kwargs dictionary.
The following code sends the same request as before, but this time it adds a probabilities field to the request body:
End of explanation
# Delete version resource
! gcloud ai-platform versions delete $VERSION_NAME --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME/custom_prediction_routine_tutorial
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following commands:
End of explanation |
3,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Getting started with TensorFlow </h1>
In this notebook, you play around with the TensorFlow Python API.
Step1: <h2> Adding two tensors </h2>
First, let's try doing this using numpy, the Python numeric package. numpy code is immediately evaluated.
Step2: The equivalent code in TensorFlow consists of two steps
Step3: c is an Op ("Add") that returns a tensor of shape (3,) and holds int32. The shape is inferred from the computation graph.
Try the following in the cell above
Step4: <h2> Using a feed_dict </h2>
Same graph, but without hardcoding inputs at build stage
Step5: <h2> Heron's Formula in TensorFlow </h2>
The area of a triangle whose three side lengths are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$ where $s=\frac{a+b+c}{2}$
Look up the available operations at
Step6: Extend your code to be able to compute the area for several triangles at once.
You should get
Step7: <h2> Placeholder and feed_dict </h2>
More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through which inputs will be passed in at run-time.
Step8: tf.eager
tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm because the lazy evaluation paradigm is what allows for multi-device support and distribution.
<p>
One thing you could do is to develop using tf.eager and then comment out the eager execution and add in the session management code.
<b> You will need to restart your session to try this out.</b> | Python Code:
import tensorflow as tf
import numpy as np
print(tf.__version__)
Explanation: <h1> Getting started with TensorFlow </h1>
In this notebook, you play around with the TensorFlow Python API.
End of explanation
a = np.array([5, 3, 8])
b = np.array([3, -1, 2])
c = np.add(a, b)
print(c)
Explanation: <h2> Adding two tensors </h2>
First, let's try doing this using numpy, the Python numeric package. numpy code is immediately evaluated.
End of explanation
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
print(c)
Explanation: The equivalent code in TensorFlow consists of two steps:
<p>
<h3> Step 1: Build the graph </h3>
End of explanation
with tf.Session() as sess:
result = sess.run(c)
print(result)
Explanation: c is an Op ("Add") that returns a tensor of shape (3,) and holds int32. The shape is inferred from the computation graph.
Try the following in the cell above:
<ol>
<li> Change the 5 to 5.0, and similarly the other five numbers. What happens when you run this cell? </li>
<li> Add an extra number to a, but leave b at the original (3,) shape. What happens when you run this cell? </li>
<li> Change the code back to a version that works </li>
</ol>
<p/>
<h3> Step 2: Run the graph </h3>
End of explanation
a = tf.placeholder(dtype=tf.int32, shape=(None,)) # batchsize x scalar
b = tf.placeholder(dtype=tf.int32, shape=(None,))
c = tf.add(a, b)
with tf.Session() as sess:
result = sess.run(c, feed_dict={
a: [3, 4, 5],
b: [-1, 2, 3]
})
print(result)
Explanation: <h2> Using a feed_dict </h2>
Same graph, but without hardcoding inputs at build stage
End of explanation
def compute_area(sides):
#TODO: Write TensorFlow code to compute area of a triangle
# given its side lengths
return area
with tf.Session() as sess:
area = compute_area(tf.constant([5.0, 3.0, 7.1]))
result = sess.run(area)
print(result)
Explanation: <h2> Heron's Formula in TensorFlow </h2>
The area of a triangle whose three side lengths are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$ where $s=\frac{a+b+c}{2}$
Look up the available operations at: https://www.tensorflow.org/api_docs/python/tf.
You'll need the tf.sqrt() operation. Remember tf.add(), tf.subtract() and tf.multiply() are overloaded with the +,- and * operators respectively.
You should get: 6.278497
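One possible sketch of the body (shown only as a hint, not necessarily the intended solution; it assumes sides is a rank-1 tensor [a, b, c]):
a, b, c = sides[0], sides[1], sides[2]
s = (a + b + c) * 0.5
area = tf.sqrt(s * (s - a) * (s - b) * (s - c))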
End of explanation
def compute_area(sides):
#TODO: Write TensorFlow code to compute area of a
# SET of triangles given by their side lengths
return list_of_areas
with tf.Session() as sess:
# pass in two triangles
area = compute_area(tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]))
result = sess.run(area)
print(result)
Explanation: Extend your code to be able to compute the area for several triangles at once.
You should get: [6.278497 4.709139]
End of explanation
with tf.Session() as sess:
#TODO: Rather than feeding the side values as a constant,
# use a placeholder and fill it using feed_dict instead.
result = sess.run(...)
print(result)
Explanation: <h2> Placeholder and feed_dict </h2>
More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through which inputs will be passed in at run-time.
End of explanation
import tensorflow as tf
tf.enable_eager_execution()
#TODO: Using your non-placeholder solution,
# try it now using tf.eager by removing the session
Explanation: tf.eager
tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm because the lazy evaluation paradigm is what allows for multi-device support and distribution.
<p>
One thing you could do is to develop using tf.eager and then comment out the eager execution and add in the session management code.
<b> You will need to restart your session to try this out.</b>
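A minimal sketch of what the eager version could look like (illustrative; it assumes a fresh kernel, since eager execution must be enabled before any graph ops are built, and reuses your compute_area from the earlier exercise):
import tensorflow as tf
tf.enable_eager_execution()
sides = tf.constant([[5.0, 3.0, 7.1], [2.3, 4.1, 4.8]])
print(compute_area(sides))  # evaluated immediately, no Session or feed_dict needed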
End of explanation |
3,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 2
Imports
Step1: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math
Explanation: Matplotlib Exercise 2
Imports
End of explanation
!head -n 30 open_exoplanet_catalogue.txt
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
data = np.genfromtxt("open_exoplanet_catalogue.txt", delimiter = ',')
data[0:20,2]
#raise NotImplementedError()
assert data.shape==(1993,24)
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
clean = np.array([x for x in data[:,2] if not math.isnan(x)])
plt.hist(clean, range = (0,14), bins = 50)
plt.xlabel("Planetary masses (Jupiter masses)")
plt.ylabel("Frequency (Number of Planets)")
plt.title("Histogram of Exoplanet Masses")
#raise NotImplementedError()
assert True # leave for grading
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
End of explanation
plt.scatter([math.log(x) for x in data[:,5]], data[:,6])
plt.xlabel("Semimajor Axis")
plt.ylabel("Orbital Eccentricity")
plt.title("Eccentricity vs. Semimajor Axis")
#raise NotImplementedError()
assert True # leave for grading
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
3,293 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a dataframe with one of its column having a list at each index. I want to concatenate these lists into one string like '1,2,3,4,5'. I am using | Problem:
import pandas as pd
df = pd.DataFrame(dict(col1=[[1, 2, 3]] * 2))
def g(df):
L = df.col1.sum()
L = map(lambda x:str(x), L)
return ','.join(L)
result = g(df.copy()) |
3,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Latitude-dependent grey radiation
Here is a quick example of using the climlab.GreyRadiationModel with a latitude dimension and seasonally varying insolation.
Step1: Testing out multi-dimensional Band Models
Step2: This is now working. Will need to do some model tuning.
And start to add dynamics!
Adding meridional diffusion!
Step3: This works as long as K is a constant.
The diffusion operation is broadcast over all vertical levels without any special code.
Step4: Band model with diffusion | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
model = climlab.GreyRadiationModel(name='Grey Radiation', num_lev=30, num_lat=90)
print(model)
model.to_xarray()
insolation = climlab.radiation.DailyInsolation(domains=model.Ts.domain)
model.add_subprocess('insolation', insolation)
model.subprocess.SW.flux_from_space = insolation.insolation
print(model)
model.compute_diagnostics()
plt.plot(model.lat, model.SW_down_TOA)
model.Tatm.shape
model.integrate_years(1)
plt.plot(model.lat, model.Ts)
model.integrate_years(1)
plt.plot(model.lat, model.timeave['Ts'])
def plot_temp_section(model, timeave=True):
fig = plt.figure()
ax = fig.add_subplot(111)
if timeave:
field = model.timeave['Tatm'].transpose()
else:
field = model.Tatm.transpose()
cax = ax.contourf(model.lat, model.lev, field)
ax.invert_yaxis()
ax.set_xlim(-90,90)
ax.set_xticks([-90, -60, -30, 0, 30, 60, 90])
fig.colorbar(cax)
plot_temp_section(model)
model2 = climlab.RadiativeConvectiveModel(name='RCM', num_lev=30, num_lat=90)
insolation = climlab.radiation.DailyInsolation(domains=model2.Ts.domain)
model2.add_subprocess('insolation', insolation)
model2.subprocess.SW.flux_from_space = insolation.insolation
model2.integrate_years(1)
model2.integrate_years(1)
plot_temp_section(model2)
Explanation: Latitude-dependent grey radiation
Here is a quick example of using the climlab.GreyRadiationModel with a latitude dimension and seasonally varying insolation.
End of explanation
# Put in some ozone
import xarray as xr
ozonepath = "http://thredds.atmos.albany.edu:8080/thredds/dodsC/CLIMLAB/ozone/apeozone_cam3_5_54.nc"
ozone = xr.open_dataset(ozonepath)
ozone
# Dimensions of the ozone file
lat = ozone.lat
lon = ozone.lon
lev = ozone.lev
# Taking annual, zonal average of the ozone data
O3_zon = ozone.OZONE.mean(dim=("time","lon"))
# make a model on the same grid as the ozone
model3 = climlab.BandRCModel(model='Band RCM', lev=lev, lat=lat)
insolation = climlab.radiation.DailyInsolation(domains=model3.Ts.domain)
model3.add_subprocess('insolation', insolation)
model3.subprocess.SW.flux_from_space = insolation.insolation
print(model3)
# Put in the ozone
model3.absorber_vmr['O3'] = O3_zon.transpose()
print(model3.absorber_vmr['O3'].shape)
print(model3.Tatm.shape)
model3.step_forward()
model3.integrate_years(1.)
model3.integrate_years(1.)
plot_temp_section(model3)
Explanation: Testing out multi-dimensional Band Models
End of explanation
print(model2)
diffmodel = climlab.process_like(model2)
diffmodel.name = "RCM with heat transport"
# thermal diffusivity in W/m**2/degC
D = 0.05
# meridional diffusivity in m**2/s
K = D / diffmodel.Tatm.domain.heat_capacity[0] * const.a**2
print(K)
d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm': diffmodel.Tatm}, **diffmodel.param)
diffmodel.add_subprocess('diffusion', d)
print(diffmodel)
diffmodel.step_forward()
diffmodel.integrate_years(1)
diffmodel.integrate_years(1)
plot_temp_section(model2)
plot_temp_section(diffmodel)
Explanation: This is now working. Will need to do some model tuning.
And start to add dynamics!
Adding meridional diffusion!
End of explanation
def inferred_heat_transport( energy_in, lat_deg ):
'''Returns the inferred heat transport (in PW) by integrating the net energy imbalance from pole to pole.'''
from scipy import integrate
from climlab import constants as const
lat_rad = np.deg2rad( lat_deg )
return ( 1E-15 * 2 * np.math.pi * const.a**2 * integrate.cumtrapz( np.cos(lat_rad)*energy_in,
x=lat_rad, initial=0. ) )
# Plot the northward heat transport in this model
Rtoa = np.squeeze(diffmodel.timeave['ASR'] - diffmodel.timeave['OLR'])
plt.plot(diffmodel.lat, inferred_heat_transport(Rtoa, diffmodel.lat))
plt.grid()
Explanation: This works as long as K is a constant.
The diffusion operation is broadcast over all vertical levels without any special code.
End of explanation
diffband = climlab.process_like(model3)
diffband.name = "Band RCM with heat transport"
d = climlab.dynamics.MeridionalDiffusion(K=K, state={'Tatm': diffband.Tatm}, **diffband.param)
diffband.add_subprocess('diffusion', d)
print(diffband)
diffband.integrate_years(1)
diffband.integrate_years(1)
plot_temp_section(model3)
plot_temp_section(diffband)
plt.plot(diffband.lat, diffband.timeave['ASR'] - diffband.timeave['OLR'])
# Plot the northward heat transport in this model
Rtoa = np.squeeze(diffband.timeave['ASR'] - diffband.timeave['OLR'])
plt.plot(diffband.lat, inferred_heat_transport(Rtoa, diffband.lat))
Explanation: Band model with diffusion
End of explanation |
3,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification - MNIST dataset
Exploring the popular MNIST dataset.
Tensorflow provides a function to ingest the data.
Step1: A little exploration
Step2: Lets look at a random image and its label
Step3: Logistic Regression - Softmax
Now let's build a softmax classifier (linear) to classify MNIST images. We will use Mini-batch gradient descent for optimization
First, declare some of the hyperparameters that will be used by our softmax
Step4: Step 1
Step5: Step 2
Step6: Step 3
Step7: Step 5
Step8: Step 6
Step9: Now lets run our graph as usual | Python Code:
# Necessary imports
import time
from IPython import display
import numpy as np
from matplotlib.pyplot import imshow
from PIL import Image, ImageOps
import tensorflow as tf
%matplotlib inline
from tensorflow.examples.tutorials.mnist import input_data
# Read the mnist dataset
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
Explanation: Classification - MNIST dataset
Exploring the popular MNIST dataset.
Tensorflow provides a function to ingest the data.
End of explanation
# Explore mnist
print("Shape of MNIST Images.\nShape = (num_examples * num_features/pixels)\n")
print("Train : ", mnist.train.images.shape)
print("Validation : ", mnist.validation.images.shape)
print("Train : ", mnist.test.images.shape)
print("-"*25)
print("Shape of MNIST Labels.\nShape = (num_examples * num_labels/classes)\n")
print("Train : ", mnist.train.labels.shape)
print("Validation : ", mnist.validation.labels.shape)
print("Train : ", mnist.test.labels.shape)
Explanation: A little exploration
End of explanation
# Pull out a random image & its label
random_image_index = 200
random_image = mnist.train.images[random_image_index]
random_image_label = mnist.train.labels[random_image_index]
# Print the label and the image as grayscale
print("Image label: %d"%(random_image_label.argmax()))
pil_image = Image.fromarray(((random_image.reshape(28,28)) * 256).astype('uint8'), "L")
imshow(ImageOps.invert(pil_image), cmap='gray')
Explanation: Let's look at a random image and its label
End of explanation
# Softmax hyperparameters
learning_rate = 0.5
training_epochs = 5
batch_size = 100
Explanation: Logistic Regression - Softmax
Now let's build a softmax classifier (linear) to classify MNIST images. We will use Mini-batch gradient descent for optimization
First, declare some of the hyperparameters that will be used by our softmax
End of explanation
# Create placeholders
x = tf.placeholder(tf.float32, shape=(None, 784))
y = tf.placeholder(tf.float32, shape=(None, 10))
Explanation: Step 1: Create placeholders to hold the images.
Using None for a dimension in shape means it can be any number.
End of explanation
# Model parameters that have to be learned
# Initialize with zeros
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
Explanation: Step 2: Create variables to hold the weight matrix and the bias vector
End of explanation
# Get all the logits i.e. W * X + b for each of the class
logits = tf.matmul(x, W) + b
# Take a softmax of the logits.
y_predicted = tf.nn.softmax(logits)
# Make sure you reduce the sum across columns.
# The y_predicted has a shape of number_of_examples * 10
# Cross entropy should first sum across columns to get individual cost and then average this error over all examples
cross_entropy_loss = tf.reduce_mean(- tf.reduce_sum(y * tf.log(y_predicted ), axis=1))
# This can apparently be numerically unstable.
# Tensorflow provides a function that computes the logits, applies softmax and computes the cross entropy
# The example above is split only for pedagogical purposes
# logits = tf.matmul(x, W) + b
# cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
Explanation: Step 3: Let's compute the label distribution. Apply the linear function W * X + b for each of the 10 classes. Then apply the softmax function to get a probability distribution of likelihoods over the classes.
Recall that softmax(x)_i = exp(x_i) / sum_j exp(x_j), where i and j range over the classes.
Step 4: Compute the loss function as the cross entropy between the predicted distribution of the labels and the true distribution.
Cross entropy H = - sum_i( true_dist(i) * log(computed_dist(i)) )
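For example, softmax([2, 1, 0]) ≈ [0.665, 0.245, 0.090]; if the true class is the first one, the cross entropy for that example is -log(0.665) ≈ 0.41 (natural log, matching tf.log).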
End of explanation
# Create an optimizer with the learning rate
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# Use the optimizer to minimize the loss
train_step = optimizer.minimize(cross_entropy_loss)
Explanation: Step 5: Let's create a gradient descent optimizer to minimize the cross entropy loss
End of explanation
# First create the correct prediction by taking the maximum value from the prediction class
# and checking it with the actual class. The result is a boolean column vector
correct_predictions = tf.equal(tf.argmax(y_predicted, 1), tf.argmax(y, 1))
# Calculate the accuracy over all the images
# Cast the boolean vector into float (1s & 0s) and then compute the average.
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))
Explanation: Step 6: Let's compute the accuracy
End of explanation
# Initializing global variables
init = tf.global_variables_initializer()
# Create a session to run the graph
with tf.Session() as sess:
# Run initialization
sess.run(init)
# For the set number of epochs
for epoch in range(training_epochs):
# Compute the total number of batches
num_batches = int(mnist.train.num_examples/batch_size)
# Iterate over all the examples (1 epoch)
for batch_num in range(num_batches):
# Get a batch of examples
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# Now run the session
curr_loss, cur_accuracy, _ = sess.run([cross_entropy_loss, accuracy, train_step],
feed_dict={x: batch_xs, y: batch_ys})
if batch_num % 50 == 0:
display.clear_output(wait=True)
time.sleep(0.1)
# Print the loss
print("Epoch: %d/%d. Batch #: %d/%d. Current loss: %.5f. Train Accuracy: %.2f"
%(epoch, training_epochs, batch_num, num_batches, curr_loss, cur_accuracy))
# Run the session to compute the value and print it
test_accuracy = sess.run(accuracy,
feed_dict={x: mnist.test.images,
y: mnist.test.labels})
print("Test Accuracy: %.2f"%test_accuracy)
Explanation: Now let's run our graph as usual
End of explanation |
3,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
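A small helper like the one below (hypothetical, not part of the project template) can invert the standardization later, mirroring what the notebook does manually with mean and std when plotting predictions.
def unscale(values, feature, scalings=None):
    # Undo the zero-mean / unit-std scaling using the factors stored above.
    if scalings is None:
        scalings = scaled_features
    mean, std = scalings[feature]
    return values * std + mean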
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
# Hidden layer activation function is the sigmoid function f(x) = 1/(1 + exp(-x))
self.activation_function = lambda x: 1/ (1 + np.exp(-x))
self.activation_derivative = lambda x: x * (1 - x)
# Output layer activation function is f(x) = x
self.output_activation_function = lambda x: x
self.output_activation_derivative = lambda x: 1
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin = 2).T
### Forward pass ###
# signals into hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
# signals from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# signals into final output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals from final output layer
final_outputs = self.output_activation_function(final_inputs)
### Backward pass ###
# Output layer error is the difference between desired target and actual output.
error = targets - final_outputs
output_errors = error * self.output_activation_derivative(final_inputs)
# errors (back-)propagated to the hidden layer
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output)
# hidden layer gradients
hidden_grad = self.activation_derivative(hidden_outputs)
# update hidden-to-output weights with gradient descent step
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
# update input-to-hidden weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors.T * hidden_grad, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
# signals into hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
# signals from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# signals into final output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
# signals from final output layer
final_outputs = self.output_activation_function(final_inputs)
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
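One optional way to convince yourself the forward pass is wired correctly (this check is a sketch and is not part of the project rubric; all numbers below are made up) is to fix the weights by hand and compare run() against a direct NumPy computation of sigmoid(W_ih·x) followed by W_ho·h.
check_net = NeuralNetwork(3, 2, 1, 0.1)
check_net.weights_input_to_hidden = np.array([[0.1, 0.2, 0.3],
                                              [0.4, 0.5, 0.6]])
check_net.weights_hidden_to_output = np.array([[0.7, 0.8]])
check_inputs = [1.0, 2.0, 3.0]
manual_hidden = 1 / (1 + np.exp(-check_net.weights_input_to_hidden.dot(np.array(check_inputs).reshape(-1, 1))))
manual_output = check_net.weights_hidden_to_output.dot(manual_hidden)
print(np.allclose(check_net.run(check_inputs), manual_output))  # expect True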
import sys
### Set the hyperparameters here ###
epochs = 6000
learning_rate = 0.01
hidden_nodes = 28
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
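If you want a rough feel for how the hidden-layer size affects validation error before committing to long runs, a short comparison loop like this sketch can help (assumed, not part of the original project; it reuses the same batching code with far fewer iterations, so the numbers are only indicative).
for trial_hidden in [8, 16, 28]:
    trial_net = NeuralNetwork(N_i, trial_hidden, 1, learning_rate)
    for _ in range(500):  # deliberately short run, just for a coarse comparison
        batch = np.random.choice(train_features.index, size=128)
        for record, target in zip(train_features.ix[batch].values,
                                  train_targets.ix[batch]['cnt']):
            trial_net.train(record, target)
    trial_val = MSE(trial_net.run(val_features), val_targets['cnt'].values)
    print('hidden nodes: {} -> validation MSE: {:.3f}'.format(trial_hidden, trial_val))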
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model does fairly well predicting the Bikeshare data in D.C. until around December 22, and it begins to do well again around December 27. It starts to fail around December 22 because that is the start of Christmas week. Although the dataset includes a "holiday" variable, there is no discrete indicator for Christmas week -- i.e., the effect of Christmas on the data extends past the 25th. This period could also coincide with other holidays (e.g., Hanukkah, Kwanzaa) that may or may not have a similar effect and that do not occur on a single day (e.g., Hanukkah). An improvement to the model would take the aforementioned "holiday effect" into account.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
3,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was created by Sergey Tomin for Workshop
Step1: Tutorials
Preliminaries
Step2: <a id="tutorial1"></a>
Tutorial N1. Double Bend Achromat.
We designed a simple lattice to demonstrate the basic concepts and syntax of the optics functions calculation.
Also, we chose DBA to demonstrate the periodic solution for the optical functions calculation.
Step3: Creating lattice
Ocelot has the following elements
Step4: hint
Step5: Optical function calculation
Uses | Python Code:
from IPython.display import Image
#Image(filename='gui_example.png')
Explanation: This notebook was created by Sergey Tomin for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016.
An Introduction to Ocelot
Ocelot is a multiphysics simulation toolkit designed for studying FEL and storage-ring-based light sources. Ocelot is written in Python. Its central concept is writing Python scripts for simulations using Ocelot's modules and functions together with the standard Python libraries.
Ocelot includes following main modules:
* Charged particle beam dynamics module (CPBD)
- optics
- tracking
- matching
- collective effects (description can be found here )
- Space Charge (true 3D Laplace solver)
- CSR (Coherent Synchrotron Radiation) (1D model with arbitrary number of dipoles) (under development).
- Wakefields (Taylor expansion up to second order for arbitrary geometry).
- MOGA (Multi-Objective Genetic Algorithm). (under development, but we have already applied it to a storage ring application)
* Native module for spontaneous radiation calculation
* FEL calculations: interface to GENESIS and pre/post-processing
* Modules for online beam control and online optimization of accelerator performances. Work1, work2, work3, work4.
Ocelot extensively uses Python's NumPy (Numerical Python) and SciPy (Scientific Python) libraries, which enable efficient in-core numerical and scientific computation within Python and give you access to various mathematical and optimization techniques and algorithms. To produce high quality figures Python's matplotlib library is used.
It is an open source project and it is being developed by physicists from The European XFEL, DESY (Germany), NRC Kurchatov Institute (Russia).
We still have no documentation but you can find a lot of examples in ocelot/demos/
Ocelot user profile
Ocelot is designed for researchers who want to have the flexibility that is given by high-level languages such as Matlab, Python (with Numpy and SciPy) or Mathematica.
However, if someone needs a GUI, it can be developed using Python libraries like PyQtGraph or PyQt.
For example, you can see GUI for SASE optimization (uncomment and run next block)
End of explanation
import IPython
print('IPython:', IPython.__version__)
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import ocelot
print('ocelot:', ocelot.__version__)
Explanation: Tutorials
Preliminaries: Setup & introduction
Beam dynamics
Tutorial N1. Linear optics.. Web version.
Linear optics. Double Bend Achromat (DBA). A simple example of using OCELOT functions to get a periodic solution for a storage ring cell.
Tutorial N2. Tracking.. Web version.
Linear optics of the European XFEL Injector.
Tracking. First and second order.
Tutorial N3. Space Charge.. Web version.
Tracking through RF cavities with SC effects and RF focusing.
Tutorial N4. Wakefields.. Web version.
Tracking through corrugated structure (energy chirper) with Wakefields
Tutorial N5. CSR.. Web version.
Tracking through a bunch compressor with the CSR effect.
Tutorial N6. RF Coupler Kick.. Web version.
Coupler Kick. Example of the RF coupler kick's influence on trajectory and optics.
Tutorial N7. Lattice design.. Web version.
Lattice design, twiss matching, twiss backtracking
Preliminaries
The tutorial includes 7 simple examples dedicated to beam dynamics and optics. However, you should have a basic understanding of computer programming terminology. A basic understanding of the Python language is a plus.
This tutorial requires the following packages:
Python 3.4-3.6 (python 2.7 can work as well)
numpy version 1.8 or later: http://www.numpy.org/
scipy version 0.15 or later: http://www.scipy.org/
matplotlib version 1.5 or later: http://matplotlib.org/
ipython version 2.4 or later, with notebook support: http://ipython.org
Optional to speed up python
- numexpr (version 2.6.1)
- pyfftw (version 0.10)
The easiest way to get these is to download and install the (very large) Anaconda software distribution.
Alternatively, you can download and install miniconda.
The following command will install all required packages:
$ conda install numpy scipy matplotlib ipython-notebook
Ocelot installation
You have to download the zip file from GitHub.
Unzip ocelot-master.zip to your working folder /your_working_dir/.
Rename folder ../your_working_dir/ocelot-master to /your_working_dir/ocelot.
Add ../your_working_dir/ to PYTHONPATH
Windows 7: go to Control Panel -> System and Security -> System -> Advanced System Settings -> Environment Variables.
and in User variables add /your_working_dir/ to PYTHONPATH. If variable PYTHONPATH does not exist, create it
Variable name: PYTHONPATH
Variable value: ../your_working_dir/
- Linux:
$ export PYTHONPATH=/your_working_dir/:$PYTHONPATH
To launch "ipython notebook" or "jupyter notebook"
in command line run following commands:
$ ipython notebook
or
$ ipython notebook --notebook-dir="path_to_your_directory"
or
$ jupyter notebook --notebook-dir="path_to_your_directory"
Checking your installation
You can run the following code to check the versions of the packages on your system:
(in IPython notebook, press shift and return together to execute the contents of a cell)
End of explanation
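After setting PYTHONPATH you can verify from Python itself that the interpreter sees your working directory; this is an optional check that is not part of the tutorial, and the string matched below is just the placeholder directory name used above.
import os, sys
print(os.environ.get('PYTHONPATH'))  # should include /your_working_dir/
print(any('your_working_dir' in p for p in sys.path))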
from __future__ import print_function
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
Explanation: <a id="tutorial1"></a>
Tutorial N1. Double Bend Achromat.
We designed a simple lattice to demonstrate the basic concepts and syntax of the optics functions calculation.
Also, we chose DBA to demonstrate the periodic solution for the optical functions calculation.
End of explanation
# defining of the drifts
D1 = Drift(l=2.)
D2 = Drift(l=0.6)
D3 = Drift(l=0.3)
D4 = Drift(l=0.7)
D5 = Drift(l=0.9)
D6 = Drift(l=0.2)
# defining of the quads
Q1 = Quadrupole(l=0.4, k1=-1.3)
Q2 = Quadrupole(l=0.8, k1=1.4)
Q3 = Quadrupole(l=0.4, k1=-1.7)
Q4 = Quadrupole(l=0.5, k1=1.3)
# defining of the bending magnet
B = Bend(l=2.7, k1=-.06, angle=2*pi/16., e1=pi/16., e2=pi/16.)
# defining of the sextupoles
SF = Sextupole(l=0.01, k2=1.5) #random value
SD = Sextupole(l=0.01, k2=-1.5) #random value
# cell creating
cell = (D1, Q1, D2, Q2, D3, Q3, D4, B, D5, SD, D5, SF, D6, Q4, D6, SF, D5, SD, D5, B, D4, Q3, D3, Q2, D2, Q1, D1)
Explanation: Creating lattice
Ocelot has the following elements: Drift, Quadrupole, Sextupole, Octupole, Bend, SBend, RBend, Edge, Multipole, Hcor, Vcor, Solenoid, Cavity, Monitor, Marker, Undulator.
End of explanation
lat = MagneticLattice(cell)
# to see the total length of the lattice
print("length of the cell: ", lat.totalLen, "m")
Explanation: hint: to see a short description of a function, put the cursor inside its parentheses and press Shift-Tab, or type ? before the function name. To extend the dialog window, press +.
The cell is a list of simple objects that contain the physical information of the lattice elements, such as length, strength, voltage and so on. In order to create a transport map for every element and bind it with a lattice object, we have to create a new Ocelot object - MagneticLattice() - which does these things automatically.
MagneticLattice(sequence, start=None, stop=None, method=MethodTM()):
* sequence - list of the elements,
other parameters we will consider in tutorial N2.
End of explanation
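As an optional cross-check of the cell definition (a sketch, assuming each element keeps its length in the attribute l, as set through the constructors above), the total length can also be summed by element type and compared with lat.totalLen.
from collections import defaultdict
length_by_type = defaultdict(float)
for element in cell:
    length_by_type[type(element).__name__] += element.l
print(dict(length_by_type))
print('sum of element lengths:', sum(length_by_type.values()))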
tws=twiss(lat)
# to see the twiss parameters at the beginning of the cell, uncomment the next line
# print(tws[0])
# to see the twiss parameters at the end of the cell
print(tws[-1])
# plot optical functions.
plot_opt_func(lat, tws, top_plot = ["Dx", "Dy"], legend=False, font_size=10)
plt.show()
# you also can use standard matplotlib functions for plotting
#s = [tw.s for tw in tws]
#bx = [tw.beta_x for tw in tws]
#plt.plot(s, bx)
#plt.show()
# you can play with quadrupole strength and try to make achromat
Q4.k1 = 1.18
# to make achromat uncomment next line
# Q4.k1 = 1.18543769836
# To use matching function, please see ocelot/demos/ebeam/dba.py
# updating transfer maps after changing element parameters.
lat.update_transfer_maps()
# recalculate twiss parameters
tws=twiss(lat, nPoints=1000)
plot_opt_func(lat, tws, legend=False)
plt.show()
Explanation: Optical function calculation
Uses:
* twiss() function and,
* Twiss() object contains twiss parameters and other information at one certain position (s) of lattice
To calculate twiss parameters you have to run twiss(lattice, tws0=None, nPoints=None) function. If you want to get a periodic solution leave tws0 by default.
You can change the number of points over the cell. If nPoints=None, the twiss parameters are calculated at the end of each element.
twiss() function returns list of Twiss() objects.
You will see the Twiss object contains more information than just twiss parameters.
End of explanation |
3,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2016 NHL Hockey Data Set - Sasha Kandrach
Step1: Here's all of our data
Step2: Here are each of the columns in the data set
Step3: Let's count how many players are from each country
Step4: Let's count how many players are from each country
Step5: And now let's look at the ten most common birth cities
Step6: Let's look at how many of those Toronto-born players were drafted before 2006
Step7: Let's look at how many of those Edmonton-born players were drafted before 2006
Step8: Let's look at how many of those Minneapolis-born players were drafted before 2006
Step9: Concussions...that's always a fun topic. Let's look at the players from each country that reported a concussion. We'll start with the United States
Step10: Hmmm... only two reported concussions in professional hockey?! highly doubtful...let's look at the injuries that were reported as 'Undisclosed' and call them mystery injuries
Step11: Let's look at Canada's reported concussions
Step12: Hmmm...not a lot either. Let's look at the "undisclosed" injuries that were reported
Step13: Switzerland Concussions
Step14: Switzerland "Undisclosed Injuries"
Step15: Sweden Concussions
Step16: Sweden "Undisclosed" Injuries
Step17: Germany Concussions
Step18: Germany "Undisclosed" Injuries
Step19: Czech Republic Concussions
Step20: Czech Republic "Undisclosed Injuries"
Step21: Russia Concussions
Step22: Russia "Undisclosed Injuries"
Step23: Lithuania Concussions
Step24: Lithuania "Undisclosed Injuries"
Step25: Norway Concussions
Step26: Norway "Undisclosed" Injuries
Step27: Let's look at how old the players are
Step28: Young Players (24 years old or younger) for the United States
Step29: Young Players (24 years old or younger) for Canada
Step30: Old Players (36 years old or older) for the United States
Step31: Old Players (36 years old or older) for Canada
Step32: Let's examine the correlation between height and weight
Step33: And a visual of the correlation...nice
Step34: Let's examine how many lefties versus righties (in shooting) each country has
Step35: Interesting...Canada has significantly more left-handed shooters (280) than right-handed shooters. Meanwhile, the USA is pretty even with 110 lefties and 107 righties.
Step36: Correlation between Country and Draft Year | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
!pip install xlrd  # xlrd is required by read_excel to parse .xls files
df = pd.read_excel("NHL 2014-15.xls")
df.columns.value_counts()
Explanation: 2016 NHL Hockey Data Set - Sasha Kandrach
End of explanation
df.head()
Explanation: Here's all of our data:
End of explanation
df.columns
Explanation: Here are each of the columns in the data set:
End of explanation
df['Ctry'].value_counts().head(10)
Explanation: Let's count how many players are from each country:
End of explanation
df['Nat'].value_counts().head(10)
Explanation: Now let's count players by nationality ('Nat'): the counts are basically the same as by birth country, but in some cases they differ slightly
End of explanation
df['Birth City'].value_counts().head(10)
Explanation: And now let's look at the ten most common birth cities
End of explanation
df[(df['Birth City'] == 'Toronto') & (df['Draft'] < 2006.0)].head()
Explanation: Let's look at how many of those Toronto-born players were drafted before 2006
End of explanation
df[(df['Birth City'] == 'Edmonton') & (df['Draft'] < 2006.0)].head()
Explanation: Let's look at how many of those Edmonton-born players were drafted before 2006
End of explanation
df[(df['Birth City'] == 'Minneapolis') & (df['Draft'] < 2006.0)].head()
Explanation: Let's look at how many of those Minneapolis-born players were drafted before 2006
End of explanation
usa_concussion = df[(df['Ctry'] == 'USA') & (df['Injury'] == 'Concussion')]
usa_concussion[["First Name", "Last Name"]]
Explanation: Concussions...that's always a fun topic. Let's look at the players from each country that reported a concussion. We'll start with the United States:
End of explanation
usa_mystery_injury = df[(df['Ctry'] == 'USA') & (df['Injury'] == 'Undisclosed')]
usa_mystery_injury[["First Name", "Last Name"]]
usa_concussion
Explanation: Hmmm... only two reported concussions in professional hockey?! highly doubtful...let's look at the injuries that were reported as 'Undisclosed' and call them mystery injuries:
End of explanation
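Rather than repeating the same filter country by country below, the whole picture can also be pulled in one aggregated table; this is a sketch using the same 'Ctry' and 'Injury' columns already used above.
injury_summary = (df[df['Injury'].isin(['Concussion', 'Undisclosed'])]
                  .groupby(['Ctry', 'Injury'])
                  .size()
                  .unstack(fill_value=0))
injury_summary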
can_concussion = df[(df['Ctry'] == 'CAN') & (df['Injury'] == 'Concussion')]
can_concussion[["First Name", "Last Name"]]
Explanation: Let's look at Canada's reported concussions:
End of explanation
can_mystery_injury = df[(df['Ctry'] == 'CAN') & (df['Injury'] == 'Undisclosed')]
can_mystery_injury[["First Name", "Last Name"]]
Explanation: Hmmm...not a lot either. Let's look at the "undisclosed" injuries that were reported:
End of explanation
che_concussion = df[(df['Ctry'] == 'CHE') & (df['Injury'] == 'Concussion')]
che_concussion[["First Name", "Last Name"]]
Explanation: Switzerland Concussions:
End of explanation
che_mystery_injury = df[(df['Ctry'] == 'CHE') & (df['Injury'] == 'Undisclosed')]
che_mystery_injury[["First Name", "Last Name"]]
Explanation: Switzerland "Undisclosed Injuries"
End of explanation
swe_concussion = df[(df['Ctry'] == 'SWE') & (df['Injury'] == 'Concussion')]
swe_concussion[["First Name", "Last Name"]]
Explanation: Sweden Concussions:
End of explanation
swe_mystery_injury = df[(df['Ctry'] == 'SWE') & (df['Injury'] == 'Undisclosed')]
swe_mystery_injury[["First Name", "Last Name"]]
Explanation: Sweden "Undisclosed" Injuries
End of explanation
deu_concussion = df[(df['Ctry'] == 'DEU') & (df['Injury'] == 'Concussion')]
deu_concussion[["First Name", "Last Name"]]
Explanation: Germany Concussions:
End of explanation
deu_mystery_injury = df[(df['Ctry'] == 'DEU') & (df['Injury'] == 'Undisclosed')]
deu_mystery_injury[["First Name", "Last Name"]]
Explanation: Germany "Undisclosed" Injuries:
End of explanation
cze_concussion= df[(df['Ctry'] == 'CZE') & (df['Injury'] == 'Concussion')]
cze_concussion[["First Name", "Last Name"]]
Explanation: Czech Republic Concussions:
End of explanation
cze_mystery_injury = df[(df['Ctry'] == 'CZE') & (df['Injury'] == 'Undisclosed')]
cze_mystery_injury[["First Name", "Last Name"]]
Explanation: Czech Republic "Undisclosed Injuries"
End of explanation
rus_concussion = df[(df['Ctry'] == 'RUS') & (df['Injury'] == 'Concussion')]
rus_concussion[["First Name", "Last Name"]]
Explanation: Russia Concussions:
End of explanation
rus_mystery_injury = df[(df['Ctry'] == 'RUS') & (df['Injury'] == 'Undisclosed')]
rus_mystery_injury[["First Name", "Last Name"]]
Explanation: Russia "Undisclosed Injuries"
End of explanation
ltu_concussion = df[(df['Ctry'] == 'LTU') & (df['Injury'] == 'Concussion')]
ltu_concussion[["First Name", "Last Name"]]
Explanation: Lithuania Concussions
End of explanation
ltu_mystery_injury = df[(df['Ctry'] == 'LTU') & (df['Injury'] == 'Undisclosed')]
ltu_mystery_injury[["First Name", "Last Name"]]
Explanation: Lithuania "Undisclosed Injuries"
End of explanation
nor_concussion = df[(df['Ctry'] == 'NOR') & (df['Injury'] == 'Concussion')]
nor_concussion[["First Name", "Last Name"]]
Explanation: Norway Concussions
End of explanation
nor_mystery_injury = df[(df['Ctry'] == 'NOR') & (df['Injury'] == 'Undisclosed')]
nor_mystery_injury[["First Name", "Last Name"]]
df
Explanation: Norway "Undisclosed" Injuries
End of explanation
birthdate = df['DOB']  # raw date-of-birth strings; the two-digit year follows an apostrophe
birthdate.head()
df['birthyear'] = df['DOB'].astype(str).str.split("'").str.get(1).astype(int)
df
Explanation: Let's look at how old the players are:
End of explanation
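With the two-digit birth year extracted, a rough age for the 2015-16 season can be derived directly; this sketch adds a hypothetical approx_age column and assumes every two-digit year falls in the 1900s, which holds for this roster.
df['approx_age'] = 2016 - (1900 + df['birthyear'])
df['approx_age'].describe()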
young_usa_players = df[(df['Ctry'] == 'USA') & (df['birthyear'] >= 94 )]
young_usa_players[["First Name", "Last Name"]]
Explanation: Young Players (24 years old or younger) for the United States:
End of explanation
young_can_players = df[(df['Ctry'] == 'CAN') & (df['birthyear'] >= 94 )]
young_can_players[["First Name", "Last Name"]]
Explanation: Young Players (24 years old or younger) for Canada:
End of explanation
old_usa_players = df[(df['Ctry'] == 'USA') & (df['birthyear'] <= 80 )]
old_usa_players[["First Name", "Last Name"]]
Explanation: Old Players (36 years old or older) for the United States:
End of explanation
old_can_players = df[(df['Ctry'] == 'CAN') & (df['birthyear'] <= 80 )]
old_can_players[["First Name", "Last Name"]]
Explanation: Old Players (36 years old or older) for Canada:
End of explanation
df['HT'].describe()
df['Wt'].describe()
Explanation: Let's examine the correlation between height and weight
End of explanation
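The describe() calls above show the spread of each variable; the actual linear correlation between height and weight can be read off directly with corr() (a sketch using the same HT and Wt columns).
df[['HT', 'Wt']].corr()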
plt.style.use('ggplot')
df.plot(kind='scatter', x='Wt', y='HT')
Explanation: And a visual of the correlation...nice:
End of explanation
df['S'].value_counts()
df.groupby(['Ctry', 'S']).agg(['count'])
Explanation: Let's examine how many lefties versus righties (in shooting) each country has:
End of explanation
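The same left/right breakdown is a bit easier to scan as a cross-tabulation; this sketch uses pandas' crosstab on the columns already grouped above.
pd.crosstab(df['Ctry'], df['S'])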
usa_left_shot = df[(df['Ctry'] == 'USA') & (df['S'] == 'L')]
usa_left_shot[["First Name", "Last Name"]]
can_left_shot = df[(df['Ctry'] == 'CAN') & (df['S'] == 'L')]
can_left_shot[["First Name", "Last Name"]]
usa_right_shot = df[(df['Ctry'] == 'USA') & (df['S'] == 'R')]
usa_right_shot[["First Name", "Last Name"]]
can_right_shot = df[(df['Ctry'] == 'CAN') & (df['S'] == 'R')]
can_right_shot[["First Name", "Last Name"]]
Explanation: Interesting...Canada has significantly more left-handed shooters (280) than right-handed shooters. Meanwhile, the USA is pretty even with 110 lefties and 107 righties.
End of explanation
plt.style.use('seaborn-deep')
df.head(5).plot(kind='bar', x='Ctry', y='Draft')
df
Explanation: Correlation between Country and Draft Year
End of explanation |
3,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This document describes the steps taken to standardize the data available from the Intercensal Survey and use it to build parameters. The cleaning is done using Python 3.
Of the parameters listed, P0602 cannot be obtained within the scope of this process, because the data on energy-saving light bulbs was not included by INEGI in its municipal aggregation of tabulated data. It would be possible to build this parameter from the microdata, which would require a separate process that recreates the methodology used by INEGI.
2 . Definitions
PCCS
Step1: The data is downloaded from INEGI's Beta site. The 2015 Intercensal Survey data can be found at http
Step2: The links are stored in a Python dictionary where key = 'state geostatistical code' and value = 'download link'. For example, '09' is the geostatistical code for Mexico City. If we ask the links dictionary for the value of the key '09', it returns the link for downloading Mexico City's housing indicators, as shown below
Step3: With the dictionary of links it is now possible to download the files to a local folder so they can be processed.
Step4: Every file has the same structure and contains the 2015 housing data collected in the Intercensal Survey. The first sheet, 'Índice', includes a list of the sheets and data contained in each workbook. This index will be used as the reference for the data mining
Step5: The 'Tabulado' column contains the sheet name, while 'Título' describes its data. The following sheets will be used to build the PCCS parameters
Step6: 2 . Run the function over all the Excel files to extract the data from sheet 02
Step7: The data was stored as a Python dictionary; it needs to be converted into a single DataFrame before the final cleanup.
Step8: 3 . Final cleanup of the 'Pisos' DataFrame
Step9: SHEETS 08, 09, 16, 19, 20, 21, 23, 24, 25, 26
As mentioned before, all the sheets follow a similar extraction process, with slight variations.
1 . Function to extract data from a typical sheet
For the rest of the files we will reuse the "cargahoja" function defined earlier
2 . Run the function over the Excel files
For the "cargahoja" function to iterate properly over all the files, we need to specify the variations of each sheet that the function will read. The main variations are the column names and the location of the headers in each sheet, so the data of each sheet can be extracted automatically once we identify how the script will handle each variation.
--- 2.1 . The columns for each sheet are defined below
Step10: --- 2.2 . Besides defining the columns, we need to define how many rows the script has to skip before finding the headers. These rows are defined below in a dictionary
Step11: 2 . Run the function over all the Excel files
Once the handling of each variation has been defined, the data is extracted by means of an iterative function into a Python dictionary.
Step12: The resulting dictionary contains the data of each sheet classified by state. However, the dictionary structure still needs to be processed to obtain standard dataframes.
Step13: 3 . Final cleanup of the DataFrames
Step14: Extraction of sheet-specific notes
Each sheet in the original dataset has notes at the end of the data block that should be kept in mind when interpreting the data. The notes are flagged for each municipality with a superscript number or with one or two asterisks. Since each sheet contains different information, it is worth verifying the note nomenclature used in each sheet. If the nomenclature turned out to differ for the same note across states within one sheet, a new column would have to be added to the standard datasets to record the note that applies to each case.
The following process will be used to verify the notes of each sheet
Step15: 2 . Run the function over each sheet and extract the notes
Step16: Once all the notes have been extracted from the sheets, an iterative script is run to check in which notes the nomenclature varies. This script works as follows
Step17: Thanks to this script we can see that the nomenclature is standard across all sheets, and it can be recorded as a note in the metadata of every standard dataset.
Saving the standard datasets
The dataframes obtained through the previous processes are saved as OpenXML (.xlsx) workbooks so they are easy to read both by computer systems and by people. Each workbook contains 2 sheets
Step18: The dataset processed at the beginning of this study (corresponding to sheet 02) is added to the dictionary of standard datasets along with the rest.
Step19: The function for writing the standard datasets is the following
Step20: Once the dataset-writing function has been defined, it is run iteratively over the data
Step21: At the end of the process, 10 standard datasets were generated | Python Code:
# Librerias utilizadas
import pandas as pd
import sys
import urllib
import os
import numpy as np
# Configuracion del sistema
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
Explanation: This document describes the steps taken to standardize the data available from the Intercensal Survey and use it to build parameters. The cleaning is done using Python 3.
Of the parameters listed, P0602 cannot be obtained within the scope of this process, because the data on energy-saving light bulbs was not included by INEGI in its municipal aggregation of tabulated data. It would be possible to build this parameter from the microdata, which would require a separate process that recreates the methodology used by INEGI.
2 . Definitions
PCCS : Plataforma de Conocimiento de Ciudades Sustentables (Knowledge Platform for Sustainable Cities)
Dataset : A set of data about a single topic.
Source dataset : A dataset as it is available for download on the source's website.
Dataframe : A two-dimensional data structure made of rows holding cases and columns holding variables.
Standard dataset : A dataframe processed for use in the PCCS, labeled with INEGI's 5-digit municipal geostatistical code.
Cleaning of the 2015 Intercensal Survey dataset - Housing module
1 . Introduction
The following data, available from the housing module of INEGI's 2015 Intercensal Survey, were considered for building the indicators of the Knowledge Platform for Sustainable Cities:
ID |Description
---|:----------
P0101|Percentage of dwellings with piped water
P0102|Percentage of dwellings with discharge to a sewerage network.
P0402|Dwellings that use solar energy
P0403|Dwellings with drainage
P0404|Dwellings with dirt floors
P0602|Inhabited private dwellings with energy-saving light bulbs
P0603|Inhabited private dwellings with a water heater (boiler)
P0611|Dwellings that use firewood or charcoal for cooking
P0612|Dwellings that use firewood or charcoal for cooking and have a stove or hearth with a chimney
P0613|Inhabited dwellings that use gas for cooking
P1004|Form of waste disposal
P1010|Percentage of dwellings that reuse waste
P1011|Percentage of dwellings that separate their waste into organic and inorganic
3 . Data download
End of explanation
# LIGAS PARA DESCARGA DE ARCHIVOS
# Las ligas para descarga tienen una raiz URL común que cambia
# dependiendo del indicador y estado que se busque descargar
url = r'http://www.beta.inegi.org.mx/contenidos/Proyectos/enchogares/especiales/intercensal/2015/tabulados/'
indicador = r'14_vivienda_'
raiz = url+indicador
links = {
'01' : raiz+'ags.xls',
'02' : raiz+'bc.xls',
'03' : raiz+'bcs.xls',
'04' : raiz+'cam.xls',
'05' : raiz+'coah.xls',
'06' : raiz+'col.xls',
'07' : raiz+'chis.xls',
'08' : raiz+'chih.xls',
'09' : raiz+'cdmx.xls',
'10' : raiz+'dgo.xls',
'11' : raiz+'gto.xls',
'12' : raiz+'gro.xls',
'13' : raiz+'hgo.xls',
'14' : raiz+'jal.xls',
'15' : raiz+'mex.xls',
'16' : raiz+'mich.xls',
'17' : raiz+'mor.xls',
'18' : raiz+'nay.xls',
'19' : raiz+'nl.xls',
'20' : raiz+'oax.xls',
'21' : raiz+'pue.xls',
'22' : raiz+'qro.xls',
'23' : raiz+'qroo.xls',
'24' : raiz+'slp.xls',
'25' : raiz+'sin.xls',
'26' : raiz+'son.xls',
'27' : raiz+'tab.xls',
'28' : raiz+'tamps.xls',
'29' : raiz+'tlax.xls',
'30' : raiz+'ver.xls',
'31' : raiz+'yuc.xls',
'32' : raiz+'zac.xls'
}
Explanation: The data is downloaded from INEGI's Beta site. The 2015 Intercensal Survey data can be found at http://www.beta.inegi.org.mx/proyectos/enchogares/especiales/intercensal/
There are three ways to download the information:
Data for the whole Mexican Republic, which has the advantage of being a single file with processed variables and the disadvantage that its level of disaggregation is the state.
State-level data, which has the advantage of municipal-level disaggregation with interpreted variables and the disadvantage that the information is fragmented into many files, since there is one file per variable per state.
Microdata, which has the advantage of containing all the project's information in a few files and the disadvantage that it must be interpreted before obtaining values useful for the PCCS.
The most convenient way is to download the state-level data, since the first option would not provide data relevant to building the PCCS indicators and the third would require a great deal of time and effort to recreate the interpretation carried out by INEGI.
All the indicators to be used for building the PCCS are found in the Housing survey, so only that survey's data package will be downloaded
End of explanation
print(links['09'])
Explanation: The links are stored in a Python dictionary where key = 'state geostatistical code' and value = 'download link'. For example, '09' is the geostatistical code for Mexico City. If we ask the links dictionary for the value of the key '09', it returns the link for downloading Mexico City's housing indicators, as shown below:
End of explanation
# Descarga de archivos a carpeta local
destino = r'D:\PCCS\00_RawData\01_CSV\Intercensal2015\estatal\14. Vivienda'
archivos = {} # Diccionario para guardar memoria de descarga
for k,v in links.items():
archivo_local = destino + r'\{}.xls'.format(k)
if os.path.isfile(archivo_local):
print('Ya existe el archivo: {}'.format(archivo_local))
archivos[k] = archivo_local
else:
print('Descargando {} ... ... ... ... ... '.format(archivo_local))
urllib.request.urlretrieve(v, archivo_local) #
archivos[k] = archivo_local
print('se descargó {}'.format(archivo_local))
Explanation: With the dictionary of links it is now possible to download the files to a local folder so they can be processed.
End of explanation
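Before starting the extraction it is worth confirming that all 32 state files actually landed and are non-empty; this optional check is a sketch and is not part of the original workflow.
missing = [k for k, path in archivos.items()
           if not os.path.isfile(path) or os.path.getsize(path) == 0]
print('Missing or empty files:', missing if missing else 'none')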
pd.options.display.max_colwidth = 150
df = pd.read_excel(archivos['01'],
sheetname = 'Índice',
skiprows = 6,
usecols = ['Tabulado', 'Título'],
dtype = {'Tabulado' : 'str'},
).set_index('Tabulado')
df
Explanation: Every file has the same structure and contains the 2015 housing data collected in the Intercensal Survey. The first sheet, 'Índice', includes a list of the sheets and data contained in each workbook. This index will be used as the reference for the data mining:
End of explanation
# Funcion para extraer datos de hoja tipo
# La funcion espera los siguientes valores:
# --- entidad: [str] clave geoestadistica de entidad de 2 digitos
# --- ruta: [str] ruta al archivo de excel que contiene la información
# --- hoja: [str] numero de hoja dentro del archivo de excel que se pretende procesar
# --- colnames: [list] nombres para las columnas de datos (Las columnas en los archivos de este
# dataset requieren ser nombradas manualmente por la configuración de los
# encabezados en los archivo fuente)
# --- skip: [int] El numero de renglones en la hoja que el script tiene que ignorar para encontrar
# el renglon de encabezados.
def cargahoja(entidad, ruta, hoja, colnames, skip):
# Abre el archivo de excel
raw_data = pd.read_excel(ruta,
sheetname=hoja,
skiprows=skip).dropna()
# renombra las columnas
raw_data.columns = colnames
# Obten Unicamente las filas con valores estimativos
raw_data = raw_data[raw_data['Estimador'] == 'Valor']
# Crea la columna CVE_MUN
raw_data['CVE_ENT'] = entidad
raw_data['ID_MUN'] = raw_data.Municipio.str.split(' ', n=1).apply(lambda x: x[0])
raw_data['CVE_MUN'] = raw_data['CVE_ENT'].map(str) + raw_data['ID_MUN']
# Borra columnas con informacion irrelevante o duplicada
del (raw_data['CVE_ENT'])
del (raw_data['ID_MUN'])
del (raw_data['Entidad federativa'])
del (raw_data['Estimador'])
raw_data.set_index('CVE_MUN', inplace=True)
return raw_data
Explanation: The 'Tabulado' column contains the sheet name, while 'Título' describes its data. The following sheets will be used to build the PCCS parameters:
SHEET | PARAMETER | DESCRIPTION
----|----------|:-----------
24/25|P0101|Percentage of dwellings with piped water
26|P0102|Percentage of dwellings with discharge to a sewerage network.
26|P0403|Dwellings with drainage
02|P0404|Dwellings with dirt floors
23|P0603|Inhabited private dwellings with a water heater (boiler)
08|P0611|Dwellings that use firewood or charcoal for cooking
09|P0612|Dwellings that use firewood or charcoal for cooking and have a stove or hearth with a chimney
08|P0613|Inhabited dwellings that use gas for cooking
19|P1004|Form of waste disposal
21|P1010|Percentage of dwellings that reuse waste
20|P1011|Percentage of dwellings that separate their waste into organic and inorganic
16|P0601|Inhabited private dwellings with electricity
The following parameters can be obtained from other sources, but they are included in this mining exercise because they are also available for 2015 in this dataset.
SHEET | PARAMETER | DESCRIPTION
---- | ---------- | :-----------
23 | P0604 | Inhabited private dwellings with a solar water heater
23 | P0605 | Inhabited private dwellings with a photovoltaic panel
4 . Dataset standardization
Starting from the sheets identified and associated with parameters, we need to build a standard dataframe that is easy for computer systems to read and allows the PCCS parameters to be built. Each of the sheets described above arranges its variables differently and requires a different process, although the general sequence for every sheet will be the following:
1. Create a function to extract the data from a typical sheet
2. Run the function over each Excel file and gather the collected data into a single DataFrame
3. Final cleanup of the dataframe and saving
SHEET 02: Estimates of inhabited private dwellings and their percentage distribution by floor material and locality size
1 . Function to extract data from a typical sheet
End of explanation
# correr funcion sobre todos los archivos
colnames = ['Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Pisos_Tierra',
'Pisos_Cemento o firme',
'Pisos_Mosaico, madera u otro recubrimiento',
'Pisos_No especificado']
DatosPiso = {}
for k,v in archivos.items():
print('Procesando {}'.format(v))
hoja = cargahoja(k, v, '02', colnames, 7)
DatosPiso[k] = hoja
Explanation: 2 . Run the function over all the Excel files to extract the data from sheet 02
End of explanation
PisosDF = pd.DataFrame()
for k,v in DatosPiso.items():
PisosDF = PisosDF.append(v)
Explanation: The data was stored as a Python dictionary; it needs to be converted into a single DataFrame before the final cleanup.
End of explanation
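An equivalent consolidation that avoids DataFrame.append (which is deprecated in newer pandas versions) is a single pd.concat over the dictionary values; this alternative is a sketch and produces the same combined table.
PisosDF_alt = pd.concat(DatosPiso.values())
PisosDF_alt.shape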
PisosDF = PisosDF[PisosDF['Municipio'] != 'Total']
PisosDF.describe()
Explanation: 3 . Final cleanup of the 'Pisos' DataFrame: the dataframe is almost ready to be used for building indicators; we only need to remove a few "junk" rows that hold the per-municipality totals.
End of explanation
# Se define un diccionario con la siguiente sintaxis: 'NUMERO DE HOJA' : [LISTA DE COLUMNAS]
dicthojas = {
'08' : [ # Combustible utilizado para cocinar
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Cocina_con_Lena o carbon',
'Cocina_con_Gas',
'Cocina_con_Electricidad',
'Cocina_con_Otro_Combustible',
'Cocina_con_Los_ocupantes_no_cocinan',
'Cocina_con_no_especificado'
],
'09' : [ # Utilizan leña o carbón para cocinar y distribucion porcentual segun disponibilidad de estufa o fogon
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas en las que sus ocupantes utilizan leña o carbon para cocinar',
'Dispone_de_estufa_o_fogon_con_chimenea',
'No dispone_de_estufa_o_fogon_con_chimenea',
'Estufa_o_fogon_no_especificado'
],
'16' : [ # Viviendas con electricidad
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Disponen_de_electricidad',
'No_disponen_de_electricidad',
'No_especificado_de_electricidad'
],
'19' : [ # Forma de eliminación de residuos
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Entregan_residuos_a_servicio_publico_de_recoleccion',
'Tiran_residuos_en_basurero_publico_colocan_en_contenedor_o_deposito',
'Queman_residuos',
'Entierran_residuos_o_tiran_en_otro_lugar',
'Eliminacion_de_residuos_no_especificado',
],
'20' : [ # Viviendas que entregan sus residuos al servicio publico y distribucion porcentual por condición de separacion
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas en las que entregan los residuos al servicio publico',
'Separan_organicos_inorganicos',
'No_separan_organicos_inorganicos',
'Separan_residuos_No_especificado'
],
'21' : [ # Separación y reutilización de residuos
'Entidad federativa',
'Municipio',
'Forma de reutilizacion de residuos',
'Estimador',
'Viviendas particulares habitadas',
'Reutilizan_residuos',
'No_reutilizan_residuos',
'No_especificado_reutilizan_residuos',
],
    '23' : [ # Availability and type of household equipment
'Entidad federativa',
'Municipio',
'Tipo de equipamiento',
'Estimador',
'Viviendas particulares habitadas',
'Dispone_de_Equipamiento',
'No_dispone_de_Equipamiento',
'No_especificado_dispone_de_Equipamiento'
],
    '24' : [ # Availability of piped water, by availability and access
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Entubada_Total',
'Entubada_Dentro_de_la_vivienda',
'Entubada_Fuera_de_la_vivienda,_pero_dentro_del_terreno',
'Acarreo_Total',
'Acarreo_De_llave_comunitaria',
'Acarreo_De_otra_vivienda',
'Acarreo_De_una_pipa',
'Acarreo_De_un_pozo',
'Acarreo_De_un_río_arroyo_o_lago',
'Acarreo_De_la_recolección_de_lluvia',
'Acarreo_Fuente_No_especificada',
'Entubada_o_Acarreo_No_especificado'
],
    '25' : [ # Availability of piped water, by source of supply
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares que disponen de agua entubada',
'Agua_entubada_de_Servicio_Publico',
'Agua_entubada_de_Pozo_comunitario',
'Agua_entubada_de_Pozo_particular',
'Agua_entubada_de_Pipa',
'Agua_entubada_de_Otra_Vivienda',
'Agua_entubada_de_Otro_lugar',
'Agua_entubada_de_No_especificado'
],
    '26' : [ # Availability of drainage and discharge point
'Entidad federativa',
'Municipio',
'Estimador',
'Viviendas particulares habitadas',
'Drenaje_Total',
'Drenaje_desaloja_a_Red_publica',
'Drenaje_desaloja_a_Fosa_Septica_o_Tanque_Septico',
'Drenaje_desaloja_a_Barranca_o_Grieta',
'Drenaje_desaloja_a_Rio_lago_o_mar',
'No_Dispone_de_drenaje',
'Dispone_drenaje_No_especificado',
]
}
Explanation: SHEETS 08, 09, 16, 19, 20, 21, 23, 24, 25, 26
As mentioned above, all of these sheets follow a similar extraction process, with slight variations.
1. Function to extract the data from a template sheet
For the remaining files we reuse the cargahoja function defined earlier.
2. Run the function over the Excel files
For cargahoja to iterate properly over all the files, we need to specify how each sheet it reads differs from the others. The main variations are the column names and the position of the header row in each sheet, so the data can be extracted automatically once the script knows how to handle each variation.
--- 2.1. The columns for each sheet are defined below:
End of explanation
skiprows = {
    '02' : 7, # Floor material
    '08' : 7, # Fuel used for cooking
    '09' : 7, # Use firewood or charcoal for cooking, by availability of a stove or hearth
    '16' : 7, # Availability of electricity
    '19' : 7, # Method of waste disposal
    '20' : 8, # Hand waste to the public collection service, by separation practice
    '21' : 7, # Waste separation and reuse
    '23' : 7, # Availability and type of household equipment
    '24' : 8, # Availability of piped water, by availability and access
    '25' : 7, # Availability of piped water, by source of supply
    '26' : 8, # Availability of drainage and discharge point
}
Explanation: --- 2.2. Besides defining the columns, we need to specify how many rows the script must skip in each sheet before reaching the header row. These offsets are defined in the dictionary below:
End of explanation
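Because dicthojas and skiprows drive the same loop, a small assertion guards against a sheet being listed in one dictionary but missing from the other (skiprows also carries sheet '02', which was handled separately above, so the check only needs to run in this direction).
# Every sheet with a column list must also have a skiprows entry
faltantes = set(dicthojas) - set(skiprows)
assert not faltantes, 'Sheets without a skiprows entry: {}'.format(faltantes)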
HojasDatos = {}
for estado, archivo in archivos.items():
print('Procesando {}'.format(archivo))
for hoja, columnas in dicthojas.items():
print('---Procesando hoja {}'.format(hoja))
dataset = cargahoja(estado, archivo, hoja, columnas, skiprows[hoja])
if hoja not in HojasDatos.keys():
HojasDatos[hoja] = {}
HojasDatos[hoja][estado] = dataset
Explanation: --- 2.3. Run the function over all the Excel files
Once the handling of each variation has been defined, an iterative loop extracts the data into a Python dictionary.
End of explanation
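A quick look at the keys confirms the nesting produced by the loop before it is flattened in the next step.
# HojasDatos[sheet] is itself a dict of {state_code: DataFrame}
print(sorted(HojasDatos.keys()))
print(len(HojasDatos['08']), 'state frames stored for sheet 08')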
# Process the dictionaries into standard datasets
DSstandar = {}
for hoja, estado in HojasDatos.items():
print('Procesando hoja {}'.format(hoja))
tempDS = pd.DataFrame()
for cve_edo, datos in estado.items():
tempDS = tempDS.append(datos)
print('---Se agregó CVE_EDO {} a dataframe estandar'.format(cve_edo))
DSstandar[hoja] = tempDS
Explanation: The resulting dictionary holds the data of every sheet, grouped by state. The nested-dictionary structure still has to be processed into standard dataframes.
End of explanation
for hoja in DSstandar.keys():
temphoja = DSstandar[hoja]
temphoja = temphoja[temphoja['Municipio'] != 'Total']
DSstandar[hoja] = temphoja
Explanation: 3. Final cleanup of the dataframes: before the dataframes can be used for building indicators, we still have to drop a few leftover rows that carry the per-municipio totals.
End of explanation
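As with sheet 02, it is worth verifying that no aggregate rows survived in any of the standard frames.
# No standard frame should retain the per-municipio 'Total' rows
for hoja, frame in DSstandar.items():
    restantes = (frame['Municipio'] == 'Total').sum()
    assert restantes == 0, 'Sheet {} still holds {} Total rows'.format(hoja, restantes)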
# Function to extract the notes from a sheet
# It expects the following inputs:
# --- ruta: [str] Path to the source dataset file
# --- hoja: [str] Sheet number to read
# --- skip: [int] Number of rows the script has to skip in the sheet before reaching
#           the header row.
def getnotes(ruta, hoja, skip):
    tempDF = pd.read_excel(ruta, sheetname=hoja, skiprows=skip) # Load the dataframe temporarily
    c1 = tempDF['Unnamed: 0'].dropna() # Keep only column 1, which holds the notes, without NaN values
    c1.index = range(len(c1)) # Reindex the series to compensate for the dropped NaNs
    indice = c1[c1.str.contains('Nota')].index[0] # Find the row where the notes begin
    rows = range(indice, len(c1)) # Build a list of the rows that contain notes
    templist = c1.loc[rows].tolist() # Build a list with the notes
    notas = []
    for i in templist:
        notas.append(i.replace('\xa0', ' ')) # Store each note, replacing non-breaking spaces with plain spaces
    return notas
Explanation: Extracting the sheet-specific notes
Each sheet in the original dataset carries notes at the end of the data block that should be kept in mind when interpreting the data. The notes are marked for each municipio with a superscript number, or with one or two asterisks. Because each sheet holds different information, it is worth checking the wording the notes use in each sheet. If the wording of a given note differed between states within the same sheet, a new column would have to be added to the standard datasets to record which note applies to each case.
The notes of each sheet are checked with the following process:
1. Write a function to extract the notes
2. Run the function over every sheet and extract the notes
3. Check in which sheets the wording of the notes varies
1. Function to extract the notes
End of explanation
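Before running it over every workbook, the function can be tried on a single file; the state key below is illustrative and any key of archivos works.
# Example call on one workbook and one sheet (keys chosen arbitrarily)
un_estado = sorted(archivos.keys())[0]
print(getnotes(archivos[un_estado], '02', skiprows['02']))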
listanotas = {}
for archivo, ruta in archivos.items():
print('Procesando {} desde {}'.format(archivo, ruta))
    for hoja in skiprows.keys(): # The keys of the 'skiprows' dictionary serve as the list of sheets to process
if hoja not in listanotas.keys():
listanotas[hoja] = {}
        listanotas[hoja][archivo] = getnotes(ruta, hoja, skiprows[hoja])
Explanation: 2. Run the function over every sheet and extract the notes
End of explanation
notasunicas = [] # Start with an empty list
for hoja, notas_edo in listanotas.items(): # Iterate over the dictionary with all the notes
    for estado, notas in notas_edo.items(): # Iterate over each sheet's dictionary of states
        for nota in notas: # Iterate over each state's list of notes
            if nota not in notasunicas: # If the note is not in the list yet:
                print('Estado: {} / Hoja {} / : Nota: {}'.format(estado, hoja, nota)) # Print the note and where it was found
                notasunicas.append(nota) # Add the note to the list
for nota in notasunicas:
print(nota)
Explanation: Once all the notes have been pulled from the sheets, an iterative script checks in which of them the wording varies. It works as follows:
1. Start with an empty list of notes
2. Check each note against the list.
3. If the note is not in the list, add it to the list
End of explanation
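If only the set of distinct notes is needed, without reporting where each one first appears, the same result can be obtained with a set comprehension.
# Compact alternative: collect the distinct notes in a set (first-appearance order is lost)
notas_unicas_set = {nota
                    for notas_por_edo in listanotas.values()
                    for notas in notas_por_edo.values()
                    for nota in notas}
print(len(notas_unicas_set), 'distinct notes')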
# Definition of the metadata fields shared by all the sheets
metadatos = {
'Nombre del Dataset': 'Encuesta Intercensal 2015 - Tabulados de Vivienda',
'Descripcion del dataset': np.nan,
'Disponibilidad Temporal': '2015',
'Periodo de actualizacion': 'No Determinada',
'Nivel de Desagregacion': 'Municipal',
'Notas': 'Los límites de confianza se calculan al 90 por ciento.' \
'\n1 Excluye las siguientes clases de vivienda: locales no construidos para habitación, viviendas móviles y refugios.' \
'\n* Municipio censado.' \
'\n** Municipio con muestra insuficiente.',
'Fuente': 'INEGI (Microdatos)',
'URL_Fuente': 'http://www.beta.inegi.org.mx/proyectos/enchogares/especiales/intercensal/',
'Dataset base': np.nan,
}
Explanation: Thanks to this script we can see that the wording is consistent across all the sheets, so it can be recorded as a note in the metadata of every standard dataset.
Saving the standard datasets
The dataframes produced by the steps above are saved as OpenXML (.xlsx) workbooks so they can be read easily both by software and by people. Each workbook contains 2 sheets:
1. A metadata sheet
2. A data sheet, with estimates of inhabited private dwellings and their percentage distribution by:
SHEET | DESCRIPTION
--- | :---
02 | Floor material, by municipio
08 | Fuel used for cooking, by municipio
09 | Dwellings whose occupants use firewood or charcoal for cooking, and their percentage distribution by availability of a stove or hearth with a chimney, by municipio
19 | Method of waste disposal, by municipio
16 | Availability of electricity, by municipio
20 | Dwellings whose occupants hand their waste to the public collection service or place it in a container, and their percentage distribution
21 | Separation and reuse of waste, by municipio and form of reuse
23 | Availability of household equipment, by municipio and type of equipment
24 | Availability of piped water, by availability and access
25 | Availability of piped water, by source of supply
26 | Availability of drainage and discharge point, by municipio
Since these datasets come from the same source, they share several metadata fields; the shared fields are defined once and the sheet-specific fields are filled in by an iterative function.
End of explanation
DSstandar['02'] = PisosDF
Explanation: The dataset processed at the beginning of this study (corresponding to sheet 02) is added to the dictionary of standard datasets along with the others.
End of explanation
# Script for writing the standard datasets.
# The function expects the following values:
# --- hoja: (str) sheet number
# --- dataset: (Pandas DataFrame) data that goes into the sheet
# --- metadatos: (dict) metadata fields shared by all the sheets
# --- desc_hoja: (str) description of the sheet's contents
def escribedataset(hoja, dataset, metadatos, desc_hoja):
    # Compile the information
datasetbaseurl = r'https://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/EI2015'
directoriolocal = r'D:\PCCS\01_Dmine\Datasets\EI2015'
archivo = hoja + '.xlsx'
    tempmeta = dict(metadatos)  # Copy the shared metadata so the original dict is not mutated on each call
tempmeta['Descripcion del dataset'] = desc_hoja
tempmeta['Dataset base'] = '"' + archivo + '" disponible en \n' + datasetbaseurl
tempmeta = pd.DataFrame.from_dict(tempmeta, orient='index')
tempmeta.columns = ['Descripcion']
tempmeta = tempmeta.rename_axis('Metadato')
    # Write the standard dataset
destino = directoriolocal + '\\' + archivo
writer = pd.ExcelWriter(destino)
tempmeta.to_excel(writer, sheet_name ='METADATOS')
dataset.to_excel(writer, sheet_name = hoja)
writer.save()
print('Se guardó: "{}" en \n{}'.format(desc_hoja, destino))
Explanation: The function for writing the standard datasets is the following:
End of explanation
for hoja, dataset in DSstandar.items():
print('Procesando hoja '+hoja)
escribedataset(hoja, dataset, metadatos, df.loc[hoja][0])
Explanation: Once the writing function has been defined, it is run iteratively over the data:
End of explanation
for hoja in DSstandar.keys():
print('**{}.xlsx**|{}'.format(hoja, df.loc[hoja][0]))
Explanation: At the end of the process, 11 standard datasets were generated (the 10 sheets processed above plus sheet 02):
End of explanation |